
Reference

Auto-generated from the source via mkdocstrings. If something here looks wrong, fix the docstring in src/chill_out/.

Top-level package

chill_out

chill-out — manage cooldown for package dependencies to avoid zero-day supply chain vulnerabilities.

STATE_FILENAME module-attribute

STATE_FILENAME = '.chill-out-state.json'

Name of the state file at the project root.

AppliedFix dataclass

A single fix that was successfully written into the project's manifest.

Pairs the original FixAction with the literal value that landed in the manifest. The pinned_spec may differ from action.version for FixStyle.COMPATIBLE (e.g. "^1.2.3" for npm or "foo>=1.0,<2.0.0" for pypi); recording the literal value is what makes drift detection on the next fix run possible. manifest_path records where the entry landed, relative to the ecosystem's project root, so the next run can revisit the exact same site.

AppliedFixes dataclass

Structured outcome of an apply_fixes or apply_override_fixes call.

entries holds one AppliedFix per action that was actually written, in the order they were applied. log is the human-readable list of changes intended for CLI output, preserving the same shape every ecosystem produced before structured outputs existed.

AvoidingRelease dataclass

Snapshot of the release that triggered a pin, captured for explainability.

Stored alongside each ManagedPin so future readers can see why the pin exists without re-running a check. None of these fields are consulted on cleanup; they are pure metadata.

CheckReport dataclass

Aggregated outcome of a check run.

ChillOutConfig pydantic-model

Bases: BaseModel

Resolved chill-out configuration.

Built from a single config source by load_config. Fields the source didn't specify are filled from module-level defaults; for cooldown_days specifically, missing release types are filled per-key from DEFAULT_COOLDOWN_DAYS.

Config:

  • frozen: True
  • extra: forbid

Fields:

Validators:

  • _validate_cooldown_days -> cooldown_days
  • _validate_include_groups -> include_groups
  • _validate_fix_style -> fix_style

include_group_set property
include_group_set: frozenset[DependencyGroup]

The configured include_groups as a set, for fast membership checks.

coerce_days classmethod
coerce_days(raw: Any) -> dict[ReleaseType, int]

Map a day-config dict into a typed ReleaseType -> int map.

Keys must match a ReleaseType value (major, minor, patch, default, case-insensitive). Unknown keys raise ConfigError so user typos surface immediately instead of being silently ignored. Non-integer values also raise ConfigError.
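The validation described above can be sketched as follows. This is a minimal illustration, not the shipped implementation; the `ReleaseType` and `ConfigError` stand-ins here are simplified stand-ins for chill-out's own types.

```python
from enum import Enum
from typing import Any


class ReleaseType(Enum):
    """Illustrative stand-in for chill_out's ReleaseType enum."""
    MAJOR = "major"
    MINOR = "minor"
    PATCH = "patch"
    DEFAULT = "default"


class ConfigError(Exception):
    """Illustrative stand-in for chill_out's ConfigError."""


def coerce_days(raw: Any) -> dict[ReleaseType, int]:
    """Map a day-config dict into a typed ReleaseType -> int map."""
    if not isinstance(raw, dict):
        raise ConfigError("cooldown_days must be a table of release-type: days")
    valid = {member.value: member for member in ReleaseType}
    result: dict[ReleaseType, int] = {}
    for key, value in raw.items():
        member = valid.get(str(key).lower())  # keys are case-insensitive
        if member is None:
            # Unknown keys fail loudly so typos surface immediately.
            raise ConfigError(f"unknown release type {key!r}; expected one of {sorted(valid)}")
        if not isinstance(value, int) or isinstance(value, bool):
            raise ConfigError(f"cooldown for {key!r} must be an integer, got {value!r}")
        result[member] = value
    return result
```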

coerce_fix_style classmethod
coerce_fix_style(raw: Any, *, source: str) -> FixStyle | None

Map a raw fix-style value into a typed FixStyle.

Returns None when the value is missing so callers can distinguish "not configured" from "explicitly set". Unknown names raise ConfigError with the list of valid choices.

coerce_groups classmethod
coerce_groups(raw: Any, *, source: str) -> tuple[DependencyGroup, ...] | None

Map a list of group names into a typed tuple of DependencyGroup.

Returns None when the value is missing so callers can distinguish "not configured" from "explicitly empty". An explicit empty list is accepted and means "check nothing"; the runner will produce an empty report in that case. Unknown group names raise ConfigError.

for_release_type
for_release_type(rel_type: ReleaseType) -> int

Return the cooldown threshold (days) for the given release type.

The cooldown_days map is always fully populated thanks to per-key gap-fill in the field validator, so a direct lookup is safe for every ReleaseType member.
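The per-key gap-fill that makes the direct lookup safe amounts to merging the configured map over the defaults. A minimal sketch, using hypothetical default values (the real `DEFAULT_COOLDOWN_DAYS` lives in the package):

```python
# Hypothetical defaults for illustration only.
DEFAULT_COOLDOWN_DAYS = {"major": 14, "minor": 7, "patch": 3, "default": 7}


def fill_cooldown_days(configured: dict[str, int]) -> dict[str, int]:
    """Per-key gap-fill: any release type the user omitted falls back to its default."""
    return {key: configured.get(key, fallback) for key, fallback in DEFAULT_COOLDOWN_DAYS.items()}
```

Because every `ReleaseType` key is present after this step, `for_release_type` never needs a fallback branch.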

ChillOutError

Bases: Buzz

Base exception class for all chill-out errors.

exit_code class-attribute instance-attribute
exit_code: ExitCode = GENERAL_ERROR

Exit code used when the error reaches the CLI handler.

subject class-attribute instance-attribute
subject: str | None = None

Subject shown in the user-facing error message.

ChillOutState dataclass

Aggregate of every pin chill-out is currently managing for one project.

Loaded from .chill-out-state.json at the start of a fix run, replaced wholesale at the end. The dataclass itself is mutable so the runner can append entries as they are applied; the ManagedPin entries inside are frozen.

delete
delete(root: Path) -> None

Remove the state file from disk if it exists.

Used when a fix run produces no managed pins, so we do not leave behind an empty file that suggests we are still tracking something.

empty classmethod
empty() -> ChillOutState

Return a fresh, empty state with last_run_at set to now.

load classmethod
load(root: Path) -> ChillOutState

Read the state file at root / STATE_FILENAME.

Returns an empty state when the file is simply absent (the common, expected case for a first run). Every other failure mode halts: chill-out's bookkeeping is too important to silently discard. The file is chill-out's own output, so any read failure points at a bug, a partial write, a permissions problem, or a version mismatch — none of which should be papered over.

Raises:

Type Description
StateFileUnreadableError

The file exists but cannot be read (permissions, I/O, file vanished between is_file() and read_text()).

StateFileCorruptError

The file is not valid JSON.

StateSchemaVersionError

The file's schema_version is missing or unknown to this chill-out (older binary against newer file, hand-edit, etc.).

StateValidationError

The file parses as JSON and carries a known schema_version, but one or more fields are missing, mistyped, or carry unexpected extra keys.
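The load contract above — absent file means empty state, every other failure halts — can be sketched in miniature. This collapses the four error classes into one `StateError` stand-in and skips schema validation; it illustrates the shape of the logic, not the real implementation:

```python
import json
from pathlib import Path


class StateError(Exception):
    """Illustrative stand-in for chill_out's StateError hierarchy."""


def load_state(root: Path, filename: str = ".chill-out-state.json") -> dict:
    """Absent file -> empty state; any other read or parse failure halts."""
    path = root / filename
    if not path.is_file():
        return {}  # first run: nothing tracked yet
    try:
        text = path.read_text()
    except OSError as exc:
        raise StateError(f"state file unreadable: {exc}") from exc
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise StateError(f"state file corrupt: {exc}") from exc
```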

save
save(root: Path) -> None

Write the current state to root / STATE_FILENAME.

The output is pretty-printed JSON with a trailing newline so it diffs cleanly under version control. Datetimes are rendered as RFC 3339 / ISO 8601 strings via the Pydantic field serializers in state.schema.

CleanupReport dataclass

Outcome of cleanup_managed_pins.

Each list holds the ManagedPin records the runner attempted to remove during cleanup, grouped by the result the ecosystem returned:

  • removed entries were successfully cleaned out of the manifest.
  • drifted entries are still present in the manifest but their value differs from what chill-out wrote, so the ecosystem left them alone and the runner has dropped them from state.
  • orphan entries were no longer in the manifest at all and have also been dropped from state.

ConfigError

Bases: ChillOutError

Indicates a problem reading or parsing chill-out configuration.

CooldownViolation

Bases: ChillOutError

Raised at the end of a check run when one or more cooldown violations are found.

DependencyGroup

Bases: AutoNameEnum, LowerCaseMixin

Semantic dependency group names used uniformly across ecosystems.

Ecosystem

Bases: Protocol

Pluggable backend for one package ecosystem (npm, pypi, ...).

The Protocol is structural, so a backend only has to expose the right methods to satisfy it; chill-out's own backends inherit explicitly so type checkers flag any drift at the class definition rather than at a call site. Each backend owns a root directory (the project being audited) and advertises its kind, then implements every method below. Project detection lives on a separate EcosystemDetector so the registry can ask "which ecosystem applies?" without having to construct an instance first.

Backends also speak directly to their registry: fetch_package and fetch_version_manifest are async methods that take an httpx.AsyncClient per call so the caller (typically a RegistryClient) owns the session and the cache.

apply_fixes
apply_fixes(actions: list[FixAction]) -> AppliedFixes

Apply the given fix actions to the project.

Returns an AppliedFixes carrying one AppliedFix per entry actually written to the project's manifests, plus a list of human-readable log lines describing the changes for the CLI to surface. The per-entry records capture the literal pinned_spec written to disk so the next run can detect whether the user has since edited it (drift) and clean stale entries up before planning fresh fixes.

apply_override_fixes
apply_override_fixes(actions: list[FixAction]) -> AppliedFixes | None

Apply fixes via the ecosystem's override mechanism.

Used as a fallback when a normal direct pin doesn't dislodge a violating version (typically because it stays hoisted at a parent level the direct pin can't reach). The exact mechanism varies by ecosystem (see supports_overrides for a survey); the contract here is the same regardless of which one a backend reaches for.

Returns an AppliedFixes carrying one entry per override actually written plus human-readable log lines, or None when the ecosystem doesn't support an override mechanism.

fetch_package async
fetch_package(name: str, http: AsyncClient) -> PackageInfo | None

Return all release info for name, or None if it cannot be retrieved.

Caching and in-flight dedupe live one layer up in RegistryClient; backends do neither and just translate one HTTP call into a PackageInfo.

fetch_version_manifest async
fetch_version_manifest(name: str, version: str, http: AsyncClient) -> VersionManifest | None

Return the dependency declarations for a single (name, version) pair.

Used by principal-rollback to discover which transitive ranges a candidate principal version declares. Returns None if the manifest cannot be retrieved.

load_installed
load_installed() -> list[InstalledPackage]

Enumerate every package in the project's lockfile.

Returns the full resolved dependency set, principals and transitives alike. Each InstalledPackage carries enough context (via_chain, groups, member_owners) for downstream filters and fix planning to tell them apart.

parse_version
parse_version(version: str) -> ParsedVersion | None

Parse a version string the way this ecosystem parses one.

Returns None for inputs that don't fit the ecosystem's version grammar; the cooldown engine treats that as "skip this candidate" rather than raising, so a single weird release never blocks the rest of the search. The returned ParsedVersion carries everything the engine needs (release segments, pre-release flag, and an opaque sort key) without the engine having to know which flavor of version it's looking at.

range_satisfies
range_satisfies(version: str, range_spec: str) -> bool

Return True if version satisfies the ecosystem-specific range_spec.

Used by principal-rollback to test whether a candidate principal's declared range admits the safe transitive version.

regenerate_lockfile
regenerate_lockfile() -> str

Recompute the project's lockfile from its current manifests.

Used by the fix workflow after a cleanup pass that removed stale managed pins but did not apply any fresh fixes (the apply step regenerates the lockfile on its own when it runs). Returns a short human-readable line describing the action taken so the CLI can surface it in its log output.

Implementations should raise EcosystemError if regeneration fails.

remove_managed_pin
remove_managed_pin(pin: ManagedPin) -> RemovalOutcome

Try to undo a previously-applied managed pin from this project's manifests.

Used by the fix workflow to clean stale pins before computing a new round of fixes, so cooldowns that have elapsed in the meantime do not leave their pins behind.

Implementations look up pin.package at the site recorded in pin.manifest_path (interpreted relative to self.root) using the appropriate mechanism for pin.mechanism, and:

  • Return RemovalOutcome.REMOVED if the entry is still present and matches the recorded pin.pinned_spec. The entry is deleted in place.
  • Return RemovalOutcome.DRIFTED if the entry is present but its value differs from the recorded value. Implementations leave the entry untouched; the caller is expected to drop the pin from state and warn the user.
  • Return RemovalOutcome.ORPHAN if the entry is no longer present at all. Implementations leave the manifest alone; the caller drops the pin silently.

Implementations must not run lockfile regeneration; the runner orchestrates that step once after the full batch of removals.

supports_overrides
supports_overrides() -> bool

Return True if this ecosystem implements an override mechanism.

Most package managers expose some flavor of "force one resolution everywhere regardless of who declared it" knob (npm overrides, yarn resolutions, pnpm pnpm.overrides, uv override-dependencies, cargo [patch], go replace, maven dependencyManagement, gradle resolutionStrategy.force). A handful of others, notably bundler and composer, don't, so backends for those ecosystems return False here and the runner falls back to plain direct pins.

workspace_topology
workspace_topology() -> WorkspaceTopology | None

Detect a multi-member workspace and return its layout.

Returns None for standalone (single-root) projects or when no workspace declaration is present.

EcosystemDetector

Bases: Protocol

Probe that reports whether an ecosystem applies to a given project root.

Detectors are stateless: instantiated once and reused. The registry walks its detectors in order and asks each one whether the project at root looks like its ecosystem (npm sees a package.json, pypi sees a pyproject.toml, and so on). The matching detector's paired ecosystem class is then constructed for that root.

Keeping detection on its own object decouples "should we use this ecosystem?" from "how do we drive it?", which keeps the Ecosystem protocol focused purely on instance-level work.

The Protocol is structural, so a detector only has to expose detect to satisfy it; chill-out's own detectors inherit explicitly so type checkers flag any drift at the class definition rather than at a call site.

detect
detect(root: Path) -> bool

Return True if this ecosystem applies to the given project root.

EcosystemError

Bases: ChillOutError

Indicates a problem detecting or operating on a project ecosystem.

EcosystemKind

Bases: AutoNameEnum, LowerCaseMixin

Supported package ecosystems.

ExitCode

Bases: IntEnum

Exit codes returned by the CLI.

FixAction dataclass

A single change to apply when running chill-out fix.

Both direct and transitive violations land in the same shape: a pin of package to version written to the project's primary manifest (project.dependencies for pypi, dependencies for npm). Transitive pins ride along as direct entries; the ecosystem resolver hoists them.

style controls how the new constraint is rendered into the manifest. See chill_out.constants.FixStyle for the available choices. Override-style actions (via_overrides=True) are always written as exact pins regardless of style, since the whole point of an override is to dodge a specific just-released version.

When via_overrides is True the pin should be applied via the ecosystem's "force every transitive copy" mechanism instead of a direct dependency entry. The runner sets this for shared transitive violations in workspace contexts where a member-level direct pin cannot dislodge a sibling-shared copy.

FixPlan dataclass

The result of planning fixes for a check report.

FixStyle

Bases: AutoNameEnum, LowerCaseMixin

How chill-out fix writes the new version constraint into the project manifest.

Override-style fixes (via_overrides=True) always pin exactly regardless of this setting; the entire reason an override exists is to pin away from a specific version that just landed.

InstalledPackage dataclass

A single installed dependency that should be checked against the cooldown rules.

groups class-attribute instance-attribute
groups: tuple[DependencyGroup, ...] = ()

Semantic groups this installation belongs to.

For principals (via_chain empty), this is the set of declaration sections the package appears in (a package can be listed in more than one section, e.g. both dependencies and peerDependencies). For transitives, this is the union of the groups of every top-level dependency that pulls the install into the tree, matching the "included if reachable through any included group" semantic the runner uses to decide which packages to check.

Empty tuple means the ecosystem backend didn't attribute the install to any group (treated as "unknown" -- always included).

is_shared property
is_shared: bool

True when more than one workspace member pulls this installation in.

member_owners class-attribute instance-attribute
member_owners: tuple[str, ...] = ()

Names of workspace members whose dependency subtree includes this installation.

Empty tuple in single-project (non-workspace) mode. In a workspace, this lists every member that pulls the package in (directly or transitively). More than one entry means the version is shared across siblings -- a direct pin in any single member's manifest may not dislodge it because the others still need it.

via property
via: str | None

The principal dependency at the top of the chain, if this is a transitive dep.

via_chain class-attribute instance-attribute
via_chain: tuple[str, ...] = ()

Reverse path from this package up to the principal dependency that pulled it in.

Empty tuple means the package is a principal (declared directly in pyproject/package.json). The first element is the immediate parent, the last is the principal.

ManagedPin dataclass

A single pin or override that chill-out wrote into the project.

manifest_path is recorded relative to the project root so the state file stays portable across checkouts. pinned_spec is the literal string chill-out wrote into the manifest at the entry's value position (e.g. "lodash==4.17.20" or "^4.17.20"). On cleanup the value currently at the site is compared against this one to detect drift.

NpmDetector

Bases: EcosystemDetector

Detector for npm projects: a package.json at the project root marks one.

NpmEcosystem

Bases: Ecosystem

Ecosystem backend for npm projects.

apply_fixes
apply_fixes(actions: list[FixAction]) -> AppliedFixes

Apply pins. Routes via_overrides actions through apply_override_fixes.

Splits the incoming actions into two groups based on the via_overrides flag, which the planner sets for shared transitive violations in workspace contexts. Direct pins land in self.root's package.json dependencies; override pins go through the workspace-root override path. Both groups trigger their own npm install, in this order: write direct pins first, then npm install from the member, then write overrides at the workspace root, then npm install from there.

When the override path returns None (the planner tagged via_overrides but no workspace root could be located), the action falls back to a direct pin so it isn't silently lost. That fallback is genuinely defensive: with the current planner and the two shipping ecosystems, the override path always resolves when the planner asked for it.

apply_override_fixes
apply_override_fixes(actions: list[FixAction]) -> AppliedFixes | None

Force transitive versions via npm's overrides field.

Direct pins in dependencies only affect what the project's own code resolves to. When a violating version is hoisted at the workspace-root node_modules (where a different consumer in the tree pulled it in), a direct pin in a workspace-member's package.json can leave that root copy untouched. overrides is npm's blessed mechanism for forcing one resolution everywhere regardless of who declared it.

Overrides must live in the workspace root's package.json to apply tree-wide, so this writes to the directory that owns the lockfile rather than self.root (which may be a workspace member). When that root manifest doesn't exist, return None so the caller can fall back to direct pinning.

fetch_package async
fetch_package(name: str, http: AsyncClient) -> PackageInfo | None

Fetch all release timestamps for a package from the npm registry.

Returns None if the package is missing (404). Raises RegistryError on transport failures, non-2xx responses other than 404, non-JSON bodies, or any drift in the response shape that fails Pydantic validation.

fetch_version_manifest async
fetch_version_manifest(name: str, version: str, http: AsyncClient) -> VersionManifest | None

Fetch dependency declarations for {name}@{version} from the npm registry.

Returns None for 404 responses. Merges dependencies and peerDependencies into a single deps map; npm treats peer deps as runtime constraints just like regular deps for resolution purposes, so the cooldown engine should see both when checking whether a safe transitive can be hoisted.

load_installed
load_installed() -> list[InstalledPackage]

Load the full dependency tree from npm list.

The returned list contains every package npm reports as installed, principals (top-level installs) and transitives alike. Each InstalledPackage is keyed by (name, version) because npm routinely installs multiple copies of the same package at different versions in different branches of node_modules; each copy actually loads at runtime for whichever code requires it, so we report them independently.

The work happens in three phases that mirror the pypi backend's shape:

  1. Run npm list to materialize the dependency tree, optionally also at the workspace root for cross-member ownership attribution.
  2. Read the project's own package.json to learn which top-level names belong to which semantic group.
  3. Walk the tree once per top-level entry to attribute every reachable (name, version) to its principal's groups, then a second walk to assemble the actual InstalledPackage records with their via_chain and ownership metadata.

parse_version
parse_version(version: str) -> ParsedVersion | None

Parse a version string with strict semver semantics.

npm publishes its registry data in semver form (MAJOR.MINOR.PATCH with optional -prerelease and +build), so anything that doesn't fit that grammar gets None. The cooldown engine treats None as "skip this candidate" rather than raising, so a non-semver oddity like a date-tagged version doesn't block the rest of the search.

The returned ParsedVersion carries the original string verbatim so safe versions round-trip back through fix actions in the exact form the registry published.

The sort key wraps the parsed semver.Version itself in a single-element tuple. semver.Version already compares the way npm expects (pre-releases sort before their final version, build metadata is ignored), so we don't need anything custom here.

range_satisfies
range_satisfies(version: str, range_spec: str) -> bool

Check whether version satisfies an npm semver range_spec.

Shells out to node -e "require('semver').satisfies(...)". If node or the semver package isn't available, conservatively returns True (the original script's "assume compatible" fallback for transitive deps with no discoverable range).
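A sketch of that shell-out with the "assume compatible" fallback. The function name and exact node one-liner are illustrative; `require('semver').satisfies` is the real semver package API, but error handling here is simplified:

```python
import json
import subprocess


def npm_range_satisfies(version: str, range_spec: str, node: str = "node") -> bool:
    """Ask node's semver package; fall back to True when the check can't run."""
    script = (
        "const s = require('semver');"
        f"process.stdout.write(String(s.satisfies({json.dumps(version)}, {json.dumps(range_spec)})));"
    )
    try:
        proc = subprocess.run([node, "-e", script], capture_output=True, text=True, timeout=30)
    except (OSError, subprocess.TimeoutExpired):
        return True  # node missing or hung: assume compatible
    if proc.returncode != 0:
        return True  # e.g. semver package not installed: assume compatible
    return proc.stdout.strip() == "true"
```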

regenerate_lockfile
regenerate_lockfile() -> str

Recompute package-lock.json by running npm install from the project root.

remove_managed_pin
remove_managed_pin(pin: ManagedPin) -> RemovalOutcome

Reverse a previously-applied managed pin from the project's package.json.

For PinMechanism.DIRECT this removes the entry from dependencies (and the parallel devDependencies, optionalDependencies, peerDependencies blocks if the pin landed there). For PinMechanism.OVERRIDE this removes the entry from overrides at the recorded manifest path.

See Ecosystem.remove_managed_pin for outcome semantics.

workspace_topology
workspace_topology() -> WorkspaceTopology | None

Detect an npm workspace by reading the lockfile-rooted package.json.

Walks up to find the workspace root (the directory that owns the lockfile, which may be self.root itself or an ancestor for a member project). If the root's package.json declares a workspaces field, expand the globs against the root directory and read each member's name from its own package.json.

Returns None when there's no lockfile, no workspaces field, or none of the globs resolve to a directory with a readable package.json.

The work is split into _locate_workspace_root, which finds the directory that owns the lockfile and reads its package.json, and _discover_workspace_members, which expands the glob patterns and assembles the name -> directory map. Both helpers are static so they can be exercised independently of a live NpmEcosystem instance.

PackageInfo dataclass

All releases known for a package, keyed by version string.

published_at
published_at(version: str) -> pendulum.DateTime | None

Return the publish timestamp for the given version, if known.

PackageRelease dataclass

A single released version of a package, with its publish timestamp.

yanked reflects the registry's withdrawal signal: a yanked PyPI release (every artifact marked yanked) or an npm version that's been unpublished (present in the time map but missing from versions). Yanked releases still appear in the registry response so historical resolves keep working, but chill-out treats them as unsafe upgrade targets.

PinMechanism

Bases: AutoNameEnum, LowerCaseMixin

How a managed pin is realized in the project's manifests.

PypiDetector

Bases: EcosystemDetector

Detector for pypi projects: a pyproject.toml at the project root marks one.

PypiEcosystem

Bases: Ecosystem

Ecosystem backend for Python projects using uv + pyproject.toml.

apply_fixes
apply_fixes(actions: list[FixAction]) -> AppliedFixes

Apply pins. Routes via_overrides actions through apply_override_fixes.

Direct pins are written into self.root's pyproject.toml and validated with uv lock. Override pins go through the workspace root's [tool.uv].override-dependencies field and trigger a workspace-wide uv lock to recompute the resolution.

apply_override_fixes
apply_override_fixes(actions: list[FixAction]) -> AppliedFixes | None

Force transitive versions via uv's override-dependencies mechanism.

Writes one entry per action to [tool.uv].override-dependencies in the workspace root's pyproject.toml (or self.root when there's no workspace), then runs uv lock from that directory to recompute the workspace-wide resolution. Returns an AppliedFixes on success, or None when no usable workspace root could be located.

fetch_package async
fetch_package(name: str, http: AsyncClient) -> PackageInfo | None

Fetch all releases and their upload timestamps for a package from PyPI.

The PyPI JSON API returns one entry per uploaded artifact; we take the earliest upload time for each version as its publish date. Releases with no surviving uploads (empty artifact list) are dropped from the result. Schema validation guarantees every artifact carries at least one usable timestamp, so any release with at least one artifact will produce a PackageRelease.
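The earliest-upload rule can be sketched directly. `upload_time_iso_8601` is the field PyPI's JSON API uses for artifact timestamps; the function name and simplified input shape are illustrative:

```python
from datetime import datetime


def publish_dates(releases: dict[str, list[dict]]) -> dict[str, datetime]:
    """Earliest artifact upload per version; versions with no surviving uploads are dropped."""
    result: dict[str, datetime] = {}
    for version, artifacts in releases.items():
        times = [
            datetime.fromisoformat(a["upload_time_iso_8601"].replace("Z", "+00:00"))
            for a in artifacts
        ]
        if times:  # empty artifact list -> release dropped entirely
            result[version] = min(times)
    return result
```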

fetch_version_manifest async
fetch_version_manifest(name: str, version: str, http: AsyncClient) -> VersionManifest | None

Fetch the dependency declarations for a single PyPI release.

Pulls info.requires_dist from the per-version JSON endpoint. Markers that gate a requirement on an extra are skipped: those represent optional installs and don't constrain the base resolution.

load_installed
load_installed() -> list[InstalledPackage]

Enumerate every package in uv.lock, principals and transitives alike.

The lockfile is the source of truth for what will actually be installed. Each entry becomes an InstalledPackage with a via_chain computed by reverse-graph BFS from the direct deps declared in pyproject.toml. Direct deps get an empty via_chain (they are principals); transitives get the shortest chain of intermediates back to a principal.

Group attribution follows the same union semantic as npm: forward-walk from each principal and tag every reachable package with that principal's groups. Transitives reached through multiple principals accumulate the union, matching the runner's "included if reachable through any included group" rule.

Requires uv.lock to exist; raises EcosystemError if it's missing (run uv lock to generate one).

parse_version
parse_version(version: str) -> ParsedVersion | None

Parse a version string with PEP 440 semantics.

PEP 440 is a superset of semver for parsing purposes: anything packaging.Version accepts (including 2-segment releases like 3.12, post-releases like 1.0.post1, epochs like 1!2.0, and dev releases) becomes a usable ParsedVersion. Inputs outside that grammar return None; the cooldown engine treats None as "skip this candidate" rather than raising.

Short releases get zero-padded for the major / minor / micro view: 3.12 reports major=3, minor=12, micro=0 so the engine classifies it as a minor release. Versions with more than three release segments truncate to the first three for classification but keep the full release tuple in the sort key, so 1.2.3.4 still sorts after 1.2.3 the way packaging compares it.
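The pad-or-truncate step for classification is a one-liner. A sketch under an assumed name (`classify_segments` is illustrative; the real logic sits inside the backend's parser):

```python
def classify_segments(release: tuple[int, ...]) -> tuple[int, int, int]:
    """Pad short releases with zeros and truncate long ones to a (major, minor, micro) view."""
    padded = (release + (0, 0, 0))[:3]
    return padded[0], padded[1], padded[2]
```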

The original string is preserved verbatim so safe versions round-trip back through fix actions in the exact form the registry published, even when packaging would canonicalize it differently (e.g. 2.0.0-rc1 -> 2.0.0rc1).

The sort_key wraps packaging.Version in a single-element tuple. Version already implements PEP 440 ordering directly (epochs first, then release, then pre-release, then post-release, then dev-release); the tuple wrapper exists so the ParsedVersion.sort_key contract stays uniform across ecosystems.

range_satisfies
range_satisfies(version: str, range_spec: str) -> bool

Return True if version satisfies a PEP 440 range_spec.

An empty or whitespace-only range matches any version (matches packaging's SpecifierSet("") semantics). Unparsable inputs are treated permissively to match the original script's "assume compatible" behavior for transitive deps with no discoverable range.

regenerate_lockfile
regenerate_lockfile() -> str

Recompute uv.lock by running uv lock from the project root.

remove_managed_pin
remove_managed_pin(pin: ManagedPin) -> RemovalOutcome

Reverse a previously-applied managed pin from the project's pyproject.toml.

For PinMechanism.DIRECT this removes the entry from [project.dependencies], [project.optional-dependencies], or [dependency-groups.*] (whichever holds it). For PinMechanism.OVERRIDE this removes the entry from [tool.uv.override-dependencies] at the recorded manifest path.

See Ecosystem.remove_managed_pin for outcome semantics.

workspace_topology
workspace_topology() -> WorkspaceTopology | None

Detect a uv workspace by walking up to find a pyproject.toml with [tool.uv.workspace].

Starts at self.root and walks toward the filesystem root until it finds a pyproject.toml declaring [tool.uv.workspace]. Reads members (glob patterns) and exclude, expands the globs, and returns a WorkspaceTopology keyed by each member's project.name.

Returns None when no workspace declaration is reachable from self.root.

RegistryClient

Cached, dedupe-aware wrapper around an Ecosystem's registry methods.

Result lookups (including None results for missing packages) are memoized for the lifetime of the instance. Concurrent calls for the same key share a single in-flight task so the underlying ecosystem only runs once per key no matter how many callers ask for it.
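The memoize-plus-in-flight-dedupe pattern can be sketched with a task-per-key map: the first caller creates the task, concurrent callers await the same task, and the finished task doubles as the memoized result. Class and method names here are hypothetical, not RegistryClient's API:

```python
import asyncio
from typing import Awaitable, Callable


class DedupedCache:
    """Sketch: one underlying fetch per key, no matter how many concurrent callers."""

    def __init__(self, fetch: Callable[[str], Awaitable[object]]) -> None:
        self._fetch = fetch
        self._tasks: dict[str, asyncio.Task] = {}

    async def get(self, key: str) -> object:
        # First caller for a key creates the task; later callers (concurrent or
        # not) await the same task, so the fetch runs exactly once per key.
        if key not in self._tasks:
            self._tasks[key] = asyncio.ensure_future(self._fetch(key))
        return await self._tasks[key]
```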

__init__
__init__(ecosystem: Ecosystem, http: AsyncClient) -> None

Bind an ecosystem to an HTTP session and start with an empty cache.

Parameters:

Name Type Description Default
ecosystem Ecosystem

The ecosystem backend whose fetch_package and fetch_version_manifest methods will be called on cache miss.

required
http AsyncClient

HTTP session passed through to the ecosystem on every miss. Owned by the caller; the client does not close it.

required
fetch_package async
fetch_package(name: str) -> PackageInfo | None

Return release info for name, going to the registry only on a cache miss.

fetch_version_manifest async
fetch_version_manifest(name: str, version: str) -> VersionManifest | None

Return the dependency declarations for (name, version), cached after the first call.

RegistryError

Bases: ChillOutError

Indicates a problem talking to a package registry.

ReleaseType

Bases: AutoNameEnum, LowerCaseMixin

Classification of a single release used to look up its cooldown threshold.

RemovalOutcome

Bases: AutoNameEnum, LowerCaseMixin

Result of attempting to remove a single managed pin from a manifest.

SafeVersion dataclass

A version older than the installed one that has cleared its cooldown window.

SkipReason dataclass

A package that the check could not evaluate, paired with the reason it was skipped.

Skips happen when the registry has no record of the package, when the registry call itself fails, or when the installed version has no recorded publish date. The reason is a human-readable explanation suitable for surfacing in CLI output.

StateError

Bases: ChillOutError

Base class for problems with chill-out's state file.

StateFileCorruptError

Bases: StateError

Raised when the state file is not valid JSON.

StateFileUnreadableError

Bases: StateError

Raised when the state file exists but cannot be read (permissions, I/O error, etc.).

StateSchemaVersionError

Bases: StateError

Raised when the state file's schema_version is not understood by this chill-out.

StateValidationError

Bases: StateError

Raised when the state file parses as JSON but doesn't conform to the wire schema.

UnfixableViolation dataclass

A violation that chill-out fix could not auto-resolve.

Surfaces the structured reason so the CLI can print actionable guidance instead of silently dropping the violation.

VersionManifest dataclass

The dependency declarations for a single (name, version) pair.

deps maps each declared dependency name to its raw range spec, in the native format of the ecosystem (e.g. "^2.0.0" for npm or ">=2.5,<3.0" for PyPI).

Violation dataclass

A package whose installed version has not cleared its cooldown window.

is_shared property
is_shared: bool

True when the underlying installation is shared across workspace members.

member_owners property
member_owners: tuple[str, ...]

Workspace members that pull this installation in (empty for non-workspace projects).

build_managed_pins

build_managed_pins(applied: AppliedFixes, violations: Iterable[Violation], config: ChillOutConfig, *, now: DateTime | None = None) -> list[ManagedPin]

Build the ManagedPin records that should be saved into state for one fix run.

Pairs every AppliedFix from applied.entries with the Violation that motivated it (matched by package name) so the resulting AvoidingRelease snapshot captures why the pin exists. Pins for which no matching violation can be found are skipped, since chill-out has no avoiding-metadata to attach.

config is consulted to derive the cooldown window for the violation's release type so the snapshot reflects the policy in force at the time the pin was written.

check

check(root: Path, *, ecosystem_kind: EcosystemKind | None = None, config: ChillOutConfig | None = None, fast: bool = False, concurrency: int = DEFAULT_CONCURRENCY) -> CheckReport

Synchronous convenience wrapper around check_async.

Auto-detects the ecosystem from root unless ecosystem_kind is given.

check_async async

check_async(ecosystem: Ecosystem, *, config: ChillOutConfig | None = None, fast: bool = False, concurrency: int = DEFAULT_CONCURRENCY, http: AsyncClient | None = None, now: DateTime | None = None, on_start: Callable[[list[InstalledPackage]], None] | None = None, on_progress: Callable[[InstalledPackage], None] | None = None) -> CheckReport

Run the full cooldown check for the given ecosystem.

Every package recorded in the project's lockfile is audited, principals and transitives alike. The lockfile is the source of truth for what the ecosystem will actually install; anything declared in the project's primary manifest but not yet locked is out of scope by design.

Parameters:

Name Type Description Default
ecosystem Ecosystem

The detected or selected ecosystem backend.

required
config ChillOutConfig | None

Cooldown configuration. If omitted, it is loaded from the ecosystem's project root.

None
fast bool

If True, skip the safe-version lookup for faster runs.

False
concurrency int

Maximum simultaneous registry requests.

DEFAULT_CONCURRENCY
http AsyncClient | None

Optional pre-configured HTTP client (mostly useful for testing).

None
now DateTime | None

Override the "now" timestamp used when comparing ages (testing).

None
on_start Callable[[list[InstalledPackage]], None] | None

Optional callback fired once with the full list of packages about to be checked. Use it to size a progress bar.

None
on_progress Callable[[InstalledPackage], None] | None

Optional callback fired once per package after it has been evaluated. Use it to advance a progress bar.

None

cleanup_managed_pins

cleanup_managed_pins(eco: Ecosystem, state: ChillOutState) -> CleanupReport

Walk every pin in state.managed_pins and try to remove it from the project's manifests.

Mutates state.managed_pins in place: every entry is dropped regardless of outcome (REMOVED, DRIFTED, and ORPHAN all leave nothing for chill-out to track going forward). The returned CleanupReport lets the caller surface drift warnings to the user without re-walking state.

The ecosystem is responsible for the per-pin manifest edit; this function does not regenerate any lockfile. The caller is expected to trigger lockfile regeneration once after cleanup so the project is in a consistent state before fresh fixes are applied.

detect_ecosystem

detect_ecosystem(root: Path) -> Ecosystem

Auto-detect which ecosystem backend applies to the given project root.

Raises:

Type Description
EcosystemError

If no backend matches, or if multiple backends match (in which case the user should pass the ecosystem explicitly).

get_ecosystem

get_ecosystem(kind: EcosystemKind, root: Path) -> Ecosystem

Instantiate a backend by kind for the given project root.

load_config

load_config(root: Path, ecosystem: EcosystemKind) -> ChillOutConfig

Resolve the effective chill-out configuration for the given project root and ecosystem.

Picks a single primary config source, in priority order:

  • a dedicated .chill-out.* file at the project root,
  • a [tool.chill-out] table in pyproject.toml, or
  • a chill-out block in package.json.

If no primary source is found, checks for cooldown config in dependabot.yml.

If no config source is found at all, defaults are used.

plan_fixes

plan_fixes(report: CheckReport, *, fix_style: FixStyle = FixStyle.EXACT) -> FixPlan

Build a basic fix plan from a report, without principal range checking.

Each violation with a known safe version becomes a single FixAction that pins the package directly in the project's primary manifest. Transitive violations get pinned as direct deps too, so the resolver hoists them and they win over the principal's declared range. Violations with no known safe version land in FixPlan.unfixable so the caller can report them.

The fix_style parameter controls how each pin is rendered into the manifest. See chill_out.constants.FixStyle.

For the smarter version that range-checks transitive pins against the installed principal and rolls the principal back when the declared range can't admit the safe transitive, use plan_fixes_async.

plan_fixes_async async

plan_fixes_async(report: CheckReport, ecosystem: Ecosystem, *, config: ChillOutConfig | None = None, http: AsyncClient | None = None, now: DateTime | None = None) -> FixPlan

Build a fix plan with conflict-aware principal rollback.

Every violation gets pinned as a direct dependency in the project's primary manifest. For transitive violations the runner walks every ancestor in the chain (immediate parent up through the principal) and checks whether any of them declares a range for the violating package that excludes the safe version. The flow is:

  1. Direct violation: pin the safe version. Done.
  2. Transitive violation, no ancestor range conflicts: pin the safe version directly. The resolver hoists the direct pin and every ancestor stays where it is.
  3. Transitive violation, an ancestor range conflicts: search for an older principal version (out of cooldown, non-prerelease) whose declared range does admit the safe transitive. If found, emit pins for both the principal rollback and the transitive. If no compatible older principal exists, record the violation in unfixable with a structured reason so the caller can show the user their options (downgrade the principal manually, raise the safe target, or wait out the cooldown).

The principal is the only level that always gets rolled back because it's the only ancestor declared in the project's own manifest. Rolling it back changes which intermediate versions the resolver picks, which can clear conflicts deeper in the chain.

Configuration

chill_out.config

Layered configuration loading for chill-out.

Sources are consulted in priority order, highest first:

  1. A dedicated config file at the project root, in any of .chill-out.yaml, .chill-out.yml, .chill-out.toml, or .chill-out.json. Only one such file is permitted; having more than one is a configuration error.
  2. The project's primary manifest: [tool.chill-out] in pyproject.toml (Python projects), or the top-level "chill-out" key in package.json (npm projects).
  3. The Dependabot cooldown: block for the matching ecosystem in .github/dependabot.yml (cooldown thresholds only; Dependabot has no concept of dependency-group filtering).
  4. Hard-coded defaults from chill_out.constants.

Each source can supply a partial mapping; missing keys cascade down through the remaining sources.

ChillOutConfig pydantic-model

Bases: BaseModel

Resolved chill-out configuration.

Built from a single config source by load_config. Fields the source didn't specify are filled from module-level defaults; for cooldown_days specifically, missing release types are filled per-key from DEFAULT_COOLDOWN_DAYS.

Config:

  • frozen: True
  • extra: forbid

Fields:

Validators:

  • _validate_cooldown_days → cooldown_days
  • _validate_include_groups → include_groups
  • _validate_fix_style → fix_style

include_group_set property
include_group_set: frozenset[DependencyGroup]

The configured include_groups as a set, for fast membership checks.

coerce_days classmethod
coerce_days(raw: Any) -> dict[ReleaseType, int]

Map a day-config dict into a typed ReleaseType -> int map.

Keys must match a ReleaseType value (major, minor, patch, default, case-insensitive). Unknown keys raise ConfigError so user typos surface immediately instead of being silently ignored. Non-integer values also raise ConfigError.
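The validation rules can be sketched like this. The sketch raises ValueError where the real code raises ConfigError, and the local ReleaseType enum only mirrors the documented lowercase values:

```python
from enum import Enum


class ReleaseType(str, Enum):  # mirrors the documented enum values
    MAJOR = "major"
    MINOR = "minor"
    PATCH = "patch"
    DEFAULT = "default"


def coerce_days(raw: dict) -> dict[ReleaseType, int]:
    """Unknown keys and non-integer values fail loudly instead of silently."""
    out: dict[ReleaseType, int] = {}
    for key, value in raw.items():
        try:
            rel = ReleaseType(str(key).lower())  # case-insensitive key match
        except ValueError:
            raise ValueError(
                f"unknown cooldown key {key!r}; expected one of "
                f"{[m.value for m in ReleaseType]}"
            )
        if not isinstance(value, int) or isinstance(value, bool):
            raise ValueError(f"cooldown for {key!r} must be an integer, got {value!r}")
        out[rel] = value
    return out


print(coerce_days({"Major": 30, "patch": 7}))
```

Surfacing a typo like "majr" as an error, rather than ignoring it, is what keeps a misconfigured threshold from silently falling back to the default.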

coerce_fix_style classmethod
coerce_fix_style(raw: Any, *, source: str) -> FixStyle | None

Map a raw fix-style value into a typed FixStyle.

Returns None when the value is missing so callers can distinguish "not configured" from "explicitly set". Unknown names raise ConfigError with the list of valid choices.

coerce_groups classmethod
coerce_groups(raw: Any, *, source: str) -> tuple[DependencyGroup, ...] | None

Map a list of group names into a typed tuple of DependencyGroup.

Returns None when the value is missing so callers can distinguish "not configured" from "explicitly empty". An explicit empty list is accepted and means "check nothing"; the runner will produce an empty report in that case. Unknown group names raise ConfigError.

for_release_type
for_release_type(rel_type: ReleaseType) -> int

Return the cooldown threshold (days) for the given release type.

The cooldown_days map is always fully populated thanks to per-key gap-fill in the field validator, so a direct lookup is safe for every ReleaseType member.

load_chill_out_file

load_chill_out_file(root: Path) -> ChillOutConfig | None

Load configuration from a dedicated .chill-out.* file at the project root.

Searches for .chill-out.yaml, .chill-out.yml, .chill-out.toml, and .chill-out.json. Returns None when no such file exists. If more than one is present, raises ConfigError so the user resolves the ambiguity instead of silently picking one.

load_config

load_config(root: Path, ecosystem: EcosystemKind) -> ChillOutConfig

Resolve the effective chill-out configuration for the given project root and ecosystem.

Picks a single primary config source, in priority order:

  • a dedicated .chill-out.* file at the project root,
  • a [tool.chill-out] table in pyproject.toml, or
  • a chill-out block in package.json.

If no primary source is found, checks for cooldown config in dependabot.yml.

If no config source is found at all, defaults are used.

load_dependabot_cooldown

load_dependabot_cooldown(root: Path, ecosystem: EcosystemKind) -> ChillOutConfig | None

Load cooldown thresholds from .github/dependabot.yml for the matching ecosystem.

Returns None when no dependabot file exists, or when no update entry matches the given ecosystem. Dependabot has no concept of dependency-group filtering or fix style, so when a match is found this loader only ever supplies cooldown thresholds; the rest of ChillOutConfig falls back to its built-in defaults.

Dependabot spells its cooldown keys semver-major-days, semver-minor-days, semver-patch-days, and default-days. They are translated here into the chill-out-native key names before handing off to ChillOutConfig.coerce_days, so the rest of the config layer never has to know dependabot's spelling.
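The key translation is a straight rename; a minimal sketch (translate_cooldown and DEPENDABOT_KEY_MAP are illustrative names, not the real identifiers):

```python
# Dependabot cooldown keys -> chill-out release-type names, per the doc above.
DEPENDABOT_KEY_MAP = {
    "semver-major-days": "major",
    "semver-minor-days": "minor",
    "semver-patch-days": "patch",
    "default-days": "default",
}


def translate_cooldown(block: dict) -> dict[str, int]:
    """Rename dependabot's spelling; unrelated keys are simply dropped."""
    return {
        DEPENDABOT_KEY_MAP[key]: value
        for key, value in block.items()
        if key in DEPENDABOT_KEY_MAP
    }


print(translate_cooldown({"semver-major-days": 30, "default-days": 7}))
# {'major': 30, 'default': 7}
```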

load_package_json_block

load_package_json_block(root: Path) -> ChillOutConfig | None

Load configuration from a top-level "chill-out" key in package.json.

Returns None when package.json is missing or has no "chill-out" key. The key must contain an object with the same "cooldown", "include_groups", and "fix_style" sub-keys used by the pyproject and yaml sources, so the config shape stays identical across every source chill-out reads from:

{
  "chill-out": {
    "cooldown": {"major": 30, "minor": 14, "patch": 7, "default": 7},
    "include_groups": ["main", "dev"],
    "fix_style": "compatible"
  }
}

load_pyproject_table

load_pyproject_table(root: Path) -> ChillOutConfig | None

Load configuration from [tool.chill-out] in pyproject.toml.

Returns None when pyproject.toml is missing or has no [tool.chill-out] table.

Cooldown logic

chill_out.cooldown

Pure cooldown calculation utilities — no I/O.

These functions operate on already-fetched package data so they're trivial to test in isolation. Version parsing is handed in as a callable so the engine stays ecosystem-agnostic (npm uses semver, pypi uses PEP 440 via packaging.Version, future ecosystems can plug in whatever their registry publishes).

find_safe_principal_version

find_safe_principal_version(principal_current: str, principal_info: PackageInfo, principal_manifests: dict[str, VersionManifest], transitive_name: str, transitive_safe: SafeVersion, range_satisfies: Callable[[str, str], bool], config: ChillOutConfig, parser: VersionParser, now: DateTime | None = None) -> SafeVersion | None

Find the newest principal version older than principal_current that:

  1. Has cleared its own cooldown window.
  2. Is not yanked.
  3. Declares a range for transitive_name that is satisfied by transitive_safe.version (so the resolver picks the safe transitive).

A principal version with no recorded manifest is skipped: if we can't see its declared deps we can't be sure the rollback is safe.

Parameters:

Name Type Description Default
principal_current str

The currently installed principal version.

required
principal_info PackageInfo

Release timestamps for the principal package.

required
principal_manifests dict[str, VersionManifest]

Map of version_string -> VersionManifest for the principal's candidate versions.

required
transitive_name str

Name of the transitive dep we're trying to pin.

required
transitive_safe SafeVersion

The safe version we want to pin the transitive to.

required
range_satisfies Callable[[str, str], bool]

Ecosystem-specific range check.

required
config ChillOutConfig

Cooldown configuration.

required
parser VersionParser

Ecosystem-specific version parser.

required
now DateTime | None

Override for "now" (used in tests).

None

Returns:

Type Description
SafeVersion | None

The newest acceptable principal version, or None if no candidate works.

find_safe_version

find_safe_version(current: str, info: PackageInfo, config: ChillOutConfig, parser: VersionParser, now: DateTime | None = None) -> SafeVersion | None

Return the newest released version strictly older than current that has cleared its own cooldown window.

Pre-releases are skipped. Versions the parser can't make sense of are ignored so a single oddball release in the registry never blocks the rest of the search. Yanked releases are skipped too: chill-out is in the business of recommending versions to install, and a yanked release is one the maintainer has actively withdrawn.

is_within_cooldown

is_within_cooldown(published: DateTime, rel_type: ReleaseType, config: ChillOutConfig, now: DateTime | None = None) -> tuple[bool, int, int]

Determine whether a release is still inside its cooldown window.

Returns:

Type Description
tuple[bool, int, int]

A tuple of (violating, age_days, limit_days).
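The tuple shape can be sketched directly (the threshold map here is illustrative; the real function reads it from ChillOutConfig):

```python
from datetime import datetime, timedelta, timezone

COOLDOWN_DAYS = {"major": 30, "minor": 14, "patch": 7, "default": 7}  # illustrative


def is_within_cooldown(published: datetime, rel_type: str,
                       now: datetime) -> tuple[bool, int, int]:
    """Return (violating, age_days, limit_days) for one release."""
    age_days = (now - published).days
    limit_days = COOLDOWN_DAYS.get(rel_type, COOLDOWN_DAYS["default"])
    return age_days < limit_days, age_days, limit_days


now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(is_within_cooldown(now - timedelta(days=3), "patch", now))   # (True, 3, 7)
print(is_within_cooldown(now - timedelta(days=40), "major", now))  # (False, 40, 30)
```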

release_type

release_type(version: str, parser: VersionParser) -> ReleaseType

Classify a version string as a major / minor / patch release.

A version the parser can't make sense of falls through to ReleaseType.DEFAULT so callers always have a threshold to fall back on.
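One plausible reading of the classification, assumed here rather than confirmed by the source, is that a release counts as major when its minor and micro segments are both zero, minor when only micro is zero, and patch otherwise:

```python
def release_type(major: int, minor: int, micro: int) -> str:
    # Assumed heuristic (not verified against the source):
    # 2.0.0 -> major, 2.1.0 -> minor, 2.1.3 -> patch.
    if minor == 0 and micro == 0:
        return "major"
    if micro == 0:
        return "minor"
    return "patch"


print(release_type(2, 0, 0), release_type(2, 1, 0), release_type(2, 1, 3))
# major minor patch
```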

Version parsing

chill_out.ecosystems.version_parsing

Ecosystem-agnostic version parsing types for the cooldown engine.

The cooldown logic in chill_out.cooldown needs to do four things with a version string: classify it as major / minor / patch, compare it to other versions, recognize pre-releases, and round-trip back to its original string form for fix planning. None of that requires the engine to know whether the version came from semver, PEP 440, or something else, so each ecosystem provides its own parser and the engine works through the small common interface declared here.

ParsedVersion carries exactly the four pieces of information the engine needs. VersionParser is the structural type the ecosystem backends' parse_version methods conform to; pass ecosystem.parse_version wherever a VersionParser is expected.

The concrete implementations live on each ecosystem backend:

  • chill_out.ecosystems.backend.Ecosystem.parse_version — the abstract contract.
  • chill_out.ecosystems.npm.backend.NpmEcosystem.parse_version — semver via node-semver.
  • chill_out.ecosystems.pypi.backend.PypiEcosystem.parse_version — PEP 440 via packaging.

VersionParser module-attribute

VersionParser = Callable[[str], ParsedVersion | None]

A function that parses a version string, returning None for inputs the ecosystem can't make sense of. The cooldown engine treats None as "skip this candidate" rather than raising, so a single weird release never blocks the rest of the search.

ParsedVersion dataclass

Engine-friendly view of a parsed version.

Two parsed versions compare via their sort_key, so each ecosystem decides what "newer than" means for its own version flavor (semver does one thing with pre-releases, PEP 440 does another with post-releases and epochs, and so on). The original string is preserved verbatim so safe versions round-trip back into manifests and lockfiles in the form the registry actually publishes.

is_prerelease instance-attribute
is_prerelease: bool

True for any non-final release (alpha / beta / rc / dev). Pre-releases are excluded from rollback candidates because betas aren't safer than the version we're trying to replace.

major instance-attribute
major: int

First numeric segment of the release. Used by release_type.

micro instance-attribute
micro: int

Third numeric segment of the release. Called patch in semver-speak. Used by release_type.

minor instance-attribute
minor: int

Second numeric segment of the release. Used by release_type.

original instance-attribute
original: str

The version string as it came from the registry. Used verbatim by downstream callers so we never accidentally rename a version on the way back out.

sort_key class-attribute instance-attribute
sort_key: tuple[Any, ...] = field(compare=False, repr=False)

Opaque ordering tuple. Built by the ecosystem's parser; the engine only ever passes this to < / max.

Models

chill_out.models

Shared dataclasses representing packages, registry data, violations, and fix actions.

AppliedFix dataclass

A single fix that was successfully written into the project's manifest.

Pairs the original FixAction with the literal value that landed in the manifest. The pinned_spec may differ from action.version for FixStyle.COMPATIBLE (e.g. "^1.2.3" for npm or "foo>=1.0,<2.0.0" for pypi); recording the literal value is what makes drift detection on the next fix run possible. manifest_path records where the entry landed, relative to the ecosystem's project root, so the next run can revisit the exact same site.

AppliedFixes dataclass

Structured outcome of an apply_fixes or apply_override_fixes call.

entries holds one AppliedFix per action that was actually written, in the order they were applied. log is the human-readable list of changes intended for CLI output, preserving the same shape every ecosystem produced before structured outputs existed.

AuditReport dataclass

Aggregated outcome of an audit run.

entries is in the same order as the state file's managed_pins, so the report's table mirrors the user's mental model of the file. The bucket properties slice the same data three ways for the renderer.

has_actionable property
has_actionable: bool

True when any pin can be retired (stale or yanked).

AuditedPin dataclass

One managed pin paired with the freshly fetched status of the version it's avoiding.

chill-out audit builds one of these per entry in state.managed_pins, queries the registry for the avoided release's current state, and slots each pin into one of four buckets:

  • AuditStatus.FRESH -- still in cooldown; the pin is earning its keep.
  • AuditStatus.STALE -- the avoided release has cleared its cooldown; the pin is no longer needed.
  • AuditStatus.YANKED -- the registry pulled the avoided release outright; the pin is no longer needed, with extra confidence.
  • AuditStatus.UNKNOWN -- the registry skipped the package or no longer carries the version; surfaced so the user can decide whether to retire the pin manually.

current_age_days is the age of the avoided release at audit time. None for UNKNOWN entries where the publish date isn't available. cooldown_days is the threshold that applied when the pin was created and is replayed here for context.

detail class-attribute instance-attribute
detail: str | None = None

Human-readable extra context for UNKNOWN and YANKED entries.

For UNKNOWN, this carries the registry's skip reason. For YANKED, it carries any registry-provided yank reason if one is later plumbed through; today the field is set to None and reserved for future use.

CheckReport dataclass

Aggregated outcome of a check run.

FixAction dataclass

A single change to apply when running chill-out fix.

Both direct and transitive violations land in the same shape: a pin of package to version written to the project's primary manifest (project.dependencies for pypi, dependencies for npm). Transitive pins ride along as direct entries; the ecosystem resolver hoists them.

style controls how the new constraint is rendered into the manifest. See chill_out.constants.FixStyle for the available choices. Override-style actions (via_overrides=True) are always written as exact pins regardless of style, since the whole point of an override is to dodge a specific just-released version.

When via_overrides is True the pin should be applied via the ecosystem's "force every transitive copy" mechanism instead of a direct dependency entry. The runner sets this for shared transitive violations in workspace contexts where a member-level direct pin cannot dislodge a sibling-shared copy.

FixPlan dataclass

The result of planning fixes for a check report.

InstalledPackage dataclass

A single installed dependency that should be checked against the cooldown rules.

groups class-attribute instance-attribute
groups: tuple[DependencyGroup, ...] = ()

Semantic groups this installation belongs to.

For principals (via_chain empty), this is the set of declaration sections the package appears in (a package can be listed in more than one section, e.g. both dependencies and peerDependencies). For transitives, this is the union of the groups of every top-level dependency that pulls the install into the tree, matching the "included if reachable through any included group" semantic the runner uses to decide which packages to check.

Empty tuple means the ecosystem backend didn't attribute the install to any group (treated as "unknown" -- always included).

is_shared property
is_shared: bool

True when more than one workspace member pulls this installation in.

member_owners class-attribute instance-attribute
member_owners: tuple[str, ...] = ()

Names of workspace members whose dependency subtree includes this installation.

Empty tuple in single-project (non-workspace) mode. In a workspace, this lists every member that pulls the package in (directly or transitively). More than one entry means the version is shared across siblings -- a direct pin in any single member's manifest may not dislodge it because the others still need it.

via property
via: str | None

The principal dependency at the top of the chain, if this is a transitive dep.

via_chain class-attribute instance-attribute
via_chain: tuple[str, ...] = ()

Reverse path from this package up to the principal dependency that pulled it in.

Empty tuple means the package is a principal (declared directly in pyproject/package.json). The first element is the immediate parent, the last is the principal.

PackageInfo dataclass

All releases known for a package, keyed by version string.

published_at
published_at(version: str) -> pendulum.DateTime | None

Return the publish timestamp for the given version, if known.

PackageRelease dataclass

A single released version of a package, with its publish timestamp.

yanked reflects the registry's withdraw signal: a yanked PyPI release (every artifact marked yanked) or an npm version that's been unpublished (present in the time map but missing from versions). Yanked releases still appear in the registry response so historical resolves keep working, but chill-out treats them as unsafe upgrade targets.
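For npm, the "present in the time map but missing from versions" signal can be sketched as a set difference (npm_yanked_versions is an illustrative name; the "created"/"modified" entries are real metadata keys in the registry's time object):

```python
def npm_yanked_versions(time_map: dict, versions: dict) -> set[str]:
    """Versions with a publish timestamp but no surviving version entry.

    npm keeps unpublished versions in the registry's `time` map while
    dropping them from `versions`; the difference is the withdraw signal.
    """
    non_version_keys = {"created", "modified"}  # metadata entries in `time`
    return {v for v in time_map
            if v not in non_version_keys and v not in versions}


time_map = {"created": "2020-01-01", "modified": "2024-01-01",
            "1.0.0": "2020-01-01", "1.0.1": "2023-12-31"}
versions = {"1.0.0": {}}
print(npm_yanked_versions(time_map, versions))  # {'1.0.1'}
```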

SafeVersion dataclass

A version older than the installed one that has cleared its cooldown window.

SkipReason dataclass

A package that the check could not evaluate, paired with the reason it was skipped.

Skips happen when the registry has no record of the package, when the registry call itself fails, or when the installed version has no recorded publish date. The reason is a human-readable explanation suitable for surfacing in CLI output.

UnfixableViolation dataclass

A violation that chill-out fix could not auto-resolve.

Surfaces the structured reason so the CLI can print actionable guidance instead of silently dropping the violation.

VersionManifest dataclass

The dependency declarations for a single (name, version) pair.

deps maps each declared dependency name to its raw range spec, in the native format of the ecosystem (e.g. "^2.0.0" for npm or ">=2.5,<3.0" for PyPI).

Violation dataclass

A package whose installed version has not cleared its cooldown window.

is_shared property
is_shared: bool

True when the underlying installation is shared across workspace members.

member_owners property
member_owners: tuple[str, ...]

Workspace members that pull this installation in (empty for non-workspace projects).

WorkspaceTopology dataclass

Layout of a multi-member workspace.

root is the directory that owns the lockfile and is the right place to apply tree-wide overrides. members maps each member's declared package name to its directory.

Runner

chill_out.runner

Top-level orchestration for the chill-out check workflow.

Combines an Ecosystem backend with the cooldown logic to produce a CheckReport and, optionally, a list of FixAction.

CleanupReport dataclass

Outcome of cleanup_managed_pins.

Each list holds the ManagedPin records the runner attempted to remove during cleanup, grouped by the result the ecosystem returned. removed entries were successfully cleaned out of the manifest. drifted entries are still present in the manifest but their value differs from what chill-out wrote, so the ecosystem left them alone and the runner has dropped them from state. orphan entries were no longer in the manifest at all and have also been dropped from state.

audit_async async

audit_async(state: ChillOutState, ecosystem: Ecosystem, *, config: ChillOutConfig | None = None, concurrency: int = DEFAULT_CONCURRENCY, http: AsyncClient | None = None, now: DateTime | None = None) -> AuditReport

Audit every managed pin in state against the live registry.

Builds one AuditedPin per entry in state.managed_pins, preserving the state file's order so the resulting report mirrors the user's mental model of the file. The lookup is read-only; nothing on disk is touched. The caller decides what to do with the result -- typically print a summary table and exit with a status code.

Owns the httpx.AsyncClient only when one isn't supplied, mirroring check_async's ownership rules.

audit_one async

audit_one(pin: ManagedPin, client: RegistryClient, config: ChillOutConfig, semaphore: Semaphore, *, now: DateTime) -> AuditedPin

Look up the avoided release's current state for one managed pin.

The audit is a read-only lookup: it asks the registry whether the release the pin is dodging has cleared its cooldown window or been pulled outright, and slots the result into one of the four AuditStatus buckets. The current config drives the cooldown threshold so the verdict matches what chill-out fix --cleanup would do on its next run.

build_managed_pins

build_managed_pins(applied: AppliedFixes, violations: Iterable[Violation], config: ChillOutConfig, *, now: DateTime | None = None) -> list[ManagedPin]

Build the ManagedPin records that should be saved into state for one fix run.

Pairs every AppliedFix from applied.entries with the Violation that motivated it (matched by package name) so the resulting AvoidingRelease snapshot captures why the pin exists. Pins for which no matching violation can be found are skipped, since chill-out has no avoiding-metadata to attach.

config is consulted to derive the cooldown window for the violation's release type so the snapshot reflects the policy in force at the time the pin was written.

candidate_principal_versions

candidate_principal_versions(info: PackageInfo, installed_version: str, config: ChillOutConfig, parser: VersionParser, now: DateTime) -> list[str]

Pick the set of principal versions worth fetching manifests for.

Applies a strict subset of find_safe_principal_version's candidate filter without fetching any manifests; manifests only need to be fetched for the candidates that survive the cooldown filter.

check

check(root: Path, *, ecosystem_kind: EcosystemKind | None = None, config: ChillOutConfig | None = None, fast: bool = False, concurrency: int = DEFAULT_CONCURRENCY) -> CheckReport

Synchronous convenience wrapper around check_async.

Auto-detects the ecosystem from root unless ecosystem_kind is given.

check_async async

check_async(ecosystem: Ecosystem, *, config: ChillOutConfig | None = None, fast: bool = False, concurrency: int = DEFAULT_CONCURRENCY, http: AsyncClient | None = None, now: DateTime | None = None, on_start: Callable[[list[InstalledPackage]], None] | None = None, on_progress: Callable[[InstalledPackage], None] | None = None) -> CheckReport

Run the full cooldown check for the given ecosystem.

Every package recorded in the project's lockfile is audited, principals and transitives alike. The lockfile is the source of truth for what the ecosystem will actually install; anything declared in the project's primary manifest but not yet locked is out of scope by design.

Parameters:

  • ecosystem (Ecosystem): The detected or selected ecosystem backend. Required.
  • config (ChillOutConfig | None): Cooldown configuration. If omitted, it is loaded from the ecosystem's project root. Default: None.
  • fast (bool): If True, skip the safe-version lookup for faster runs. Default: False.
  • concurrency (int): Maximum simultaneous registry requests. Default: DEFAULT_CONCURRENCY.
  • http (AsyncClient | None): Optional pre-configured HTTP client (mostly useful for testing). Default: None.
  • now (DateTime | None): Override the "now" timestamp used when comparing ages (testing). Default: None.
  • on_start (Callable[[list[InstalledPackage]], None] | None): Optional callback fired once with the full list of packages about to be checked. Use it to size a progress bar. Default: None.
  • on_progress (Callable[[InstalledPackage], None] | None): Optional callback fired once per package after it has been evaluated. Use it to advance a progress bar. Default: None.

check_one async

check_one(pkg: InstalledPackage, client: RegistryClient, config: ChillOutConfig, semaphore: Semaphore, *, fast: bool, parser: VersionParser, now: DateTime, on_complete: Callable[[InstalledPackage], None] | None = None) -> Violation | SkipReason | None

Fetch and evaluate a single package.

Returns:

  Violation | SkipReason | None: a Violation if the package is within its cooldown window, a SkipReason if the package could not be evaluated, or None if it has cleared cooldown.

The on_complete callback fires once the package has been evaluated, regardless of outcome. Useful for wiring up progress reporting without coupling the runner to a particular UI library.

cleanup_managed_pins

cleanup_managed_pins(eco: Ecosystem, state: ChillOutState) -> CleanupReport

Walk every pin in state.managed_pins and try to remove it from the project's manifests.

Mutates state.managed_pins in place: every entry is dropped regardless of outcome (REMOVED, DRIFTED, and ORPHAN all leave nothing for chill-out to track going forward). The returned CleanupReport lets the caller surface drift warnings to the user without re-walking state.

The ecosystem is responsible for the per-pin manifest edit; this function does not regenerate any lockfile. The caller is expected to trigger lockfile regeneration once after cleanup so the project is in a consistent state before fresh fixes are applied.

dedupe_actions

dedupe_actions(actions: Iterable[FixAction]) -> list[FixAction]

Deduplicate by package name, keeping the smallest version.
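
The rule is small enough to sketch with the standard library alone. The FixAction stand-in and the dotted-integer comparison below are simplifications, not the real types (the shipped code compares versions via the ecosystem's parser):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FixAction:  # simplified stand-in for chill_out's FixAction
    package: str
    version: str


def dedupe_actions(actions):
    # Keep one action per package name; when two actions target the same
    # package, the smaller (more conservative) version wins.
    best = {}
    for action in actions:
        key = tuple(int(part) for part in action.version.split("."))
        current = best.get(action.package)
        if current is None or key < current[0]:
            best[action.package] = (key, action)
    return [action for _, action in best.values()]


actions = [FixAction("lodash", "4.17.21"), FixAction("lodash", "4.17.20")]
print([a.version for a in dedupe_actions(actions)])  # ['4.17.20']
```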

filter_by_groups

filter_by_groups(packages: list[InstalledPackage], config: ChillOutConfig) -> list[InstalledPackage]

Drop installed packages whose semantic groups don't intersect include_groups.

A package with an empty groups tuple is treated as "unknown origin" and always kept; this preserves the historical behavior for ecosystem backends or test fixtures that don't attribute groups. Packages with at least one group are kept only when at least one of their groups is in the configured set.
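
A minimal sketch of that rule, using plain dicts in place of InstalledPackage:

```python
def filter_by_groups(packages, include_groups):
    # Empty groups tuple == "unknown origin": always kept. Otherwise the
    # package survives only if at least one of its groups is included.
    return [
        pkg for pkg in packages
        if not pkg["groups"] or set(pkg["groups"]) & include_groups
    ]


packages = [
    {"name": "requests", "groups": ("main",)},
    {"name": "pytest", "groups": ("dev",)},
    {"name": "mystery", "groups": ()},  # no attribution: kept
]
kept = filter_by_groups(packages, {"main"})
print([p["name"] for p in kept])  # ['requests', 'mystery']
```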

plan_fixes

plan_fixes(report: CheckReport, *, fix_style: FixStyle = FixStyle.EXACT) -> FixPlan

Build a basic fix plan from a report, without principal range checking.

Each violation with a known safe version becomes a single FixAction that pins the package directly in the project's primary manifest. Transitive violations get pinned as direct deps too, so the resolver hoists them and they win over the principal's declared range. Violations with no known safe version land in FixPlan.unfixable so the caller can report them.

The fix_style parameter controls how each pin is rendered into the manifest. See chill_out.constants.FixStyle.

For the smarter version that range-checks transitive pins against the installed principal and rolls the principal back when the declared range can't admit the safe transitive, use plan_fixes_async.

plan_fixes_async async

plan_fixes_async(report: CheckReport, ecosystem: Ecosystem, *, config: ChillOutConfig | None = None, http: AsyncClient | None = None, now: DateTime | None = None) -> FixPlan

Build a fix plan with conflict-aware principal rollback.

Every violation gets pinned as a direct dependency in the project's primary manifest. For transitive violations the runner walks every ancestor in the chain (immediate parent up through the principal) and checks whether any of them declares a range for the violating package that excludes the safe version. The flow is:

  1. Direct violation: pin the safe version. Done.
  2. Transitive violation, no ancestor range conflicts: pin the safe version directly. The resolver hoists the direct pin and every ancestor stays where it is.
  3. Transitive violation, an ancestor range conflicts: search for an older principal version (out of cooldown, non-prerelease) whose declared range does admit the safe transitive. If found, emit pins for both the principal rollback and the transitive. If no compatible older principal exists, record the violation in unfixable with a structured reason so the caller can show the user their options (downgrade the principal manually, raise the safe target, or wait out the cooldown).

The principal is the only level that always gets rolled back because it's the only ancestor declared in the project's own manifest. Rolling it back changes which intermediate versions the resolver picks, which can clear conflicts deeper in the chain.
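
The three cases above can be sketched synchronously. Everything here is a hypothetical stand-in: `admits` models range_satisfies, `find_rollback` models the older-principal search, and the dict shapes are illustrative only:

```python
def plan_one(violation, safe_version, ancestor_ranges, admits, find_rollback):
    # ancestor_ranges: the declared range for the violating package at
    # each ancestor level (immediate parent up through the principal).
    if not violation["via_chain"]:
        return {"pin": safe_version}                    # case 1: direct
    if all(admits(safe_version, r) for r in ancestor_ranges):
        return {"pin": safe_version}                    # case 2: no conflict
    rollback = find_rollback(violation)                 # case 3: conflict
    if rollback is not None:
        return {"pin": safe_version, "principal_rollback": rollback}
    return {"unfixable": violation["package"]}
```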

Ecosystems

chill_out.ecosystems.backend

Protocol for ecosystem backends.

Ecosystem

Bases: Protocol

Pluggable backend for one package ecosystem (npm, pypi, ...).

The Protocol is structural, so a backend only has to expose the right methods to satisfy it; chill-out's own backends inherit explicitly so type checkers flag any drift at the class definition rather than at a call site. Each backend owns a root directory (the project being audited) and advertises its kind, then implements every method below. Project detection lives on a separate EcosystemDetector so the registry can ask "which ecosystem applies?" without having to construct an instance first.

Backends also speak directly to their registry: fetch_package and fetch_version_manifest are async methods that take an httpx.AsyncClient per call so the caller (typically a RegistryClient) owns the session and the cache.

apply_fixes
apply_fixes(actions: list[FixAction]) -> AppliedFixes

Apply the given fix actions to the project.

Returns an AppliedFixes carrying one AppliedFix per entry actually written to the project's manifests, plus a list of human-readable log lines describing the changes for the CLI to surface. The per-entry records capture the literal pinned_spec written to disk so the next run can detect whether the user has since edited it (drift) and clean stale entries up before planning fresh fixes.

apply_override_fixes
apply_override_fixes(actions: list[FixAction]) -> AppliedFixes | None

Apply fixes via the ecosystem's override mechanism.

Used as a fallback when a normal direct pin doesn't dislodge a violating version (typically because it stays hoisted at a parent level the direct pin can't reach). The exact mechanism varies by ecosystem (see supports_overrides for a survey); the contract here is the same regardless of which one a backend reaches for.

Returns an AppliedFixes carrying one entry per override actually written plus human-readable log lines, or None when the ecosystem doesn't support an override mechanism.

fetch_package async
fetch_package(name: str, http: AsyncClient) -> PackageInfo | None

Return all release info for name, or None if it cannot be retrieved.

Caching and in-flight dedupe live one layer up in RegistryClient; backends do neither and just translate one HTTP call into a PackageInfo.

fetch_version_manifest async
fetch_version_manifest(name: str, version: str, http: AsyncClient) -> VersionManifest | None

Return the dependency declarations for a single (name, version) pair.

Used by principal-rollback to discover which transitive ranges a candidate principal version declares. Returns None if the manifest cannot be retrieved.

load_installed
load_installed() -> list[InstalledPackage]

Enumerate every package in the project's lockfile.

Returns the full resolved dependency set, principals and transitives alike. Each InstalledPackage carries enough context (via_chain, groups, member_owners) for downstream filters and fix planning to tell them apart.

parse_version
parse_version(version: str) -> ParsedVersion | None

Parse a version string the way this ecosystem parses one.

Returns None for inputs that don't fit the ecosystem's version grammar; the cooldown engine treats that as "skip this candidate" rather than raising, so a single weird release never blocks the rest of the search. The returned ParsedVersion carries everything the engine needs (release segments, pre-release flag, and an opaque sort key) without the engine having to know which flavor of version it's looking at.

range_satisfies
range_satisfies(version: str, range_spec: str) -> bool

Return True if version satisfies the ecosystem-specific range_spec.

Used by principal-rollback to test whether a candidate principal's declared range admits the safe transitive version.

regenerate_lockfile
regenerate_lockfile() -> str

Recompute the project's lockfile from its current manifests.

Used by the fix workflow after a cleanup pass that removed stale managed pins but did not apply any fresh fixes (the apply step regenerates the lockfile on its own when it runs). Returns a short human-readable line describing the action taken so the CLI can surface it in its log output.

Implementations should raise EcosystemError if regeneration fails.

remove_managed_pin
remove_managed_pin(pin: ManagedPin) -> RemovalOutcome

Try to undo a previously-applied managed pin from this project's manifests.

Used by the fix workflow to clean stale pins before computing a new round of fixes, so cooldowns that have elapsed in the meantime do not leave their pins behind.

Implementations look up pin.package at the site recorded in pin.manifest_path (interpreted relative to self.root) using the appropriate mechanism for pin.mechanism, and:

  • Return RemovalOutcome.REMOVED if the entry is still present and matches the recorded pin.pinned_spec. The entry is deleted in place.
  • Return RemovalOutcome.DRIFTED if the entry is present but its value differs from the recorded value. Implementations leave the entry untouched; the caller is expected to drop the pin from state and warn the user.
  • Return RemovalOutcome.ORPHAN if the entry is no longer present at all. Implementations leave the manifest alone; the caller drops the pin silently.

Implementations must not trigger lockfile regeneration themselves; the runner orchestrates that step once after the full batch of removals.
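
The three outcomes can be sketched against a plain dict standing in for one manifest block (e.g. a dependencies table):

```python
from enum import Enum


class RemovalOutcome(Enum):  # stand-in mirroring the documented enum
    REMOVED = "removed"
    DRIFTED = "drifted"
    ORPHAN = "orphan"


def remove_pin(entries: dict, package: str, pinned_spec: str) -> RemovalOutcome:
    current = entries.get(package)
    if current is None:
        return RemovalOutcome.ORPHAN      # entry gone: caller drops pin silently
    if current != pinned_spec:
        return RemovalOutcome.DRIFTED     # user edited it: leave untouched, warn
    del entries[package]                  # still ours: delete in place
    return RemovalOutcome.REMOVED
```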

supports_overrides
supports_overrides() -> bool

Return True if this ecosystem implements an override mechanism.

Most package managers expose some flavor of "force one resolution everywhere regardless of who declared it" knob (npm overrides, yarn resolutions, pnpm pnpm.overrides, uv override-dependencies, cargo [patch], go replace, maven dependencyManagement, gradle resolutionStrategy.force). A handful of others, notably bundler and composer, don't, so backends for those ecosystems return False here and the runner falls back to plain direct pins.

workspace_topology
workspace_topology() -> WorkspaceTopology | None

Detect a multi-member workspace and return its layout.

Returns None for standalone (single-root) projects or when no workspace declaration is present.

chill_out.ecosystems.npm.backend

npm ecosystem backend.

Reads installed packages from npm list --json and from package-lock.json for transitive resolution. Talks to the npm registry. Applies fixes by editing the root package.json to pin every safe version (direct or promoted-from-transitive) into dependencies, then re-running npm install.

NpmEcosystem

Bases: Ecosystem

Ecosystem backend for npm projects.

apply_fixes
apply_fixes(actions: list[FixAction]) -> AppliedFixes

Apply pins. Routes via_overrides actions through apply_override_fixes.

Splits the incoming actions into two groups based on the via_overrides flag, which the planner sets for shared transitive violations in workspace contexts. Direct pins land in self.root's package.json dependencies; override pins go through the workspace-root override path. Both groups trigger their own npm install, in this order: write direct pins first, then npm install from the member, then write overrides at the workspace root, then npm install from there.

When the override path returns None (the planner tagged via_overrides but no workspace root could be located), the action falls back to a direct pin so it isn't silently lost. That fallback is genuinely defensive: with the current planner and the two shipping ecosystems, the override path always resolves when the planner asked for it.

apply_override_fixes
apply_override_fixes(actions: list[FixAction]) -> AppliedFixes | None

Force transitive versions via npm's overrides field.

Direct pins in dependencies only affect what the project's own code resolves to. When a violating version is hoisted at the workspace-root node_modules (where a different consumer in the tree pulled it in), a direct pin in a workspace-member's package.json can leave that root copy untouched. overrides is npm's blessed mechanism for forcing one resolution everywhere regardless of who declared it.

Overrides must live in the workspace root's package.json to apply tree-wide, so this writes to the directory that owns the lockfile rather than self.root (which may be a workspace member). When that root manifest doesn't exist, return None so the caller can fall back to direct pinning.

fetch_package async
fetch_package(name: str, http: AsyncClient) -> PackageInfo | None

Fetch all release timestamps for a package from the npm registry.

Returns None if the package is missing (404). Raises RegistryError on transport failures, non-2xx responses other than 404, non-JSON bodies, or any drift in the response shape that fails Pydantic validation.

fetch_version_manifest async
fetch_version_manifest(name: str, version: str, http: AsyncClient) -> VersionManifest | None

Fetch dependency declarations for {name}@{version} from the npm registry.

Returns None for 404 responses. Merges dependencies and peerDependencies into a single deps map; npm treats peer deps as runtime constraints just like regular deps for resolution purposes, so the cooldown engine should see both when checking whether a safe transitive can be hoisted.

load_installed
load_installed() -> list[InstalledPackage]

Load the full dependency tree from npm list.

The returned list contains every package npm reports as installed, principals (top-level installs) and transitives alike. Each InstalledPackage is keyed by (name, version) because npm routinely installs multiple copies of the same package at different versions in different branches of node_modules; each copy actually loads at runtime for whichever code requires it, so we report them independently.

The work happens in three phases that mirror the pypi backend's shape:

  1. Run npm list to materialize the dependency tree, optionally also at the workspace root for cross-member ownership attribution.
  2. Read the project's own package.json to learn which top-level names belong to which semantic group.
  3. Walk the tree once per top-level entry to attribute every reachable (name, version) to its principal's groups, then a second walk to assemble the actual InstalledPackage records with their via_chain and ownership metadata.

parse_version
parse_version(version: str) -> ParsedVersion | None

Parse a version string with strict semver semantics.

npm publishes its registry data in semver form (MAJOR.MINOR.PATCH with optional -prerelease and +build), so anything that doesn't fit that grammar gets None. The cooldown engine treats None as "skip this candidate" rather than raising, so a non-semver oddity like a date-tagged version doesn't block the rest of the search.

The returned ParsedVersion carries the original string verbatim so safe versions round-trip back through fix actions in the exact form the registry published.

The sort key wraps the parsed semver.Version itself in a single-element tuple. semver.Version already compares the way npm expects (pre-releases sort before their final version, build metadata is ignored), so we don't need anything custom here.
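
A stdlib-only illustration of that ordering (the real backend delegates to the semver library; this sketch compares pre-release tags as plain strings rather than identifier-by-identifier, which real semver requires):

```python
import re

SEMVER = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?(?:\+[0-9A-Za-z.-]+)?$"
)


def sort_key(version: str):
    m = SEMVER.match(version)
    if m is None:
        return None  # non-semver oddity: the engine skips the candidate
    major, minor, patch, pre = m.groups()
    # A pre-release sorts before its final version (False < True), and
    # build metadata never makes it into the key at all.
    return (int(major), int(minor), int(patch), pre is None, pre or "")


print(sorted(["1.2.3", "1.2.3-rc.1", "1.2.2"], key=sort_key))
# ['1.2.2', '1.2.3-rc.1', '1.2.3']
```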

range_satisfies
range_satisfies(version: str, range_spec: str) -> bool

Check whether version satisfies an npm semver range_spec.

Shells out to node -e "require('semver').satisfies(...)". If node or the semver package isn't available, conservatively returns True (the original script's "assume compatible" fallback for transitive deps with no discoverable range).

regenerate_lockfile
regenerate_lockfile() -> str

Recompute package-lock.json by running npm install from the project root.

remove_managed_pin
remove_managed_pin(pin: ManagedPin) -> RemovalOutcome

Reverse a previously-applied managed pin from the project's package.json.

For PinMechanism.DIRECT this removes the entry from dependencies (and the parallel devDependencies, optionalDependencies, peerDependencies blocks if the pin landed there). For PinMechanism.OVERRIDE this removes the entry from overrides at the recorded manifest path.

See Ecosystem.remove_managed_pin for outcome semantics.

workspace_topology
workspace_topology() -> WorkspaceTopology | None

Detect an npm workspace by reading the lockfile-rooted package.json.

Walks up to find the workspace root (the directory that owns the lockfile, which may be self.root itself or an ancestor for a member project). If the root's package.json declares a workspaces field, expand the globs against the root directory and read each member's name from its own package.json.

Returns None when there's no lockfile, no workspaces field, or none of the globs resolve to a directory with a readable package.json.

The work is split into _locate_workspace_root, which finds the directory that owns the lockfile and reads its package.json, and _discover_workspace_members, which expands the glob patterns and assembles the name -> directory map. Both helpers are static so they can be exercised independently of a live NpmEcosystem instance.
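
A rough sketch of the _discover_workspace_members half, assuming only what the description above states (workspaces globs expanded against the root, member names read from each member's package.json):

```python
import json
import tempfile
from pathlib import Path


def discover_members(root: Path, patterns: list[str]) -> dict[str, Path]:
    # Expand each "workspaces" glob against the workspace root; keep only
    # directories with a readable, named package.json.
    members: dict[str, Path] = {}
    for pattern in patterns:
        for candidate in sorted(root.glob(pattern)):
            manifest = candidate / "package.json"
            if not manifest.is_file():
                continue
            name = json.loads(manifest.read_text()).get("name")
            if name:
                members[name] = candidate
    return members


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "packages" / "api").mkdir(parents=True)
    (root / "packages" / "api" / "package.json").write_text('{"name": "@acme/api"}')
    print(list(discover_members(root, ["packages/*"])))  # ['@acme/api']
```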

chill_out.ecosystems.pypi.backend

PyPI ecosystem backend.

Reads installed packages from uv.lock; the lockfile is required and the backend raises EcosystemError if it's missing. pyproject.toml is consulted only to tell principals (direct deps) apart from transitives and to attribute each package to its dependency groups. Talks to the PyPI JSON API for release timestamps. Applies fixes by editing pyproject.toml to pin versions and re-running uv lock.

PypiEcosystem

Bases: Ecosystem

Ecosystem backend for Python projects using uv + pyproject.toml.

apply_fixes
apply_fixes(actions: list[FixAction]) -> AppliedFixes

Apply pins. Routes via_overrides actions through apply_override_fixes.

Direct pins are written into self.root's pyproject.toml and validated with uv lock. Override pins go through the workspace root's [tool.uv].override-dependencies field and trigger a workspace-wide uv lock to recompute the resolution.

apply_override_fixes
apply_override_fixes(actions: list[FixAction]) -> AppliedFixes | None

Force transitive versions via uv's override-dependencies mechanism.

Writes one entry per action to [tool.uv].override-dependencies in the workspace root's pyproject.toml (or self.root when there's no workspace), then runs uv lock from that directory to recompute the workspace-wide resolution. Returns an AppliedFixes on success, or None when no usable workspace root could be located.

fetch_package async
fetch_package(name: str, http: AsyncClient) -> PackageInfo | None

Fetch all releases and their upload timestamps for a package from PyPI.

The PyPI JSON API returns one entry per uploaded artifact; we take the earliest upload time for each version as its publish date. Releases with no surviving uploads (empty artifact list) are dropped from the result. Schema validation guarantees every artifact carries at least one usable timestamp, so any release with at least one artifact will produce a PackageRelease.
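
The earliest-upload rule reduces to a few lines over the per-version artifact lists. The dict shape and "upload_time" field name below are illustrative stand-ins for the API payload:

```python
def earliest_uploads(releases: dict[str, list[dict]]) -> dict[str, str]:
    # One entry per uploaded artifact; the earliest upload time per
    # version becomes its publish date. Versions with no surviving
    # artifacts are dropped. Equal-format ISO-8601 strings compare
    # correctly as plain strings, so min() suffices here.
    return {
        version: min(artifact["upload_time"] for artifact in artifacts)
        for version, artifacts in releases.items()
        if artifacts
    }


releases = {
    "1.0.0": [{"upload_time": "2024-05-02T10:00:00"},
              {"upload_time": "2024-05-01T09:30:00"}],
    "0.9.0": [],  # no surviving uploads: dropped
}
print(earliest_uploads(releases))  # {'1.0.0': '2024-05-01T09:30:00'}
```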

fetch_version_manifest async
fetch_version_manifest(name: str, version: str, http: AsyncClient) -> VersionManifest | None

Fetch the dependency declarations for a single PyPI release.

Pulls info.requires_dist from the per-version JSON endpoint. Markers that gate a requirement on an extra are skipped: those represent optional installs and don't constrain the base resolution.

load_installed
load_installed() -> list[InstalledPackage]

Enumerate every package in uv.lock, principals and transitives alike.

The lockfile is the source of truth for what will actually be installed. Each entry becomes an InstalledPackage with a via_chain computed by reverse-graph BFS from the direct deps declared in pyproject.toml. Direct deps get an empty via_chain (they are principals); transitives get the shortest chain of intermediates back to a principal.

Group attribution follows the same union semantic as npm: forward-walk from each principal and tag every reachable package with that principal's groups. Transitives reached through multiple principals accumulate the union, matching the runner's "included if reachable through any included group" rule.

Requires uv.lock to exist; raises EcosystemError if it's missing (run uv lock to generate one).

parse_version
parse_version(version: str) -> ParsedVersion | None

Parse a version string with PEP 440 semantics.

PEP 440 is a superset of semver for parsing purposes: anything packaging.Version accepts (including 2-segment releases like 3.12, post-releases like 1.0.post1, epochs like 1!2.0, and dev releases) becomes a usable ParsedVersion. Inputs outside that grammar return None; the cooldown engine treats None as "skip this candidate" rather than raising.

Short releases get zero-padded for the major / minor / micro view: 3.12 reports major=3, minor=12, micro=0 so the engine classifies it as a minor release. Versions with more than three release segments truncate to the first three for classification but keep the full release tuple in the sort key, so 1.2.3.4 still sorts after 1.2.3 the way packaging compares it.

The original string is preserved verbatim so safe versions round-trip back through fix actions in the exact form the registry published, even when packaging would canonicalize it differently (e.g. 2.0.0-rc1 -> 2.0.0rc1).

The sort_key wraps packaging.Version in a single-element tuple. Version already implements PEP 440 ordering directly (epochs first, then release, then pre-release, then post-release, then dev-release); the tuple wrapper exists so the ParsedVersion.sort_key contract stays uniform across ecosystems.
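
The padding and truncation rules are easy to illustrate on raw release tuples (a sketch of the classification view only; real parsing goes through packaging.Version):

```python
def classify_view(release: tuple[int, ...]) -> tuple[int, int, int]:
    # Zero-pad short releases and truncate long ones so every version
    # exposes a (major, minor, micro) view for release-type classification.
    return (release + (0, 0, 0))[:3]


print(classify_view((3, 12)))       # (3, 12, 0): classified as a minor release
print(classify_view((1, 2, 3, 4)))  # (1, 2, 3): truncated for classification

# The sort key keeps the full release tuple, so ordering is unaffected:
assert (1, 2, 3, 4) > (1, 2, 3)
```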

range_satisfies
range_satisfies(version: str, range_spec: str) -> bool

Return True if version satisfies a PEP 440 range_spec.

An empty or whitespace-only range matches any version (matches packaging's SpecifierSet("") semantics). Unparsable inputs are treated permissively to match the original script's "assume compatible" behavior for transitive deps with no discoverable range.

regenerate_lockfile
regenerate_lockfile() -> str

Recompute uv.lock by running uv lock from the project root.

remove_managed_pin
remove_managed_pin(pin: ManagedPin) -> RemovalOutcome

Reverse a previously-applied managed pin from the project's pyproject.toml.

For PinMechanism.DIRECT this removes the entry from [project.dependencies], [project.optional-dependencies], or [dependency-groups.*] (whichever holds it). For PinMechanism.OVERRIDE this removes the entry from [tool.uv.override-dependencies] at the recorded manifest path.

See Ecosystem.remove_managed_pin for outcome semantics.

workspace_topology
workspace_topology() -> WorkspaceTopology | None

Detect a uv workspace by walking up to find a pyproject.toml with [tool.uv.workspace].

Starts at self.root and walks toward the filesystem root until it finds a pyproject.toml declaring [tool.uv.workspace]. Reads members (glob patterns) and exclude, expands the globs, and returns a WorkspaceTopology keyed by each member's project.name.

Returns None when no workspace declaration is reachable from self.root.

chill_out.ecosystems.registry

Lookup helpers that select the right ecosystem for a project root.

detect_ecosystem

detect_ecosystem(root: Path) -> Ecosystem

Auto-detect which ecosystem backend applies to the given project root.

Raises:

  EcosystemError: If no backend matches, or if multiple backends match (in which case the user should pass the ecosystem explicitly).

get_ecosystem

get_ecosystem(kind: EcosystemKind, root: Path) -> Ecosystem

Instantiate a backend by kind for the given project root.

Exceptions and constants

chill_out.exceptions

Provide exception types for chill-out.

All exception types derived from ChillOutError will, by default, be handled by the @handle_errors decorator on CLI commands.

ChillOutError

Bases: Buzz

Base exception class for all chill-out errors.

exit_code class-attribute instance-attribute
exit_code: ExitCode = GENERAL_ERROR

Exit code used when the error reaches the CLI handler.

subject class-attribute instance-attribute
subject: str | None = None

Subject shown in the user-facing error message.

ConfigError

Bases: ChillOutError

Indicates a problem reading or parsing chill-out configuration.

CooldownViolation

Bases: ChillOutError

Raised at the end of a check run when one or more cooldown violations are found.

EcosystemError

Bases: ChillOutError

Indicates a problem detecting or operating on a project ecosystem.

RegistryError

Bases: ChillOutError

Indicates a problem talking to a package registry.

handle_errors

handle_errors(message: str) -> Callable[[F], F]

Decorate a CLI command to catch errors and exit with a friendly message.

Parameters:

  • message (str): Prefix shown before the underlying error text. Required.

Returns:

  Callable[[F], F]: A decorator that wraps the command function.
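
A minimal sketch of the decorator's contract. The ChillOutError stand-in below only carries the documented exit_code attribute; the real class is Buzz-based and the real decorator may format its message differently:

```python
import functools
import sys


class ChillOutError(Exception):  # stand-in; real base derives from Buzz
    exit_code = 1  # GENERAL_ERROR stand-in


def handle_errors(message: str):
    # Wrap a CLI command so any ChillOutError becomes a friendly line on
    # stderr plus a clean exit with the error's exit code.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except ChillOutError as exc:
                print(f"{message}: {exc}", file=sys.stderr)
                raise SystemExit(exc.exit_code)
        return wrapper
    return decorator


@handle_errors("check failed")
def cmd():
    raise ChillOutError("3 packages are within cooldown")
```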

chill_out.constants

Constants and enums shared across chill-out.

AuditStatus

Bases: AutoNameEnum, LowerCaseMixin

Outcome bucket assigned to each managed pin during a chill-out audit run.

DependencyGroup

Bases: AutoNameEnum, LowerCaseMixin

Semantic dependency group names used uniformly across ecosystems.

EcosystemKind

Bases: AutoNameEnum, LowerCaseMixin

Supported package ecosystems.

ExitCode

Bases: IntEnum

Exit codes returned by the CLI.

FixStyle

Bases: AutoNameEnum, LowerCaseMixin

How chill-out fix writes the new version constraint into the project manifest.

Override-style fixes (via_overrides=True) always pin exactly regardless of this setting; the entire reason an override exists is to pin away from a specific version that just landed.

ReleaseType

Bases: AutoNameEnum, LowerCaseMixin

Classification of a single release used to look up its cooldown threshold.

State

chill_out.state.models

In-memory dataclasses for chill-out's persistent state.

These are the public, Pythonic surface of the state package. The runner builds and mutates them; the CLI inspects them. JSON serialization happens through state.schema's Pydantic models, which load() and save() invoke under the hood.

ChillOutState is intentionally mutable so the runner can append entries as fixes are applied and reset the list after cleanup. The ManagedPin and AvoidingRelease entries inside are frozen dataclasses: once a pin is recorded, its identity is locked.

AvoidingRelease dataclass

Snapshot of the release that triggered a pin, captured for explainability.

Stored alongside each ManagedPin so future readers can see why the pin exists without re-running a check. None of these fields are consulted on cleanup; they are pure metadata.

ChillOutState dataclass

Aggregate of every pin chill-out is currently managing for one project.

Loaded from .chill-out-state.json at the start of a fix run, replaced wholesale at the end. The dataclass itself is mutable so the runner can append entries as they are applied; the ManagedPin entries inside are frozen.

delete
delete(root: Path) -> None

Remove the state file from disk if it exists.

Used when a fix run produces no managed pins, so we do not leave behind an empty file that suggests we are still tracking something.

empty classmethod
empty() -> ChillOutState

Return a fresh, empty state with last_run_at set to now.

load classmethod
load(root: Path) -> ChillOutState

Read the state file at root / STATE_FILENAME.

Returns an empty state when the file is simply absent (the common, expected case for a first run). Every other failure mode halts: chill-out's bookkeeping is too important to silently discard. The file is chill-out's own output, so any read failure points at a bug, a partial write, a permissions problem, or a version mismatch — none of which should be papered over.

Raises:

  • StateFileUnreadableError: The file exists but cannot be read (permissions, I/O, file vanished between is_file() and read_text()).
  • StateFileCorruptError: The file is not valid JSON.
  • StateSchemaVersionError: The file's schema_version is missing or unknown to this chill-out (older binary against newer file, hand-edit, etc.).
  • StateValidationError: The file parses as JSON and carries a known schema_version, but one or more fields are missing, mistyped, or carry unexpected extra keys.
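
The failure-mode split can be sketched with stdlib pieces. The exception classes below are bare stand-ins for the documented types, the returned dict is a stand-in for ChillOutState, and field-level validation (StateValidationError) is omitted:

```python
import json
from pathlib import Path


class StateFileUnreadableError(Exception): pass
class StateFileCorruptError(Exception): pass
class StateSchemaVersionError(Exception): pass

KNOWN_SCHEMA_VERSIONS = {1}


def load_raw(root: Path) -> dict:
    path = root / ".chill-out-state.json"
    if not path.is_file():
        # Absent file is the expected first-run case: empty state.
        return {"schema_version": 1, "managed_pins": []}
    try:
        text = path.read_text()
    except OSError as exc:
        raise StateFileUnreadableError(str(exc)) from exc
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        raise StateFileCorruptError(str(exc)) from exc
    if data.get("schema_version") not in KNOWN_SCHEMA_VERSIONS:
        raise StateSchemaVersionError(repr(data.get("schema_version")))
    return data
```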

save
save(root: Path) -> None

Write the current state to root / STATE_FILENAME.

The output is pretty-printed JSON with a trailing newline so it diffs cleanly under version control. Datetimes are rendered as RFC 3339 / ISO 8601 strings via the Pydantic field serializers in state.schema.
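
A sketch of that serialization contract, with plain json plus isoformat standing in for the Pydantic field serializers:

```python
import datetime
import json


def render_state(state: dict) -> str:
    # Pretty-printed with a trailing newline so the file diffs cleanly
    # under version control; datetimes become ISO 8601 strings.
    def default(obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        raise TypeError(f"unserializable: {type(obj)!r}")
    return json.dumps(state, indent=2, default=default) + "\n"


state = {"schema_version": 1,
         "last_run_at": datetime.datetime(2025, 6, 1, 12, 0, 0)}
print(render_state(state), end="")
```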

ManagedPin dataclass

A single pin or override that chill-out wrote into the project.

manifest_path is recorded relative to the project root so the state file stays portable across checkouts. pinned_spec is the literal string chill-out wrote into the manifest at the entry's value position (e.g. "lodash==4.17.20" or "^4.17.20"). On cleanup the value currently at the site is compared against this one to detect drift.

chill_out.state.schema

Pydantic wire-format schemas for chill-out's state file.

These models describe .chill-out-state.json exactly as it sits on disk. They are an implementation detail of the state package: external callers work with the dataclasses in state.models, not these classes. The dataclass surface stays Pythonic and mutable where the runner needs it, while these models do all the validation, type-coercion, and serialization work at the JSON boundary.

Each wire model carries its own translation pair:

  • from_state(...) is a classmethod that builds the wire model from its dataclass twin.
  • to_state() is an instance method that returns the dataclass twin from a validated model.

save() calls StateV1.from_state(state).model_dump_json(indent=2). load() calls StateV1.model_validate_json(text).to_state().

Datetime fields are typed as plain datetime.datetime so Pydantic's native ISO-8601 parsing and emission do all the work. The pendulum.DateTime flavor lives only in the dataclass layer where the rest of the codebase consumes it; conversion happens in to_state() via pendulum.instance.

Schema versioning lives in schema_version: Literal[1]. If a v2 ever arrives, this module gains a StateV2 model and the public type becomes a discriminated union Annotated[StateV1 | StateV2, Field(discriminator="schema_version")].

Validation failures surface as Pydantic ValidationError; the load path wraps them in StateValidationError so callers see a single typed exception.

AvoidingReleaseV1 pydantic-model

Bases: BaseModel

Wire-format twin of AvoidingRelease.

Config:

  • frozen: True
  • extra: forbid

Fields:

  • version (str)
  • release_type (ReleaseType)
  • published_at (datetime)
  • cooldown_days (int)
from_state classmethod
from_state(avoiding: AvoidingRelease) -> Self

Build a wire model from its dataclass twin.

to_state
to_state() -> AvoidingRelease

Translate this validated wire model back into its dataclass twin.

ManagedPinV1 pydantic-model

Bases: BaseModel

Wire-format twin of ManagedPin.

Config:

  • frozen: True
  • extra: forbid

Fields:

from_state classmethod
from_state(pin: ManagedPin) -> Self

Build a wire model from its dataclass twin.

to_state
to_state() -> ManagedPin

Translate this validated wire model back into its dataclass twin.

StateV1 pydantic-model

Bases: BaseModel

Wire-format root of .chill-out-state.json for schema version 1.

Config:

  • frozen: True
  • extra: forbid

Fields:

from_state classmethod
from_state(state: ChillOutState) -> Self

Build a wire model from the in-memory dataclass for save().

to_state
to_state() -> ChillOutState

Translate this validated wire model into the in-memory dataclass for load().

chill_out.state.constants

Constants and enums for chill-out's persistent state file.

The state file is chill-out's bookkeeping at .chill-out-state.json in the project root. Every field that downstream code looks up (filename, schema version, pin mechanism, removal outcome) lives here so the rest of the state package can stay focused on data shapes and validation.

CURRENT_SCHEMA_VERSION module-attribute

CURRENT_SCHEMA_VERSION = 1

The schema version this version of chill-out writes.

STATE_FILENAME module-attribute

STATE_FILENAME = '.chill-out-state.json'

Name of the state file at the project root.

PinMechanism

Bases: AutoNameEnum, LowerCaseMixin

How a managed pin is realized in the project's manifests.

RemovalOutcome

Bases: AutoNameEnum, LowerCaseMixin

Result of attempting to remove a single managed pin from a manifest.

chill_out.state.exceptions

Exceptions raised when reading or validating chill-out's state file.

The state file is chill-out's own bookkeeping (a record of which pins it has written into the project's manifests). When something goes wrong reading it, chill-out halts rather than silently proceeding: a corrupt or unreadable state file means chill-out cannot tell which pins it owns, and "treating as empty" would orphan those pins in the manifest forever.

Each subclass narrows that failure to a single mode so the CLI and tests can tell them apart without string-matching error messages.

StateError

Bases: ChillOutError

Base class for problems with chill-out's state file.

StateFileCorruptError

Bases: StateError

Raised when the state file is not valid JSON.

StateFileUnreadableError

Bases: StateError

Raised when the state file exists but cannot be read (permissions, I/O error, etc.).

StateSchemaVersionError

Bases: StateError

Raised when the state file's schema_version is not understood by this version of chill-out.

StateValidationError

Bases: StateError

Raised when the state file parses as JSON but doesn't conform to the wire schema.

Rendering

chill_out.render

Rich-based reporting for cooldown check results.

Every render_* function in this module returns a RenderableType rather than printing directly. The CLI does the actual console.print(...) calls. Keeping rendering pure makes each helper independently composable and trivially testable: capture the renderable, print it into a Console(file=StringIO()), inspect the bytes.
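That testing recipe can be sketched as follows, with a hypothetical `render_headline` standing in for any of the real `render_*` helpers:

```python
from io import StringIO

from rich.console import Console
from rich.text import Text


def render_headline(count: int) -> Text:
    # A stand-in renderer: returns a renderable, never prints.
    return Text(f"{count} violation(s) found", style="bold red")


# Capture the renderable by printing it into an in-memory console.
console = Console(file=StringIO(), width=80)
console.print(render_headline(3))
output = console.file.getvalue()
```

Because the helper never touches a real console, the test inspects plain captured text; the CLI owns the single `console.print(...)` call in production.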

format_groups

format_groups(groups: tuple[DependencyGroup, ...]) -> str

Render a compact [group, group] suffix, or empty when nothing to show.

The leading space is included so callers can always concatenate the return value without conditionally inserting separators.
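A minimal sketch of that contract (the real implementation and Rich styling live in `chill_out.render`):

```python
def format_groups(groups: tuple[str, ...]) -> str:
    """Return ' [a, b]' with a leading space, or '' when there is nothing to show."""
    if not groups:
        return ""
    return f" [{', '.join(groups)}]"


# Callers can always concatenate unconditionally:
label = "requests = 2.31.0" + format_groups(("dev", "test"))
# ...and with no groups the suffix simply vanishes:
bare = "requests = 2.31.0" + format_groups(())
```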

format_limit_node

format_limit_node(rel_type: ReleaseType, limit_days: int) -> str

Render a limit-column label: release type plus the threshold in days.

format_package_node

format_package_node(name: str, version: str | None, age_days: int | None = None) -> str

Render a package label for a non-violating tree node.

Shows name = version with an optional age suffix when known. Used for principals and intermediates that aren't themselves violating; the report doesn't carry their ages, so the suffix is normally omitted for those.

format_pin

format_pin(name: str, version: str, age_days: int | None) -> str

Render a 'pin this package to this version' label for the strategy tree.

format_violating_package_node

format_violating_package_node(violation: Violation) -> str

Render the package label for the violating leaf row.

Uses the release-type color on the version and calls out the age vs limit in red so the violation reads at a glance. Includes the package's group membership when known, so the reader can immediately tell whether the violation comes from a main, dev, or optional dependency.

render_audit_report

render_audit_report(report: AuditReport) -> RenderableType

Render the result of an audit run.

Empty state files are reported as a single success line. Otherwise returns a Group with a headline summarizing the bucket counts plus one table per non-empty bucket: stale and yanked first (these are the actionable buckets), fresh next (informational), unknown last (the user must decide). Each table is color-coded by status so the actionable rows pull the eye.

render_fix_style

render_fix_style(config: ChillOutConfig) -> RenderableType

Render the configured fix_style as a single-line label.

render_include_groups

render_include_groups(config: ChillOutConfig) -> RenderableType

Render the configured include_groups as a single-line label.

Empty configurations are rendered explicitly so it's obvious that nothing will be checked, rather than the line being silently dropped.

render_limit_tree

render_limit_tree(violation: Violation, installed_index: dict[str, InstalledPackage], config: ChillOutConfig, parser: VersionParser) -> Tree

Render a parallel tree for the limit column, mirroring the package tree.

Each node shows the release type and threshold of the corresponding package in the chain. The leaf shows the violation's own release type and limit; non-violating ancestors get their values from release_type(version) and the cooldown config.

render_package_tree

render_package_tree(violation: Violation, installed_index: dict[str, InstalledPackage]) -> Tree

Render the dependency chain that pulled the violating package in.

Principal at the root, intermediates as children, the violating package itself as the leaf with its age vs limit called out.

render_report

render_report(report: CheckReport, *, config: ChillOutConfig, parser: VersionParser, fast: bool = False) -> RenderableType

Render a summary of the report.

When there are no violations, returns a single success line (plus a skipped-count tail when relevant). Otherwise returns a Group containing a headline, a violations table, and an optional skipped-count footer. The table has three columns:

  • Package: name = version (<n>d old) for principals, or a tree from principal down to the violating leaf for transitives.
  • Limit: a parallel tree showing each chain member's release type and threshold, so the reader can see why the leaf tripped.
  • Strategy: the explicit fix recipe (omitted under --fast).

render_strategy

render_strategy(violation: Violation) -> RenderableType

Render the recommended fix recipe for a violation.

For a principal violation, the strategy is a single pin of the violating package itself. For a transitive, it's a tree showing the chain from the principal down to the transitive, with the leaf labeled as the explicit pin to apply. When no safe version is known, returns a dim 'none' marker so the column still has something to show.

When the violation is shared across multiple workspace members, the strategy includes an annotation listing the members. That signals to the user (and to the fix planner) that a member-level pin will leave the sibling-shared copy in place and an override is the right move.

This is a display-only summary. The actual fix may also need to roll back the principal when the safe transitive version conflicts with whatever the principal's range admits; that decision lives in the planner and surfaces through plan_fixes_async, not here.

render_thresholds

render_thresholds(config: ChillOutConfig) -> RenderableType

Render a small table of the active cooldown thresholds.