Skills reference

Seven composable skills covering the full deck lifecycle. Each one is a self-contained workflow the agent loads on demand.

Skills are Markdown files that get loaded into the agent's context when invoked. They tell the agent what to do, in what order, what to check, and what mistakes to avoid. Each skill calls the CLI under the hood; it never touches python-pptx directly.

Skills are harness-agnostic. They work with Claude Code, but the Markdown format is portable to any agent framework that supports file-based instructions.

/slides-extract

Reads a PowerPoint template and produces machine-readable contracts for downstream skills.

When to use: Before the first build for a given template, or when the template changes.

Input: A .pptx file (corporate template, client file, or bundled template).

What it does:

  1. Runs slides extract to analyze layouts, placeholders, color zones, and theme palette
  2. Answers a comprehension gate: the agent verifies it understood the accent colors, available archetypes, split-panel layouts, and text colors
  3. Writes a design-profile.json pinning font sizes, allowed colors, template path, and catalog paths

Output:

File                      Purpose
resolved_manifest.json    Primary contract: archetypes, layouts, geometry, color zones
base_template.pptx        Clean template with zero content slides
design-profile.json       Design constraints for lint and QA
template_layout.json      Physical layout families
archetypes.json           Available archetypes with constraints
icons/                    Vector icons extracted from template slides
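
Downstream skills treat these files as hard prerequisites. A minimal consumer sketch (the file names come from the table above; the loader itself and its error message are ours, not part of the skill):

```python
import json
from pathlib import Path

def load_contracts(workdir: str) -> dict:
    """Load the extraction artifacts downstream skills depend on,
    failing fast if any is missing."""
    required = ["resolved_manifest.json", "design-profile.json",
                "template_layout.json", "archetypes.json"]
    contracts = {}
    for name in required:
        path = Path(workdir) / name
        if not path.exists():
            # Hypothetical message; the real skills surface this in prose.
            raise FileNotFoundError(f"run /slides-extract first: missing {name}")
        contracts[name] = json.loads(path.read_text())
    return contracts
```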

/slides-build

Generates a complete deck from a user brief and extracted contracts.

When to use: User wants a new presentation. Requires extraction artifacts to exist.

Input: A brief (audience, objective, recommendation, scope, slide count) plus all extraction artifacts.

What it does:

  1. Captures the brief and locks font sizes from the design profile
  2. Reads resolved_manifest.json and answers a comprehension gate (which layout per archetype, split-panel regions, primary color)
  3. Generates slides.json with a DeckPlan (narrative arc) and OperationBatch (rendering instructions)
  4. Conditionally loads reference files: chart-patterns.md only if the plan includes charts, framework-patterns.md only if it includes frameworks, etc.
  5. Dry-runs the operations to catch errors
  6. Renders to output.pptx
  7. Runs QA and reviews against 25 common mistakes

Output: slides.json, output.pptx, qa.json

Key rules the skill enforces:

  • N slides in the brief = N content slides + title + disclaimer + end slide
  • Action titles must be complete sentences, not topic labels
  • Every data slide needs a source line (9pt, bottom of slide)
  • No white text on light backgrounds
  • Split-panel layouts: the placeholder title is set to a single space (" ") via set_semantic_text; the actual title is placed on the right panel via add_text
  • Speaker notes on exec summary, recommendation, and data-heavy slides
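
The first rule is simple arithmetic: the brief's slide count covers content slides only, and three framing slides are always added. A one-line sketch (the helper name is ours):

```python
def expected_slide_count(brief_slides: int) -> int:
    """Total deck length implied by a brief: N content slides
    plus a title slide, a disclaimer, and an end slide."""
    return brief_slides + 3

# A brief asking for 10 slides yields a 13-slide deck.
assert expected_slide_count(10) == 13
```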

/slides-edit

Makes targeted changes to an existing deck without a full rebuild.

When to use: Fix a typo, update numbers, swap a layout, add or remove slides.

Input: An existing output.pptx, plus optionally slides.json and design-profile.json for context.

What it does:

  1. Inspects the deck to locate targets (by slide UID, shape UID, or text search)
  2. Assesses the scope of the change and risks
  3. Applies changes via one of three methods:
    • Text edits: slides edit --query "old" --replacement "new"
    • Archetype transforms: slides transform --slide-uid <uid> --to timeline
    • Ops patches: slides apply --ops-json @patch.json
  4. Verifies the change landed correctly
  5. Re-runs QA to catch regressions

Output: Modified output.pptx
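
The three edit mechanisms map cleanly onto the kind of change requested. A sketch of that routing (the heuristic and the `change` dict shape are ours; the CLI invocations are the ones the skill documents):

```python
def pick_edit_method(change: dict) -> str:
    """Route a requested change to one of the skill's three mechanisms:
    text replace, archetype transform, or a raw ops patch."""
    if change.get("kind") == "text":
        return 'slides edit --query "{old}" --replacement "{new}"'.format(**change)
    if change.get("kind") == "archetype":
        return "slides transform --slide-uid {uid} --to {target}".format(**change)
    # Anything structural falls through to an ops patch.
    return "slides apply --ops-json @patch.json"

assert pick_edit_method({"kind": "text", "old": "Q3", "new": "Q4"}).startswith("slides edit")
```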

/slides-audit

Technical quality check. Finds font size violations, shape overlaps, contrast issues, missing sources, and layout compliance problems.

When to use: After a build to catch visual issues, before sharing externally, or when a deck "looks off."

Input: output.pptx + design-profile.json

What it does:

  1. Runs four checks in parallel: lint, inspect, validate, and qa
  2. Categorizes findings as auto-fixable (out of bounds, overlap) or judgment calls (font size, color choice)
  3. Checks contrast (white on light, dark on dark) and overlap between shapes
  4. Generates an ops patch for auto-fixable issues
  5. Applies the patch and re-runs QA
  6. Reports what was fixed and what needs human review
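
The contrast portion of step 3 can be sketched with the WCAG relative-luminance formula; the audit skill's actual heuristic isn't specified here, so treat this as a plausible stand-in:

```python
def _channel(c: float) -> float:
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two RGB colors, 1.0 to 21.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White text on a near-white background fails any sensible threshold...
assert contrast_ratio((255, 255, 255), (240, 240, 240)) < 1.5
# ...while black on white is the maximum, 21:1.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
```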

Output: lint.json, qa.json, audit-fixes.json, audit report

/slides-critique

Storytelling and content review. Unlike technical lint, this checks whether the deck tells a convincing story.

When to use: After a build when the deck is technically correct but the narrative feels weak.

Input: output.pptx + slides.json + design-profile.json

What it does:

  1. Reads all slide content and the original plan
  2. Evaluates against seven criteria:
    • Action titles: complete sentences with a "so what"?
    • Narrative flow: clear structure (SCQA, Pyramid, WWWH)?
    • Isomorphism: visual archetype matches content type?
    • Visual hierarchy: 3+ text sizes, visual elements beyond text?
    • Content density: understandable in 5 seconds?
    • Layout variety: no 3+ same layouts in a row?
    • Parallel structure: consistent grammar and length?
  3. Identifies what's working (3-5 positive findings first)
  4. Fixes what it can via text edits and ops patches
  5. Reports structural issues it can't auto-fix (wrong archetype, needs slide split/merge)
  6. Presents a scorecard across all seven criteria
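
The layout-variety criterion ("no 3+ same layouts in a row") is mechanical enough to sketch directly; the function name is ours:

```python
def layout_runs_ok(layouts, max_run: int = 2) -> bool:
    """Return False when the same layout repeats more than `max_run`
    times consecutively (the critique skill's 'no 3+ in a row' rule)."""
    run = 1
    for prev, cur in zip(layouts, layouts[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return False
    return True

assert layout_runs_ok(["bullets", "bullets", "chart", "bullets"])
assert not layout_runs_ok(["chart", "chart", "chart", "bullets"])
```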

Output: critique-fixes.json, scorecard, structural recommendations

/slides-polish

Final pass before shipping. Ensures speaker notes, metadata, sources, and formatting consistency.

When to use: As the last step, after audit and critique are clean.

Input: output.pptx + design-profile.json

What it does:

  1. Completeness check: title slide, disclaimer, end slide, section dividers (if >8 content slides), document metadata (title, author, subject)
  2. Speaker notes: adds context notes to exec summary, recommendation, and data-heavy slides
  3. Source lines: verifies every chart/stats slide has a source at the bottom
  4. Consistency check: uniform body font size across all content slides, at most 4 distinct sizes deck-wide, all accent colors using the same primary hex, consistent spacing
  5. Applies fixes and runs a final QA gate
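
The font-size half of the consistency check reduces to two set comparisons. A sketch (how the skill actually gathers sizes from the deck is not shown here; the inputs are assumed to be pre-collected point sizes):

```python
def font_sizes_consistent(body_sizes, all_sizes, max_distinct: int = 4) -> bool:
    """Polish-pass rule: one body size across content slides,
    and at most `max_distinct` distinct sizes in the whole deck."""
    return len(set(body_sizes)) <= 1 and len(set(all_sizes)) <= max_distinct

assert font_sizes_consistent([14, 14, 14], [28, 14, 9])
assert not font_sizes_consistent([14, 12], [28, 14, 12, 9])  # two body sizes
```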

Output: Final output.pptx, polish-fixes.json, polish report

/slides-full

End-to-end pipeline. Chains extract, build, audit, critique, and polish into a single invocation with a state-machine orchestrator.

When to use: User provides a brief and a template and wants a finished, polished result.

Input: A .pptx template + a brief

What it does:

  1. EXTRACT_OR_REUSE: checks whether contracts already exist on disk; if so, skips extraction.
  2. BUILD_OR_UPDATE: generates slides.json, dry-runs, renders to output.pptx.
  3. GLOBAL_CONTENT_CHECK: runs plan-inspect for storytelling quality; checks flow, action titles, duplication, role sequencing.
  4. LOCAL_VISUAL_CHECK: runs lint + QA for technical quality; font sizes, overlaps, contrast, bounds.
  5. APPLY_FIXES: generates small reversible ops patches and applies them.
  6. RECHECK: re-runs the failed gate. Max 3 retries for content, 2 per slide for visual; stops early if no improvement.
  7. DONE: reports iteration counts, before/after issue counts, remaining warnings, and final artifacts.
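
The RECHECK step is a bounded fix-and-retry loop. A minimal sketch of that control flow (the callback signatures and issue-count interface are ours; the real orchestrator tracks per-gate budgets):

```python
def recheck_loop(run_gate, apply_fixes, max_retries: int = 3):
    """Re-run a failing gate after each fix round; stop on success,
    budget exhaustion, or when the issue count stops improving."""
    issues = run_gate()
    for attempt in range(max_retries):
        if issues == 0:
            return True, attempt
        apply_fixes()
        remaining = run_gate()
        if remaining >= issues:      # no improvement: stop early
            return False, attempt + 1
        issues = remaining
    return issues == 0, max_retries
```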

Output: output.pptx, qa.json, lint.json, plan_content.json, final report

If qa.ok == false with contract or data errors, the orchestrator blocks release. Visual and story warnings are reported but don't block if the retry budget is exhausted.
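
That release gate can be stated as a small predicate. A sketch, assuming a qa.json shape with an `ok` flag and categorized errors (the schema here is our guess, not the documented format):

```python
BLOCKING = {"contract", "data"}

def release_allowed(qa: dict, retries_exhausted: bool) -> bool:
    """Block release on contract/data errors; visual and story
    findings downgrade to warnings once the retry budget is spent."""
    if qa.get("ok"):
        return True
    errors = {e["category"] for e in qa.get("errors", [])}
    if errors & BLOCKING:
        return False
    return retries_exhausted  # warnings only: ship with a report
```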