Why Commands?

What Hyprlayer commands and agents do that AI tools don't do by default

Claude Code, GitHub Copilot, and OpenCode are powerful on their own. They can read your codebase, edit files, run commands, and even spawn sub-agents. So what are Hyprlayer’s commands and agents actually adding?

This page breaks down the specific gaps in the default experience and how each command addresses them.

It’s worth acknowledging what works out of the box:

  • Claude Code has built-in sub-agents (Explore, Plan, general-purpose), auto-memory across sessions (MEMORY.md), and can run git commands, create commits, and open PRs.
  • GitHub Copilot has agent, plan, and ask modes, session memory, and cloud agents that can create PRs.
  • OpenCode provides multi-provider model access with similar agentic capabilities.

All three tools can read your code, make multi-file edits, run tests, and iterate on failures. If your task is simple, you don’t need commands at all — just ask.

The problems show up when the work is non-trivial.

The default behavior: When you ask any of these tools to “build feature X,” they start writing code immediately. They might do some reading first, but research, planning, and implementation all happen in one unstructured pass. There’s no checkpoint where you review a plan before code gets written.

What commands do differently:

  • /research_codebase is constrained to documentation only. It describes what exists, where it exists, and how it works. It will not suggest improvements, critique the code, or propose changes unless you explicitly ask. This is not how any base tool behaves — by default, models editorialize.
  • /create_plan produces a plan only — phased steps, specific file changes, success criteria, and checkboxes. No code is written. The agent is instructed to be skeptical: it challenges your assumptions and identifies potential issues rather than agreeing with everything. Again, this is the opposite of default model behavior, which tends toward accommodation.
  • /implement_plan takes a plan file path and follows the spec. It reads the plan, reads all referenced files, implements phase by phase, verifies success criteria after each phase, and checks off completed sections in the plan file. It will communicate deviations from the plan rather than silently diverging.

None of the base tools enforce this separation. You can ask them to plan first, but there’s no mechanism preventing the model from jumping ahead, and no artifact that persists between phases.
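Concretely, each phase leaves a reviewable artifact behind for the next one. A minimal sketch of that artifact trail, assuming the path conventions described later on this page (the feature name and the `research/` subfolder are invented for illustration):

```shell
# Sketch: the artifact each phase hands to the next (feature name is hypothetical).
day=$(date +%F)                                  # e.g. 2024-01-15
research="thoughts/shared/research/${day}-rate-limiting.md"
plan="thoughts/shared/plans/${day}-rate-limiting.md"

echo "/research_codebase       -> writes $research"
echo "/create_plan             -> writes $plan (review this before any code exists)"
echo "/implement_plan $plan    -> edits code, checks off phases in the plan file"
```

The plan file is the checkpoint: code review starts before any code is written.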

The default behavior: Claude Code has CLAUDE.md for project instructions and auto-memory for things like “this project uses pnpm.” GitHub Copilot has session memory that clears when the conversation ends, plus repository-level custom instructions. Neither provides a structured, team-shared knowledge system.

What commands do differently:

The thoughts directory is a git-backed repository of research, plans, PR descriptions, and handoff documents — organized by project, shared across team members, and searchable by AI agents.

  • /research_codebase writes findings to thoughts/shared/ or thoughts/<username>/. Next session, that research is still there. A teammate’s session can read it too.
  • /create_plan saves plans to thoughts/shared/plans/YYYY-MM-DD-description.md. Plans can be reviewed asynchronously, iterated on with /iterate_plan, and referenced months later.
  • /describe_pr reads a PR template from thoughts/shared/pr_description.md and saves the generated description to thoughts/shared/prs/. Every PR follows the same structure.
  • /create_handoff compacts the current session’s context into a handoff document at thoughts/shared/handoffs/. A new session picks it up with /resume_handoff and continues where you left off.

Auto-sync via post-commit hooks keeps this repository up to date. The searchable index (thoughts/searchable/) gives agents flat access to everything.

This is fundamentally different from CLAUDE.md or session memory. Those are per-developer, unstructured, and not designed for team knowledge. The thoughts directory is a shared, versioned, git-native knowledge base that commands read from and write to automatically.
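Assuming the paths named above, the layout might look roughly like this. A sketch in a scratch directory, not an exhaustive listing (the plan filename and the `alice` username are invented):

```shell
# Recreate the described thoughts layout in a throwaway directory.
root=$(mktemp -d)
mkdir -p "$root/thoughts/shared/plans" \
         "$root/thoughts/shared/prs" \
         "$root/thoughts/shared/handoffs" \
         "$root/thoughts/searchable" \
         "$root/thoughts/alice"                        # per-user notes
touch "$root/thoughts/shared/pr_description.md"        # team PR template
touch "$root/thoughts/shared/plans/2024-01-15-rate-limiting.md"
find "$root/thoughts" -type d | sort                   # show the tree
```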

The default behavior: Claude Code’s built-in sub-agents are generic. The Explore agent is a fast, read-only codebase searcher. The Plan agent does read-only research. They’re useful, but they don’t have domain-specific roles.

What Hyprlayer’s agents do differently:

Hyprlayer installs eight specialized agents with distinct, narrow purposes:

  • codebase-locator finds files and components relevant to a task — like a “super grep” that understands what you’re looking for semantically, not just by string matching.
  • codebase-analyzer deep-dives into how specific code works. It’s not just finding files; it’s understanding implementations.
  • codebase-pattern-finder finds similar implementations and existing patterns in your codebase. When you need to add a new API endpoint, it finds how existing endpoints are structured and returns concrete code examples.
  • thoughts-locator and thoughts-analyzer search and analyze the thoughts directory — finding relevant prior research, existing plans, and historical context that the base tools don’t know exists.
  • web-search-researcher searches the web for API documentation, library usage patterns, and information not in the codebase.
  • jira-ticket-reader and jira-searcher integrate with JIRA to pull ticket details, find related issues, and provide project management context.

When /research_codebase runs, it doesn’t just ask the model to look around. It spawns codebase-locator, codebase-analyzer, and codebase-pattern-finder in parallel, each exploring different aspects of the codebase simultaneously. The results are synthesized into a single document.

The base tools don’t have agents that know about your thoughts directory, your JIRA instance, or your codebase’s specific patterns. Hyprlayer’s agents do.
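The parallel fan-out is the key mechanical difference. As a loose shell analogy only — this shows the pattern, not the actual implementation — three read-only scans run concurrently and their outputs are merged into one document afterwards (all file contents here are made up):

```shell
# Loose analogy: parallel read-only scans, then a synthesis step.
src=$(mktemp -d); out=$(mktemp -d)
printf 'def rate_limit(): pass\n' > "$src/limits.py"
printf 'rate_limit is called from the API layer\n' > "$src/notes.md"

grep -rl 'rate_limit' "$src" > "$out/locator.out" &    # where does it live?
grep -rn 'def '       "$src" > "$out/analyzer.out" &   # how is it defined?
grep -rl 'API'        "$src" > "$out/thoughts.out" &   # any prior notes?
wait                                                   # all scans finish first
cat "$out"/*.out > "$out/research.md"                  # synthesized into one doc
```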

The default behavior: You’re on whatever model your session started with. Claude Code uses the model configured in your settings. GitHub Copilot uses whatever model you selected. Switching requires manual intervention.

What commands do differently:

Each command specifies its own model:

  • Opus for /research_codebase, /create_plan, and /iterate_plan — tasks where deep reasoning and thoroughness justify the cost and speed tradeoff.
  • Sonnet for /implement_plan, /validate_plan, /commit, /describe_pr, and all other commands — tasks where speed matters and the reasoning demands are lower.

This isn’t just about cost optimization. Opus and Sonnet have genuinely different strengths. Using Opus for research and planning produces more thorough analysis. Using Sonnet for implementation produces faster iteration cycles. Commands make this selection automatically — you don’t have to think about which model fits the task.
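One plausible mechanism, assuming Claude Code's convention of custom commands as markdown files whose YAML frontmatter can pin a model (the filename, description, and body below are invented):

```shell
# Sketch: a command file that declares its own model (contents illustrative).
dir=$(mktemp -d)
cat > "$dir/create_plan.md" <<'EOF'
---
model: opus
description: Produce a phased implementation plan; write no code.
---
Plan the requested change. Be skeptical of the user's assumptions...
EOF
sed -n 's/^model: *//p' "$dir/create_plan.md"   # -> opus
```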

Behavioral Constraints You’d Forget to Specify

The default behavior: Models are helpful and accommodating. They agree with your approach, add “Co-Authored-By” lines to commits, suggest improvements when you asked for documentation, and implement things their own way when a plan exists.

What commands encode:

  • /research_codebase: describe, don’t prescribe. Document what exists without suggesting improvements. This is explicitly counter to the model’s natural tendency.
  • /create_plan: be skeptical. Challenge assumptions, identify potential issues, push back on the user’s approach when warranted. Models default to agreeableness; this command overrides that.
  • /commit: no AI attribution. No “Co-Authored-By,” no “Generated with,” no “AI-assisted.” Imperative mood. Focus on “why” not “what.” Always ask before committing. Never use git add -A or git add . — always specific file paths.
  • /implement_plan: follow the plan. Read all referenced files. Verify success criteria after each phase. Update checkboxes in the plan file. Communicate deviations rather than silently diverging.
  • /validate_plan: be rigorous. Check every success criterion. Report pass/fail per criterion. Identify deviations, missing implementations, and regressions. Gather evidence from diffs, test results, and codebase state.
  • /describe_pr: follow the template. Read the team’s PR template, fill out every section, run verification commands specified in the template, and update the PR via gh pr edit.
  • /founder_mode: retroactive workflow compliance. Cherry-pick the commit to a new branch, create a JIRA ticket, open a proper PR. Bring already-done work back into the team’s process.

These constraints are things you’d have to type out every time in a direct conversation, and they’d drift or get forgotten within a few exchanges. Commands make them permanent.

The default behavior: When a session gets long, context fills up. When you start a new session, you lose everything from the last one. Claude Code’s auto-memory captures some preferences, but not working state. Copilot’s session memory is gone entirely.

What commands do differently:

  • /create_handoff compacts your current session into a structured handoff document — what was done, what’s left, key decisions made, files touched, blockers encountered. Saved to thoughts/shared/handoffs/.
  • /resume_handoff reads that document, restores context by reading all referenced files, and continues where the previous session stopped.

This isn’t memory in the “the model remembers things” sense. It’s explicit context transfer — a document that captures working state, written by one session and consumed by the next. The base tools have nothing equivalent.
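A handoff document might look something like this — the section names and every detail below are assumptions for illustration, not the actual template:

```shell
# Sketch: writing a handoff the next session can pick up (details invented).
root=$(mktemp -d)
mkdir -p "$root/thoughts/shared/handoffs"
handoff="$root/thoughts/shared/handoffs/2024-01-15-rate-limiting.md"
cat > "$handoff" <<'EOF'
# Handoff: rate limiting
## Done
- Token bucket implemented in src/limits.py
## Remaining
- Wire limiter into the API middleware
## Key decisions
- Per-user buckets, not per-IP
## Files touched
- src/limits.py, tests/test_limits.py
## Blockers
- None
EOF
grep -c '^## ' "$handoff"   # the five sections a resuming session reads -> 5
```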

The default behavior: All three tools can run git commands. But they don’t have opinions about how to use git. They’ll commit everything with git add ., write generic commit messages, and create PRs without following a template.

What commands do differently:

  • /commit reviews changes, groups related files into atomic commits, drafts messages in imperative mood focused on the “why,” and presents the plan for your approval before executing. It stages specific paths with git add, never git add -A or git add ., and never adds AI attribution lines.
  • /describe_pr reads your team’s PR template from the thoughts directory, analyzes the full diff and commit history, runs any verification commands specified in the template (like make check test), generates a description filling every section, saves it to thoughts, and updates the PR on GitHub.
  • /local_review sets up a git worktree for reviewing a colleague’s branch — resolving PR numbers to branch names, extracting ticket info, creating the worktree, installing dependencies, and launching a new AI session in the isolated checkout.

These aren’t just “run git commands.” They’re opinionated workflows that encode how your team uses git.
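The commit style /commit enforces can be sketched directly. Everything here is illustrative — a throwaway repo, made-up paths, a made-up message — but the mechanics (specific paths staged, imperative subject, unrelated files left alone) are the point:

```shell
# Sketch of the enforced commit style in a throwaway repo.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
mkdir -p src
echo 'TTL = 1800' > src/session.py
echo 'scratch'    > notes.txt        # unrelated; must NOT be swept in

git add src/session.py               # specific paths, never -A or .
git commit -q -m 'Expire sessions after 30 minutes of inactivity

Revoked users stayed authenticated because sessions never timed out.'
git log --format=%s -1               # imperative subject, no attribution trailer
git status --porcelain notes.txt     # unrelated file is still untracked
```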

The default behavior: Claude Code’s custom commands live in ~/.claude/commands/. Copilot’s live elsewhere. If you switch tools, you start over.

What Hyprlayer does differently:

The same commands and agents are installed for whichever tool you’re using — Claude Code, GitHub Copilot, or OpenCode. The thoughts directory is tool-agnostic. A plan created with Claude Code can be implemented with Copilot. A handoff written in one tool can be resumed in another. The workflow doesn’t lock you into a specific AI tool.

Commands aren’t always the right choice:

  • Quick one-off questions — “What does this function do?” doesn’t need /research_codebase
  • Simple changes — Renaming a variable or fixing a typo doesn’t need a plan
  • Exploration — Sometimes you want to think out loud with the model before committing to a structured workflow

The commands are designed for non-trivial work where the structure pays for itself. Use your judgment.