
Development Tools

Claude Code

The reference commercial tool for agentic coding. Terminal integration, MCP support, GitHub integration.

```bash
# Install
pnpm add -g @anthropic-ai/claude-code

# Basic usage
claude "Build a Python calculator"

# Include file context
claude --file=PROMPT.md

# Headless mode (for Ralph Loop)
cat PROMPT.md | claude --headless
```
```bash
# /loop — schedule-based autonomous agent loop
claude /loop "find and fix failing tests" --every 2h --for 3d

# Check loop status / stop
claude /loop --status
claude /loop --stop
```
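Headless mode is what enables the Ralph Loop pattern: the same prompt is piped into a fresh session repeatedly until the task is done. A minimal driver can be sketched as follows (the helper names are hypothetical; the piped-CLI invocation mirrors the `cat PROMPT.md | claude --headless` line above):

```python
import subprocess

def run_headless(prompt: str, cmd: list) -> str:
    """Pipe a prompt into a headless CLI process and return its stdout."""
    result = subprocess.run(cmd, input=prompt, capture_output=True, text=True)
    return result.stdout

def ralph_loop(prompt: str, cmd: list, max_iterations: int = 10) -> list:
    """Re-run the same prompt in fresh sessions, collecting each transcript."""
    return [run_headless(prompt, cmd) for _ in range(max_iterations)]

# Hypothetical usage:
#   ralph_loop(open("PROMPT.md").read(), ["claude", "--headless"], max_iterations=5)
```

A real loop would also inspect each transcript (or the repository state) for a completion signal before continuing.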

Key features:

  • MCP server integration (~/.claude/settings.json)
  • CLAUDE.md per-project instruction file
  • /loop schedule-based autonomous loop (git worktree isolation, up to 3 days)
  • Multi-file editing
  • GitHub Actions integration

Advanced features:

  • Plan Mode (Shift+Tab) — Draft a plan before writing code. Once confirmed, the plan is auto-executed.

    ```bash
    # In an interactive session, press Shift+Tab to enter Plan Mode
    # Draft plan → confirm → auto-execute
    ```

  • Effort Levels — Adjust reasoning depth. Use low effort for simple tasks to save cost; high effort for complex design work.

    ```bash
    claude --effort low "What is the return type of this function?"
    claude --effort high "Refactor this module to be async"
    ```

  • Output Styles — Cognitive mode presets: Explanatory, Learning, Concise, etc.

    ```bash
    claude --output-style concise "Analyze test failure cause"
    claude --output-style explanatory "Explain the MCP protocol"
    ```

  • Custom Agents — Define specialized agents in .claude/agents/*.md, declaratively setting each agent's role, allowed tools, and permissions.

    ```bash
    # After defining a role in .claude/agents/qa-reviewer.md:
    claude --agent qa-reviewer "Review this PR"
    ```

    Set the default agent in settings.json:

    ```json
    { "defaultAgent": "qa-reviewer" }
    ```

  • Skills — Installable .md skill files. Place them in ~/.claude/skills/ and load them within a session.

    ```bash
    # Load a skill
    claude /skills refactor-guide
    ```

  • Hooks — Shell commands triggered by events (tool calls, etc.). Configured in settings.json. Note the escaped double quotes: with single quotes, `$(date)` would not be expanded.

    ~/.claude/settings.json:

    ```json
    {
      "hooks": {
        "PreToolUse": [
          {
            "matcher": "Bash",
            "command": "echo \"$(date): Bash called\" >> ~/.claude/audit.log"
          }
        ]
      }
    }
    ```

  • Sandboxing (/sandbox) — Isolate BashTool file and network access. Limits the blast radius of agent mistakes.

    ```bash
    # Activate sandbox mode
    claude --sandbox
    ```

  • Worktree Native (--worktree) — Git worktree-based isolated sessions. Supports tmux integration for background execution.

    ```bash
    # Start an isolated worktree session
    claude --worktree feature-auth

    # Run in background via tmux
    claude --worktree feature-auth --tmux
    ```

  • Parallel Sessions — Run multiple Claude Code instances simultaneously. Each session maintains an independent context.

    ```bash
    # Terminal 1: frontend work
    claude --worktree frontend "Implement React components"

    # Terminal 2: backend work
    claude --worktree backend "Implement API endpoints"
    ```

  • /batch — Interactive planning followed by worktree-isolated parallel execution. Each agent tests its changes and opens an individual PR.

    ```bash
    claude /batch "Migrate logging in src/ to the new structured logger"
    ```

  • /simplify — Parallel-agent code review across reuse, quality, and efficiency dimensions.

    ```bash
    claude /simplify
    ```
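Under the hood, parallel sessions are just independent OS processes. A small fan-out wrapper can be sketched like this (the wrapper is hypothetical; the `--worktree` commands it would launch are the ones shown above):

```python
import subprocess

def run_parallel(tasks: dict) -> dict:
    """Start one subprocess per named task, wait for all, return exit codes.

    With Claude Code, a command could look like
    ["claude", "--worktree", "frontend", "Implement React components"].
    """
    procs = {name: subprocess.Popen(cmd) for name, cmd in tasks.items()}
    return {name: proc.wait() for name, proc in procs.items()}
```

Each session runs as its own process; worktree isolation is what keeps their file edits from colliding.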

For detailed explanations and hands-on examples, see Week 4 (loops/worktree), Week 6 (instruction tuning), and Week 7 (multi-agent design).


Gemini CLI

Google’s AI coding CLI with a free tier. 1M token context window, MCP support.

```bash
# Install
pnpm add -g @google/gemini-cli

# Interactive mode
gemini

# Pipe mode (headless)
cat PROMPT.md | gemini
```

Key features:

  • MCP server integration (~/.gemini/settings.json)
  • GEMINI.md per-project instruction file
  • Free tier: 1,000 req/day
  • 1M token context window

Codex

OpenAI’s terminal-based coding agent. Built-in sandbox for safe automated execution.

```bash
# Install
pnpm add -g @openai/codex

# Basic usage
codex "Build a Python calculator"

# Auto-approval mode (for Ralph Loop)
codex --approval-mode full-auto "$(cat PROMPT.md)"
```

Key features:

  • Built-in sandbox (safest automated execution)
  • AGENTS.md per-project instruction file
  • No MCP support
  • Requires ChatGPT Plus or API key

OpenCode

Open-source TUI-based AI coding tool. Supports multiple model backends, including local models.

```bash
# Install (macOS)
brew install opencode

# TUI mode
opencode

# API server mode
opencode serve
```

Key features:

  • TUI (Terminal UI) interface
  • Multiple backends: OpenAI, Anthropic, local models (Ollama), and more
  • Free when using local models (no API cost)
  • Limited MCP support

For a detailed comparison of tools, see the AI Coding Tool Selection Guide.


vLLM

High-throughput LLM inference server with an OpenAI-compatible API.

```bash
# Install
pip install vllm

# Start server
python -m vllm.entrypoints.openai.api_server \
  --model deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct \
  --port 8000
```

Client usage (any OpenAI-compatible client works):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
    messages=[{"role": "user", "content": "Write a quicksort in Python"}],
)
print(response.choices[0].message.content)
```

graphify-ai

An AI coding assistant skill that transforms codebases and documents into queryable knowledge graphs. It analyzes code structure using tree-sitter AST extraction (23 languages, no LLM calls needed), then runs community detection (NetworkX + Leiden clustering) and builds interactive visualizations.

```bash
# Install
pip install graphify-ai

# Build codebase graph
graphify build ./src --output graph.json

# Generate interactive visualization
graphify visualize graph.json --output graph.html
```

Key features:

  • tree-sitter based local AST analysis (code never leaves your machine)
  • 71.5x token reduction on mixed corpora vs raw file reading
  • Confidence tagging: EXTRACTED / INFERRED / AMBIGUOUS
  • SHA256 cache-based incremental updates
  • Integration with Claude Code, Gemini CLI, Codex, OpenCode, and more
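The SHA256-based incremental update can be sketched in a few lines (an illustration of the idea, not graphify-ai's actual code): only files whose digest changed since the last build need re-parsing.

```python
import hashlib
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Digest the file's bytes; unchanged content always hashes the same."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(paths, cache: dict) -> list:
    """Return paths whose digest differs from the cache, updating the cache."""
    changed = []
    for path in paths:
        digest = sha256_of(path)
        if cache.get(str(path)) != digest:
            changed.append(path)
            cache[str(path)] = digest
    return changed
```

Persisting the cache between runs is what turns a full rebuild into an incremental one.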

For usage in context management, see Week 5 lecture.


Ollama

Local and cloud LLM deployment tool. Runs models with a single command, with support for remote inference on NVIDIA cloud GPUs.

```bash
# Install (macOS)
brew install ollama

# Install (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Run local model
ollama run gemma4:31b

# Run cloud model (no GPU required)
ollama launch claude --model gemma4:31b-cloud

# Connect to AI coding CLI
ollama launch claude --model glm-5.1:cloud
```

Model Context Protocol (MCP)

The standard protocol for connecting agents to external tools.

Key MCP servers:

| Server | Function | Install |
| --- | --- | --- |
| @modelcontextprotocol/server-filesystem | File read/write | npx |
| mcp-server-git | Git operations | uvx |
| mcp-server-github | GitHub API | uvx |
| mcp-server-postgres | PostgreSQL | uvx |
~/.claude/settings.json (Gemini CLI: ~/.gemini/settings.json):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/project"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "."]
    }
  }
}
```
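On the wire, MCP is JSON-RPC 2.0 spoken over the spawned server's stdin/stdout. The handshake opens with an `initialize` request, which can be sketched as follows (field values are illustrative; check the MCP specification for the current protocol version string):

```python
import json

# First message a client writes to the server's stdin: one JSON object per line.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

wire_line = json.dumps(initialize_request)
```

After the server replies with its capabilities, the client lists and calls tools with further JSON-RPC requests (`tools/list`, `tools/call`).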

OpenTelemetry

The standard for collecting telemetry from agent systems.

```bash
pip install opentelemetry-sdk opentelemetry-exporter-prometheus
```

Local Coding Models

Major coding models that can be deployed locally. Served via vLLM or SGLang with an OpenAI-compatible API.

| Model | Parameters | Active | Context | HuggingFace |
| --- | --- | --- | --- | --- |
| Gemma 4 | 31B (Dense) | Full | 256K | google/gemma-4-31b-it |
| GLM-5.1 | Undisclosed | Undisclosed | 198K | API-only (current) |
| Qwen3-Coder | 235B (MoE) | 22B | 128K | Qwen/Qwen3-Coder-32B-Instruct |
| DeepSeek V3 | 685B (MoE) | 37B | 128K | deepseek-ai/DeepSeek-V3 |
| GLM-4.7 | ~32B (Dense) | Full | 128K | THUDM/glm-4-9b-chat |
| MiniMax M2.1 | 230B (MoE) | 10B | 128K | MiniMax/MiniMax-M2.1 |
| DeepSeek-Coder-V2 | 236B (MoE) | 21B | 128K | deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct |
| Qwen3 14B/8B | 14B/8B | Full | 128K | Qwen/Qwen3-14B, Qwen/Qwen3-8B |

For detailed per-model comparisons and hardware requirements, see Week 10 lecture.


| Tool | Purpose | Install |
| --- | --- | --- |
| uv | Python package manager (pip alternative) | pip install uv |
| Ruff | Python linter/formatter | pip install ruff |
| pytest | Python test framework | pip install pytest |
| mypy | Python type checker | pip install mypy |
| httpx | Async HTTP client | pip install httpx |