# Development Tools

## Core Tools

### Claude Code (Anthropic)

The reference commercial tool for agentic coding. Terminal integration, MCP support, GitHub integration.
```bash
# Install
pnpm add -g @anthropic-ai/claude-code

# Basic usage
claude "Build a Python calculator"

# Include file context
claude --file=PROMPT.md

# Headless mode (for Ralph Loop)
cat PROMPT.md | claude --headless

# /loop — schedule-based autonomous agent loop
claude /loop "find and fix failing tests" --every 2h --for 3d

# Check loop status / stop
claude /loop --status
claude /loop --stop
```

Key features:
- MCP server integration (`~/.claude/settings.json`)
- `CLAUDE.md` per-project instruction file
- `/loop` schedule-based autonomous loop (git worktree isolation, up to 3 days)
- Multi-file editing
- GitHub Actions integration
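The headless mode above is what enables the Ralph Loop pattern: pipe a fixed prompt into the agent repeatedly until the work converges. A minimal sketch using only the standard library (the `claude --headless` invocation is taken from above; `max_iters` and the done-marker convention are assumptions for illustration):

```python
import subprocess

def ralph_loop(cmd, prompt, max_iters=5, done_marker="DONE"):
    """Feed the same prompt to a headless agent until it reports completion."""
    for i in range(1, max_iters + 1):
        result = subprocess.run(
            cmd, input=prompt, capture_output=True, text=True
        )
        print(f"iteration {i}: {result.stdout.strip()[:80]}")
        if done_marker in result.stdout:
            return i  # converged on this iteration
    return None  # hit the iteration cap without converging

# Real usage would look like:
# ralph_loop(["claude", "--headless"], open("PROMPT.md").read())
```

The loop is deliberately dumb: the convergence signal lives in the prompt ("print DONE when all tests pass"), not in the harness.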
Advanced features:
#### Planning & Control

- **Plan Mode** (`Shift+Tab`) — Draft a plan before writing code. Once confirmed, the plan auto-executes.

  ```bash
  # In an interactive session, press Shift+Tab to enter Plan Mode
  # Draft plan → confirm → auto-execute
  ```

- **Effort Levels** — Adjust reasoning depth. Use low effort for simple tasks to save cost; high effort for complex design work.

  ```bash
  claude --effort low "What is the return type of this function?"
  claude --effort high "Refactor this module to be async"
  ```

- **Output Styles** — Cognitive mode presets: Explanatory, Learning, Concise, etc.

  ```bash
  claude --output-style concise "Analyze test failure cause"
  claude --output-style explanatory "Explain the MCP protocol"
  ```
#### Extension System

- **Custom Agents** — Define specialized agents in `.claude/agents/*.md`. Declaratively set role, allowed tools, and permissions.

  ```bash
  # After defining a role in .claude/agents/qa-reviewer.md:
  claude --agent qa-reviewer "Review this PR"
  ```

  ```json
  // Set default agent in settings.json
  { "defaultAgent": "qa-reviewer" }
  ```

- **Skills** — Installable `.md` skill files. Place them in `~/.claude/skills/` and load within a session.

  ```bash
  # Load a skill
  claude /skills refactor-guide
  ```

- **Hooks** — Shell commands triggered by events (tool calls, etc.). Configured in `settings.json`.

  ```json
  // ~/.claude/settings.json
  {
    "hooks": {
      "PreToolUse": [
        {
          "matcher": "Bash",
          "command": "echo \"$(date): Bash called\" >> ~/.claude/audit.log"
        }
      ]
    }
  }
  ```
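Hook dispatch boils down to matching the event's tool name against each entry's `matcher` before running its command. A stdlib-only sketch of that selection step (the config shape mirrors the hook example above; treating `matcher` as a regular expression is an assumption for illustration):

```python
import re

hooks_config = {
    "PreToolUse": [
        {"matcher": "Bash", "command": "log-bash-call"},
        {"matcher": "Edit|Write", "command": "backup-file"},
    ]
}

def matching_commands(config, event, tool_name):
    """Return the commands whose matcher matches this tool invocation."""
    return [
        hook["command"]
        for hook in config.get(event, [])
        if re.fullmatch(hook["matcher"], tool_name)
    ]

matching_commands(hooks_config, "PreToolUse", "Bash")  # ['log-bash-call']
```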
#### Isolation & Safety

- **Sandboxing** (`/sandbox`) — Isolate BashTool file and network access. Limits the blast radius of agent mistakes.

  ```bash
  # Activate sandbox mode
  claude --sandbox
  ```

- **Worktree Native** (`--worktree`) — Git worktree-based isolated sessions. Supports tmux integration for background execution.

  ```bash
  # Start an isolated worktree session
  claude --worktree feature-auth

  # Run in background via tmux
  claude --worktree feature-auth --tmux
  ```
#### Parallel Execution

- **Parallel Sessions** — Run multiple Claude Code instances simultaneously. Each session maintains an independent context.

  ```bash
  # Terminal 1: frontend work
  claude --worktree frontend "Implement React components"

  # Terminal 2: backend work
  claude --worktree backend "Implement API endpoints"
  ```

- **/batch** — Interactive planning followed by worktree-isolated parallel execution. Each agent tests and opens an individual PR.

  ```bash
  claude /batch "Migrate logging in src/ to the new structured logger"
  ```

- **/simplify** — Parallel-agent code review across reuse, quality, and efficiency dimensions.

  ```bash
  claude /simplify
  ```
For detailed explanations and hands-on examples, see Week 4 (loops/worktree), Week 6 (instruction tuning), and Week 7 (multi-agent design).
### Gemini CLI (Google)

Google’s AI coding CLI with a free tier. 1M token context window, MCP support.

```bash
# Install
pnpm add -g @google/gemini-cli

# Interactive mode
gemini

# Pipe mode (headless)
cat PROMPT.md | gemini
```

Key features:
- MCP server integration (`~/.gemini/settings.json`)
- `GEMINI.md` per-project instruction file
- Free tier: 1,000 req/day
- 1M token context window
### Codex CLI (OpenAI)

OpenAI’s terminal-based coding agent. Built-in sandbox for safe automated execution.

```bash
# Install
pnpm add -g @openai/codex

# Basic usage
codex "Build a Python calculator"

# Auto-approval mode (for Ralph Loop)
codex --approval-mode full-auto "$(cat PROMPT.md)"
```

Key features:
- Built-in sandbox (safest automated execution)
- `AGENTS.md` per-project instruction file
- No MCP support
- Requires ChatGPT Plus or an API key
### OpenCode

Open-source TUI-based AI coding tool. Supports multiple model backends, including local models.

```bash
# Install (macOS)
brew install opencode

# TUI mode
opencode

# API server mode
opencode serve
```

Key features:
- TUI (Terminal UI) interface
- Multiple backends: OpenAI, Anthropic, local models (Ollama), and more
- Free when using local models (no API cost)
- Limited MCP support
For a detailed comparison of tools, see the AI Coding Tool Selection Guide.
## vLLM

High-throughput LLM inference server with an OpenAI-compatible API.

```bash
# Install
pip install vllm

# Start server
python -m vllm.entrypoints.openai.api_server \
  --model deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct \
  --port 8000
```

```python
# Client usage (OpenAI-compatible)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")
```

## Graphify
An AI coding assistant skill that transforms codebases and documents into queryable knowledge graphs. It analyzes code structure using tree-sitter AST extraction (23 languages, no LLM calls needed), then runs community detection and builds interactive visualizations with NetworkX + Leiden clustering.
```bash
# Install
pip install graphify-ai

# Build codebase graph
graphify build ./src --output graph.json

# Generate interactive visualization
graphify visualize graph.json --output graph.html
```

Key features:
- tree-sitter based local AST analysis (code never leaves your machine)
- 71.5x token reduction on mixed corpora vs raw file reading
- Confidence tagging: EXTRACTED / INFERRED / AMBIGUOUS
- SHA256 cache-based incremental updates
- Integration with Claude Code, Gemini CLI, Codex, OpenCode, and more
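The local AST-extraction step can be illustrated in miniature with Python's built-in `ast` module: walk a parse tree, record each function definition and the names it calls, and you have the edges of a call graph. This is a toy sketch of the idea only; Graphify itself uses tree-sitter and covers 23 languages:

```python
import ast

source = """
def fetch(url):
    return download(url)

def main():
    data = fetch("https://example.com")
    report(data)
"""

def call_edges(code):
    """Map each function definition to the names it calls."""
    tree = ast.parse(code)
    edges = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
            edges[node.name] = calls
    return edges

print(call_edges(source))  # {'fetch': ['download'], 'main': ['fetch', 'report']}
```

Everything here runs locally on the parse tree, which is why this style of analysis needs no LLM calls and the code never leaves the machine.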
For usage in context management, see Week 5 lecture.
## Ollama

Local and cloud LLM deployment tool. Run models with a single command, with NVIDIA cloud GPU remote inference support.
```bash
# Install (macOS)
brew install ollama

# Install (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Run local model
ollama run gemma4:31b

# Run cloud model (no GPU required)
ollama launch claude --model gemma4:31b-cloud

# Connect to AI coding CLI
ollama launch claude --model glm-5.1:cloud
```

## Model Context Protocol (MCP)
The standard protocol for connecting agents to external tools.
Key MCP servers:
| Server | Function | Install |
|---|---|---|
| @modelcontextprotocol/server-filesystem | File read/write | npx |
| mcp-server-git | Git operations | uvx |
| mcp-server-github | GitHub API | uvx |
| mcp-server-postgres | PostgreSQL | uvx |
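Under the hood, MCP clients and these servers exchange JSON-RPC 2.0 messages over stdio. A stdlib-only sketch of building the `initialize` request that opens a session (the exact capability fields vary by protocol revision; the values here are illustrative, not authoritative):

```python
import json

def initialize_request(client_name, protocol_version="2024-11-05"):
    """Build the JSON-RPC initialize message an MCP client sends first."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": protocol_version,
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "0.1.0"},
        },
    })

msg = json.loads(initialize_request("demo-client"))
```

The settings files shown for each CLI do nothing more than tell the client which command to spawn so this handshake can begin.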
```json
// Gemini CLI: ~/.gemini/settings.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/project"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "."]
    }
  }
}
```

## OpenTelemetry
The standard for collecting telemetry from agent systems.
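The core abstraction can be shown in miniature: a trace span is a named, timed unit of work with attributes attached. A stdlib-only sketch of that idea (this is not the OpenTelemetry API; the real SDK adds context propagation, exporters, and sampling on top):

```python
import time
from contextlib import contextmanager

spans = []  # collected telemetry records

@contextmanager
def span(name, **attributes):
    """Record the wall-clock duration of a named operation."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append({
            "name": name,
            "duration_s": time.perf_counter() - start,
            "attributes": attributes,
        })

with span("tool_call", tool="Bash"):
    time.sleep(0.01)  # stand-in for real agent work
```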
```bash
pip install opentelemetry-sdk opentelemetry-exporter-prometheus
```

## Open-Source Coding LLMs
Major coding models that can be deployed locally. Served via vLLM or SGLang with an OpenAI-compatible API.
| Model | Parameters | Active | Context | HuggingFace |
|---|---|---|---|---|
| Gemma 4 | 31B (Dense) | Full | 256K | google/gemma-4-31b-it |
| GLM-5.1 | Undisclosed | Undisclosed | 198K | API-only (current) |
| Qwen3-Coder | 235B (MoE) | 22B | 128K | Qwen/Qwen3-Coder-32B-Instruct |
| DeepSeek V3 | 685B (MoE) | 37B | 128K | deepseek-ai/DeepSeek-V3 |
| GLM-4.7 | ~32B (Dense) | Full | 128K | THUDM/glm-4-9b-chat |
| MiniMax M2.1 | 230B (MoE) | 10B | 128K | MiniMax/MiniMax-M2.1 |
| DeepSeek-Coder-V2 | 236B (MoE) | 21B | 128K | deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct |
| Qwen3 14B/8B | 14B/8B | Full | 128K | Qwen/Qwen3-14B, Qwen/Qwen3-8B |
For detailed per-model comparisons and hardware requirements, see Week 10 lecture.
## Other Useful Tools

| Tool | Purpose | Install |
|---|---|---|
| uv | Python package manager (pip alternative) | pip install uv |
| Ruff | Python linter/formatter | pip install ruff |
| pytest | Python test framework | pip install pytest |
| mypy | Python type checker | pip install mypy |
| httpx | Async HTTP client | pip install httpx |