claude-code-showcase
Summary
Claude Code automates code quality and project management tasks using AI-driven agents and skills.
Claude Code Project Configuration Showcase
Most software engineers are seriously sleeping on how good LLM agents are right now, especially something like Claude Code.
Once you've got Claude Code set up, you can point it at your codebase, have it learn your conventions, pull in best practices, and refine everything until it's basically operating like a super-powered teammate. The real unlock is building a solid set of reusable "skills" plus a few "agents" for the stuff you do all the time.
What This Looks Like in Practice
Custom UI Library? We have a skill that explains exactly how to use it. Same for how we write tests, how we structure GraphQL, and basically how we want everything done in our repo. So when Claude generates code, it already matches our patterns and standards out of the box.
Automated Quality Gates? We use hooks to auto-format code, run tests when test files change, type-check TypeScript, and even block edits on the main branch. Claude Code also created a bunch of ESLint automation, including custom rules and lint checks that catch issues before they hit review.
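As a sketch of what such a hook can look like in `.claude/settings.json` (the `jq`/`prettier` pipeline is an assumption about your toolchain, not this repo's exact command; Claude Code passes the tool call as JSON on stdin):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write",
            "timeout": 30
          }
        ]
      }
    ]
  }
}
```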
Deep Code Review? We have a code review agent that Claude runs after changes are made. It follows a detailed checklist covering TypeScript strict mode, error handling, loading states, mutation patterns, and more. When a PR goes up, we have a GitHub Action that does a full PR review automatically.
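An agent like that is just a markdown file with frontmatter under `.claude/agents/`. A minimal sketch (the checklist items here are illustrative, not the repo's actual list):

```markdown
---
name: code-reviewer
description: Reviews changed files for project standards. Use proactively after significant edits.
---

You are a code reviewer for this repository. For each changed file, check:

- TypeScript strict mode compliance (no `any`, explicit return types)
- Error handling and loading states around async calls
- Mutation patterns match our GraphQL conventions

Report findings as a prioritized list with file and line references.
```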
Scheduled Maintenance? We've got GitHub workflow agents that run on a schedule:
- Monthly docs sync - Reads commits from the last month and makes sure docs are still aligned
- Weekly code quality - Reviews random directories and auto-fixes issues
- Biweekly dependency audit - Safe dependency updates with test verification
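A scheduled agent of this kind boils down to a cron-triggered workflow that invokes Claude Code. A rough sketch using the official `anthropics/claude-code-action` (the input and secret names are assumptions; check the action's documentation for the current interface):

```yaml
name: Monthly docs sync
on:
  schedule:
    - cron: "0 6 1 * *"   # 06:00 UTC on the 1st of each month
jobs:
  docs-sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0    # full history so last month's commits are visible
      - uses: anthropics/claude-code-action@v1
        with:
          prompt: >
            Read the commits from the last month and update any docs
            that no longer match the code. Open a PR with the changes.
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```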
Intelligent Skill Suggestions? We built a skill evaluation system that analyzes every prompt and automatically suggests which skills Claude should activate based on keywords, file paths, and intent patterns.
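The matching configuration can be as simple as keyword, path, and intent patterns per skill. A hypothetical `skill-rules.json` entry (field names here are illustrative, not the repo's actual schema):

```json
{
  "testing-patterns": {
    "keywords": ["test", "jest", "mock", "tdd"],
    "filePaths": ["**/*.test.ts", "tests/**"],
    "intentPatterns": ["write (a )?tests?", "add coverage"]
  }
}
```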
A ton of maintenance and quality work is just... automated. It runs ridiculously smoothly.
JIRA/Linear Integration? We connect Claude Code to our ticket system via MCP servers. Now Claude can read the ticket, understand the requirements, implement the feature, update the ticket status, and even create new tickets if it finds bugs along the way. The /ticket command handles the entire workflow, from reading acceptance criteria to linking the PR back to the ticket.
We even use Claude Code for ticket triage. It reads the ticket, digs into the codebase, and leaves a comment with what it thinks should be done. So when an engineer picks it up, they're basically starting halfway through already.
There is so much low-hanging fruit here that it honestly blows my mind people aren't all over it.
Table of Contents
- Directory Structure
- Quick Start
- Configuration Reference
- GitHub Actions Workflows
- Best Practices
- Examples in This Repository
Directory Structure
```
your-project/
├── CLAUDE.md                  # Project memory (alternative location)
├── .mcp.json                  # MCP server configuration (JIRA, GitHub, etc.)
├── .claude/
│   ├── settings.json          # Hooks, environment, permissions
│   ├── settings.local.json    # Personal overrides (gitignored)
│   ├── settings.md            # Human-readable hook documentation
│   ├── .gitignore             # Ignore local/personal files
│   │
│   ├── agents/                # Custom AI agents
│   │   └── code-reviewer.md   # Proactive code review agent
│   │
│   ├── commands/              # Slash commands (/command-name)
│   │   ├── onboard.md         # Deep task exploration
│   │   ├── pr-review.md       # PR review workflow
│   │   └── ...
│   │
│   ├── hooks/                 # Hook scripts
│   │   ├── skill-eval.sh      # Skill matching on prompt submit
│   │   ├── skill-eval.js      # Node.js skill matching engine
│   │   └── skill-rules.json   # Pattern matching configuration
│   │
│   ├── skills/                # Domain knowledge documents
│   │   ├── README.md          # Skills overview
│   │   ├── testing-patterns/
│   │   │   └── SKILL.md
│   │   ├── graphql-schema/
│   │   │   └── SKILL.md
│   │   └── ...
│   │
│   └── rules/                 # Modular instructions (optional)
│       ├── code-style.md
│       └── security.md
│
└── .github/
    └── workflows/
        ├── pr-claude-code-review.yml            # Auto PR review
        ├── scheduled-claude-code-docs-sync.yml  # Monthly docs sync
        ├── scheduled-claude-code-quality.yml    # Weekly quality review
        └── scheduled-claude-code-dependency-audit.yml
```
Quick Start
1. Create the .claude directory
```sh
mkdir -p .claude/{agents,commands,hooks,skills}
```
2. Add a CLAUDE.md file
Create CLAUDE.md in your project root with your project's key information. See CLAUDE.md for a complete example.
```markdown
# Project Name

## Quick Facts
- **Stack**: React, TypeScript, Node.js
- **Test Command**: `npm run test`
- **Lint Command**: `npm run lint`

## Key Directories
- `src/components/` - React components
- `src/api/` - API layer
- `tests/` - Test files

## Code Style
- TypeScript strict mode
- Prefer interfaces over types
- No `any` - use `unknown`
```
3. Add settings.json with hooks
Create .claude/settings.json. See settings.json for a full example with auto-formatting, testing, and more.
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "[ \"$(git branch --show-current)\" != \"main\" ] || { echo '{\"block\": true, \"message\": \"Cannot edit on main branch\"}' >&2; exit 2; }",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```
4. Add your first skill
Create .claude/skills/testing-patterns/SKILL.md. See testing-patterns/SKILL.md for a comprehensive example.
```markdown
---
name: testing-patterns
description: Jest testing patterns for this project. Use when writing tests, creating mocks, or following TDD workflow.
---

# Testing Patterns

## Test Structure
- Use `describe` blocks for grouping
- Use `it` for individual tests
- Follow AAA pattern: Arrange, Act, Assert

## Mocking
- Use factory functions: `getMockUser(overrides)`
- Mock external dependencies, not internal modules
```
Tip: The `description` field is critical. Claude uses it to decide when to apply the skill, so include keywords users would naturally mention.
Configuration Reference
CLAUDE.md - Project Memory
CLAUDE.md is Claude's persistent memory that loads automatically at session start.
Locations (in order of precedence):
- `.claude/CLAUDE.md` (project, in `.claude` folder)
- `./CLAUDE.md` (project root)
- `~/.claude/CLAUDE.md` (user-level, all projects)
What to include:
- Project stack and architecture overview
- Key commands (test, build, lint, deploy)
- Code style guidelines
- Important directories and their purposes
- Critical rules and constraints
Example: CLAUDE.md
settings.json - Hooks & Environment
The main configuration file for hooks, environment variables, and permissions.
Location: .claude/settings.json
Example: settings.json | Human-readable docs
Hook Events
| Event | When It Fires | Use Case |
|-------|---------------|----------|
| PreToolUse | Before tool execution | Block edits on main, validate commands |
| PostToolUse | After tool completes | Auto-format, run tests, lint |
| UserPromptSubmit | User submits prompt | Add context, suggest skills |
| Stop | Agent finishes | Decide if Claude should continue |
Hook Response Format
```jsonc
{
  "block": true,           // Block the action (PreToolUse only)
  "message": "Reason",     // Message to show user
  "feedback": "Info",      // Non-blocking feedback
  "suppressOutput": true,  // Hide command output
  "continue": false        // Whether to continue
}
```
Exit Codes
- `0` - Success
- `2` - Blocking error (PreToolUse only, blocks the tool)
- Other - Non-blocking error
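To make the exit-code contract concrete, here is a minimal, self-contained sketch of a PreToolUse-style branch guard (the `check_branch` function name is mine, not part of the repo; a real hook would read the branch via `git branch --show-current`):

```shell
#!/usr/bin/env sh
# Return 2 (blocking) when the branch is "main", 0 otherwise.
check_branch() {
  if [ "$1" = "main" ]; then
    # JSON on stderr becomes the message shown to the user
    echo '{"block": true, "message": "Cannot edit on main branch"}' >&2
    return 2
  fi
  return 0
}

check_branch "feature/login" && echo "allowed"
check_branch "main" || echo "blocked with status $?"
```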
MCP Servers - External Integrations
MCP (Model Context Protocol) servers let Claude Code connect to external tools like JIRA, GitHub, Slack, databases, and more. This is how you enable workflows like "read a ticket, implement it, and update the ticket status."
Location: .mcp.json (project root, committed to git for team sharing)
Example: .mcp.json
How MCP Works
Each entry in .mcp.json declares how Claude Code should start or reach a server: typically a command plus arguments for a local stdio server, or a URL for a remote one. At session start, Claude Code connects to each configured server and exposes the tools it advertises to the model alongside the built-in tools.
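As a sketch, a `.mcp.json` wiring up a hypothetical JIRA MCP server might look like this (the package name and env var are placeholders; substitute your provider's actual server):

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-jira"],
      "env": {
        "JIRA_API_TOKEN": "${JIRA_API_TOKEN}"
      }
    }
  }
}
```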
Copyright belongs to the original author ChrisWiles.
