review-implementing

Co-Pilot · Updated a month ago
mhattingpete · 0.2k
mhattingpete/claude-skills-marketplace/engineering-workflow-plugin/skills/review-implementing

Agent Score: 80
💡 Summary

An AI agent skill that systematically parses and implements code review feedback into actionable tasks and code changes.

🎯 Target Audience

Software Engineers · Tech Leads · Open Source Maintainers · DevOps Engineers · Code Reviewers

🤖 AI Roast: It's a glorified to-do list manager for code reviews, turning human feedback into a robot's checklist with the enthusiasm of a spreadsheet.

Security Analysis: Low Risk

The skill executes code editing and file system operations (Grep, Glob, Edit, Write). The main risk is unauthorized or erroneous modification of source code, potentially introducing vulnerabilities or breaking functionality. Mitigation: Implement a mandatory dry-run or confirmation step for changes outside a designated 'sandbox' directory and integrate with version control to allow easy rollback.


name: review-implementing
description: Process and implement code review feedback systematically. Use when the user provides reviewer comments, PR feedback, code review notes, or asks to implement suggestions from reviews.

Review Feedback Implementation

Systematically process and implement changes based on code review feedback.

When to Use

Use this skill when the user:

  • Provides reviewer comments or feedback
  • Pastes PR review notes
  • Mentions implementing review suggestions
  • Says "address these comments" or "implement feedback"
  • Shares list of changes requested by reviewers

Systematic Workflow

1. Parse Reviewer Notes

Identify individual feedback items:

  • Split numbered lists (1., 2., etc.)
  • Handle bullet points or unnumbered feedback
  • Extract distinct change requests
  • Clarify ambiguous items before starting
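The splitting described above can be sketched in Python. This is a minimal illustration, not the skill's actual parser; `parse_feedback` is a hypothetical helper name:

```python
import re

def parse_feedback(text: str) -> list[str]:
    """Split raw reviewer notes into individual feedback items.

    Handles numbered lists ("1.", "2)") and bullet markers ("-", "*", "•");
    each remaining non-empty line is treated as one distinct item.
    """
    items: list[str] = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Strip a leading list marker such as "1.", "2)", "-", "*", or "•"
        cleaned = re.sub(r"^(\d+[.)]|[-*•])\s*", "", line)
        if cleaned:
            items.append(cleaned)
    return items
```

Ambiguous items surfaced by this pass are the ones to clarify with the user before starting work.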

2. Create Todo List

Use TodoWrite tool to create actionable tasks:

  • Each feedback item becomes one or more todos
  • Break down complex feedback into smaller tasks
  • Make tasks specific and measurable
  • Mark first task as in_progress before starting

Example:

- Add type hints to extract function
- Fix duplicate tag detection logic
- Update docstring in chain.py
- Add unit test for edge case
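TodoWrite is an internal agent tool, but its key invariant (tasks move pending → in_progress → completed, with at most one task in_progress) can be sketched with a minimal stand-in; this class is an assumption-laden analogue, not the tool's real API:

```python
from dataclasses import dataclass, field

@dataclass
class TodoList:
    """Minimal analogue of the TodoWrite invariants."""
    tasks: dict[str, str] = field(default_factory=dict)  # name -> status

    def add(self, name: str) -> None:
        self.tasks[name] = "pending"

    def start(self, name: str) -> None:
        # Enforce the "only one in_progress at a time" rule
        if "in_progress" in self.tasks.values():
            raise RuntimeError("only one task may be in_progress at a time")
        self.tasks[name] = "in_progress"

    def complete(self, name: str) -> None:
        self.tasks[name] = "completed"
```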

3. Implement Changes Systematically

For each todo item:

Locate relevant code:

  • Use Grep to search for functions/classes
  • Use Glob to find files by pattern
  • Read current implementation

Make changes:

  • Use Edit tool for modifications
  • Follow project conventions (CLAUDE.md)
  • Preserve existing functionality unless the feedback explicitly requests a behavior change

Verify changes:

  • Check syntax correctness
  • Run relevant tests if applicable
  • Ensure changes address reviewer's intent

Update status:

  • Mark todo as completed immediately after finishing
  • Move to next todo (only one in_progress at a time)
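The locate step above can be illustrated with a Grep-style definition search. This is a hedged sketch over an in-memory {path: source} mapping, not the actual Grep tool:

```python
import re

def find_definition(files: dict[str, str], name: str) -> list[tuple[str, int]]:
    """Grep-style search for a Python function or class definition
    named `name`; returns (path, line_number) matches."""
    pattern = re.compile(rf"^\s*(def|class)\s+{re.escape(name)}\b")
    hits: list[tuple[str, int]] = []
    for path, source in files.items():
        for lineno, line in enumerate(source.splitlines(), start=1):
            if pattern.match(line):
                hits.append((path, lineno))
    return hits
```

Reading the matched implementation before editing keeps the change anchored to what the reviewer actually saw.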

4. Handle Different Feedback Types

Code changes:

  • Use Edit tool for existing code
  • Follow type hint conventions (PEP 604/585)
  • Maintain consistent style

New features:

  • Create new files with Write tool if needed
  • Add corresponding tests
  • Update documentation

Documentation:

  • Update docstrings following project style
  • Modify markdown files as needed
  • Keep explanations concise

Tests:

  • Write tests as functions, not classes
  • Use descriptive names
  • Follow pytest conventions
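A sketch of those conventions: pytest collects plain functions named `test_*`, so no `unittest.TestCase` class is needed. `dedupe` here is a stand-in for whatever code is under review:

```python
# test_dedupe.py -- pytest discovers these by the test_* naming convention.

def dedupe(items: list[str]) -> list[str]:
    # Stand-in for the code under test.
    return list(dict.fromkeys(items))

def test_dedupe_removes_duplicates_in_order():
    assert dedupe(["a", "b", "a"]) == ["a", "b"]

def test_dedupe_handles_empty_input():
    assert dedupe([]) == []
```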

Refactoring:

  • Preserve functionality
  • Improve code structure
  • Run tests to verify no regressions

5. Validation

After implementing changes:

  • Run affected tests
  • Check for linting errors: uv run ruff check
  • Verify changes don't break existing functionality
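As a sketch, the validation pass might look like this in a uv-managed project; the file paths and the presence of ruff and pytest are assumptions about the project's setup:

```shell
# Lint only the modules you touched
uv run ruff check src/chain.py

# Run the tests affected by the change
uv run pytest tests/test_chain.py -q
```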

6. Communication

Keep user informed:

  • Update todo list in real-time
  • Ask for clarification on ambiguous feedback
  • Report blockers or challenges
  • Summarize changes at completion

Edge Cases

Conflicting feedback:

  • Ask user for guidance
  • Explain conflict clearly

Breaking changes required:

  • Notify user before implementing
  • Discuss impact and alternatives

Tests fail after changes:

  • Fix tests before marking todo complete
  • Ensure all related tests pass

Referenced code doesn't exist:

  • Ask user for clarification
  • Verify understanding before proceeding

Important Guidelines

  • Always use TodoWrite for tracking progress
  • Mark todos completed immediately after each item
  • Only one todo in_progress at any time
  • Don't batch completions - update status in real-time
  • Ask questions for unclear feedback
  • Run tests if changes affect tested code
  • Follow CLAUDE.md conventions for all code changes
  • Use conventional commits if creating commits afterward
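If commits are created afterward, a Conventional Commits message follows the `<type>(<scope>): <summary>` shape; the scope and summary below are illustrative, not from the project:

```shell
# Common types: feat, fix, docs, refactor, test, chore
git commit -m "fix(chain): deduplicate tags in extract()" \
           -m "Addresses reviewer feedback items 2 and 3."
```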

5-Dim Analysis

  • Clarity: 8/10
  • Novelty: 6/10
  • Utility: 9/10
  • Completeness: 9/10
  • Maintainability: 8/10

Pros & Cons

Pros

  • Provides a structured, repeatable workflow for handling feedback.
  • Reduces human error in tracking and implementing review items.
  • Integrates with existing tools (Grep, Edit) for context-aware changes.
  • Promotes clear communication and status updates.

Cons

  • May struggle with highly ambiguous or subjective feedback requiring human judgment.
  • Relies on the user to provide well-structured initial input.
  • Could be over-engineered for very small, straightforward review changes.
  • Adds process overhead for trivial fixes.

Related Skills

useful-ai-prompts (Grade A, Co-Pilot score 88/100)
“A treasure trove of prompts, but don’t expect them to write your novel for you.”

fastmcp (Grade A, Co-Pilot score 86/100)
“FastMCP: because who doesn't love a little complexity with their AI?”

python-pro (Grade A, Co-Pilot score 86/100)
“Powerful, but the setup might scare off the impatient.”
Disclaimer: This content is sourced from GitHub open source projects for display and rating purposes only.

Copyright belongs to the original author mhattingpete.