Critical Issue Hotfix Workflow

L3
Model Context Protocol · GitHub · Claude Code

Implement a critical issue hotfix workflow for memory and context management issues with proper PR management and issue tracking.

Created by Zijian Wu
2025-08-15
Issue Management · PR Workflows

Model Ranking

| Provider | Model | Pass | Avg Time | Avg Turns | Input Tokens | Output Tokens | Total Tokens |
|---|---|---|---|---|---|---|---|
| Claude | claude-opus-4-5-high | 4/4 | 104.3s | 6.0 | 115,751 | 3,857 | 119,608 |
| Claude | claude-sonnet-4-5 | 4/4 | 138.4s | 10.3 | 321,763 | 4,460 | 326,224 |
| Claude | claude-sonnet-4-high | 4/4 | 125.5s | 11.0 | 271,620 | 4,244 | 275,864 |
| Claude | claude-sonnet-4-low | 4/4 | 121.1s | 10.8 | 253,552 | 3,673 | 257,224 |
| Gemini | gemini-3-pro-low | 4/4 | 122.0s | 11.0 | 184,027 | 3,835 | 187,863 |
| MoonshotAI | kimi-k2-0905 | 4/4 | 255.8s | 10.8 | 196,029 | 2,974 | 199,002 |
| Claude | claude-sonnet-4 | 3/4 | 142.4s | 10.5 | 316,114 | 4,211 | 320,325 |
| DeepSeek | deepseek-v3-2-chat | 3/4 | 220.5s | 15.8 | 527,763 | 4,499 | 532,261 |
| Gemini | gemini-3-pro-high | 3/4 | 123.1s | 11.0 | 190,867 | 3,586 | 194,452 |
| Z.ai | glm-4-5 | 3/4 | 112.9s | 10.3 | 281,442 | 3,473 | 284,915 |
| OpenAI | gpt-5-high | 3/4 | 500.0s | 11.3 | 170,726 | 17,405 | 188,131 |
| MoonshotAI | kimi-k2-0711 | 3/4 | 195.7s | 11.0 | 275,245 | 2,527 | 277,772 |
| Qwen | qwen-3-max | 3/4 | 67.5s | 10.0 | 195,208 | 1,797 | 197,005 |
| DeepSeek | deepseek-v3-1-terminus-thinking | 2/4 | 468.9s | 6.0 | 152,870 | 11,725 | 164,595 |
| DeepSeek | deepseek-v3-2-thinking | 2/4 | 247.0s | 15.5 | 461,033 | 5,054 | 466,087 |
| Gemini | gemini-2-5-pro | 2/4 | 99.0s | 10.3 | 272,560 | 5,530 | 278,091 |
| OpenAI | gpt-5-medium | 2/4 | 139.9s | 10.3 | 146,656 | 6,933 | 153,589 |
| Grok | grok-code-fast-1 | 2/4 | 54.2s | 13.3 | 441,864 | 3,677 | 445,541 |
| Qwen | qwen-3-coder-plus | 2/4 | 89.4s | 10.0 | 308,167 | 2,677 | 310,844 |
| Claude | claude-opus-4-1 | 1/1 | 325.1s | 11.0 | 355,340 | 4,063 | 359,403 |
| DeepSeek | deepseek-chat | 1/4 | 228.6s | 11.8 | 388,094 | 2,912 | 391,005 |
| DeepSeek | deepseek-v3-1-terminus | 1/4 | 123.9s | 4.0 | 136,498 | 1,473 | 137,971 |
| Gemini | gemini-2-5-flash | 1/4 | 42.0s | 9.5 | 252,660 | 3,512 | 256,171 |
| OpenAI | gpt-5-mini-medium | 1/4 | 74.9s | 11.0 | 162,705 | 4,455 | 167,159 |
| OpenAI | gpt-oss-120b | 1/4 | 35.1s | 8.0 | 135,787 | 1,943 | 137,730 |
| OpenAI | gpt-4-1 | 0/4 | 94.8s | 8.5 | 104,154 | 9,641 | 113,795 |
| OpenAI | gpt-4-1-mini | 0/4 | 63.2s | 9.0 | 133,377 | 1,440 | 134,817 |
| OpenAI | gpt-4-1-nano | 0/4 | 31.1s | 6.0 | 75,554 | 1,294 | 76,848 |
| OpenAI | gpt-5-low | 0/4 | 168.7s | 10.8 | 213,122 | 8,214 | 221,336 |
| OpenAI | gpt-5-mini-high | 0/4 | 138.2s | 10.5 | 152,947 | 11,242 | 164,189 |
| OpenAI | gpt-5-mini-low | 0/4 | 48.4s | 9.3 | 167,417 | 1,573 | 168,989 |
| OpenAI | gpt-5-nano-high | 0/4 | 254.0s | 26.0 | 930,863 | 35,871 | 966,733 |
| OpenAI | gpt-5-nano-low | 0/4 | 77.9s | 15.3 | 211,355 | 3,985 | 215,340 |
| OpenAI | gpt-5-nano-medium | 0/4 | 154.8s | 28.8 | 655,740 | 14,140 | 669,880 |
| Grok | grok-4 | 0/4 | 133.0s | 12.8 | 298,123 | 1,661 | 301,436 |
| Grok | grok-4-fast | 0/4 | 56.0s | 10.3 | 200,058 | 3,771 | 203,829 |
| OpenAI | o3 | 0/4 | 59.9s | 9.0 | 149,525 | 2,557 | 152,082 |
| OpenAI | o4-mini | 0/4 | 153.9s | 10.8 | 221,050 | 6,996 | 228,046 |



Instruction

I need you to implement a comprehensive critical issue hotfix workflow for the repository that demonstrates advanced PR management, selective merging, and issue resolution tracking.

Step 1: Create Critical Bug Tracking Issue. Create a new issue with the following (an illustrative sketch follows the list):

  • Title: "CRITICAL: Memory and Context Management Issues - Hotfix Tracking"
  • Body must include:
    • A "## Critical Issues" heading listing issues #49 and #46
    • A "## Impact Assessment" heading describing user impact
    • A "## Resolution Strategy" heading with planned approach
    • References to existing issues #49, #46, and #47 using "#" notation
    • Keywords: "memory exhaustion", "context auto-compact", "JavaScript heap", "hotfix priority"
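
For illustration only, a minimal sketch of this step against the GitHub REST API is shown below. It assumes the same MCP_GITHUB_TOKEN and GITHUB_EVAL_ORG environment variables (and the claude-code repository name) that the verify script at the bottom of this page reads; in practice the step would normally be done through the GitHub MCP tools, and the example body text is just one way to satisfy the keyword checks.

```python
# Sketch only: create the tracking issue via the REST API with requests.
import os
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]  # assumption: same env vars as the verify script
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}

# Example body that covers the required headings, references, and keywords.
body = """## Critical Issues
- #49: context auto-compact stuck at 0% (memory exhaustion on macOS)
- #46: JavaScript heap out of memory during large MCP operations

## Impact Assessment
Users hit hard crashes and an unusable CLI; hotfix priority.

## Resolution Strategy
Ship a v1.0.72 hotfix branch with memory-optimization documentation; see also #47.
"""

resp = requests.post(
    f"https://api.github.com/repos/{ORG}/claude-code/issues",
    headers=HEADERS,
    json={
        "title": "CRITICAL: Memory and Context Management Issues - Hotfix Tracking",
        "body": body,
    },
    timeout=30,
)
resp.raise_for_status()
print("Created tracking issue #", resp.json()["number"])
```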

Step 2: Create Memory Optimization Hotfix Branch. Create a new branch called 'hotfix/memory-optimization-v1.0.72' from the main branch.
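
A sketch of cutting the branch via the Git references API, under the same token/org assumptions as the Step 1 example:

```python
# Sketch only: resolve the tip of main, then create the hotfix ref pointing at it.
import os
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]
REPO = f"https://api.github.com/repos/{ORG}/claude-code"
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}

main = requests.get(f"{REPO}/branches/main", headers=HEADERS, timeout=30)
main.raise_for_status()
resp = requests.post(
    f"{REPO}/git/refs",
    headers=HEADERS,
    json={
        "ref": "refs/heads/hotfix/memory-optimization-v1.0.72",
        "sha": main.json()["commit"]["sha"],
    },
    timeout=30,
)
resp.raise_for_status()
```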

Step 3: Implement Memory Management Documentation. On the hotfix branch, create the file docs/MEMORY_OPTIMIZATION.md with this exact content (an upload sketch follows the content block):

````markdown
# Memory Optimization Guide for Claude Code v1.0.72

## Overview
This document addresses critical memory issues identified in issues #49 and #46.

## Memory Management Issues

### Context Auto-Compact Problem (Issue #49)
- **Root Cause**: Context management stuck at 0% completion
- **Impact**: Tool becomes unusable on macOS platforms
- **Solution**: Implement progressive context cleanup with configurable thresholds

### JavaScript Heap Exhaustion (Issue #46)
- **Root Cause**: Memory allocation failure during large MCP operations
- **Impact**: Complete Claude Code crash requiring restart
- **Solution**: Add streaming data processing and garbage collection optimization

## Optimization Strategies

### Immediate Fixes
1. **Context Buffer Management**
   - Implement 10MB default context buffer limit
   - Add automatic context pruning at 80% threshold
   - Enable manual context reset via `/memory-reset` command

2. **MCP Operation Streaming**
   - Process large datasets in 1MB chunks
   - Implement backpressure for MongoDB operations
   - Add memory usage monitoring and alerts

### Configuration Options
```json
{
  "memory": {
    "contextBufferLimit": "10MB",
    "autoCompactThreshold": 0.8,
    "streamingChunkSize": "1MB",
    "gcOptimization": true
  }
}
```

## Related Issues

- Fixes issue #49: Context auto-compact functionality
- Addresses issue #46: JavaScript heap out of memory crashes
- Related to issue #47: Cross-project hook execution problems
````
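
A sketch of committing the file to the hotfix branch via the contents API; the content must be base64-encoded, and reading it from a local MEMORY_OPTIMIZATION.md copy of the text above is an assumption made only for the example:

```python
# Sketch only: PUT the documentation file onto the hotfix branch.
import base64
import os
from pathlib import Path
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}
# Assumption: a local copy of the exact content shown above.
doc_text = Path("MEMORY_OPTIMIZATION.md").read_text(encoding="utf-8")

resp = requests.put(
    f"https://api.github.com/repos/{ORG}/claude-code/contents/docs/MEMORY_OPTIMIZATION.md",
    headers=HEADERS,
    json={
        "message": "docs: add memory optimization guide for v1.0.72 hotfix",
        "content": base64.b64encode(doc_text.encode("utf-8")).decode("ascii"),
        "branch": "hotfix/memory-optimization-v1.0.72",
    },
    timeout=30,
)
resp.raise_for_status()
```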

Step 4: Create Pull Request with Issue Cross-References. Create a pull request from 'hotfix/memory-optimization-v1.0.72' to 'main' with the following (a sketch follows the list):

  • Title: "HOTFIX: Critical memory optimization for issues #49 and #46"
  • Body must include:
    • A "## Summary" heading describing the memory fixes
    • A "## Critical Issues Addressed" heading listing specific problems
    • A "## Documentation Changes" heading describing the new guide
    • "Addresses #49" and "Addresses #46" pattern linking to existing issues
    • Reference to your tracking issue using "Tracked in #[ISSUE_NUMBER]"
    • Keywords: "memory optimization", "context management", "heap exhaustion", "v1.0.72 hotfix"
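
A sketch of opening the PR; the tracking_issue_number value is a placeholder for whatever number Step 1 actually produced, and the body text is just one wording that covers the required headings and keywords:

```python
# Sketch only: open the hotfix PR against main.
import os
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}
tracking_issue_number = 52  # placeholder: use the number returned in Step 1

pr_body = f"""## Summary
Memory optimization documentation for the v1.0.72 hotfix (context management, heap exhaustion).

## Critical Issues Addressed
- Addresses #49 (context auto-compact)
- Addresses #46 (JavaScript heap exhaustion)

## Documentation Changes
Adds docs/MEMORY_OPTIMIZATION.md describing the memory optimization strategy.

Tracked in #{tracking_issue_number}
"""

resp = requests.post(
    f"https://api.github.com/repos/{ORG}/claude-code/pulls",
    headers=HEADERS,
    json={
        "title": "HOTFIX: Critical memory optimization for issues #49 and #46",
        "head": "hotfix/memory-optimization-v1.0.72",
        "base": "main",
        "body": pr_body,
    },
    timeout=30,
)
resp.raise_for_status()
print("Opened hotfix PR #", resp.json()["number"])
```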

Step 5: Update and Merge PR #51 (Statsig Logging). For the existing PR #51 (a sketch follows the list):

  • Update the PR description to include technical implementation details
  • Add a "## Technical Implementation" section mentioning "event logging integration"
  • Add keywords: "workflow enhancement", "issue management automation", "logging consistency"
  • Merge the PR using the squash merge method
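
A sketch of the update-then-squash-merge sequence for PR #51; appending a new section to the existing description is one reasonable way to meet the requirement:

```python
# Sketch only: extend PR #51's description, then squash-merge it.
import os
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]
REPO = f"https://api.github.com/repos/{ORG}/claude-code"
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}

current = requests.get(f"{REPO}/pulls/51", headers=HEADERS, timeout=30).json()
addition = (
    "\n\n## Technical Implementation\n"
    "Event logging integration via Statsig; workflow enhancement for "
    "issue management automation and logging consistency.\n"
)
requests.patch(
    f"{REPO}/pulls/51",
    headers=HEADERS,
    json={"body": (current.get("body") or "") + addition},
    timeout=30,
).raise_for_status()

requests.put(
    f"{REPO}/pulls/51/merge",
    headers=HEADERS,
    json={"merge_method": "squash"},
    timeout=30,
).raise_for_status()
```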

Step 6: Add Implementation Comment to Tracking Issue. Add a comment to your original tracking issue with the following (a sketch follows the list):

  • Reference to your hotfix PR using "PR #[NUMBER]" pattern
  • Reference to actions taken on PR #51
  • Technical details about the memory optimization approach
  • Keywords: "context buffer management", "streaming optimization", "progressive cleanup"
  • Mention of configuration options and thresholds
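
A sketch of posting the implementation comment; both numbers are placeholders for the issue and PR created in Steps 1 and 4, and the comment text simply echoes the thresholds from the documentation above:

```python
# Sketch only: comment on the tracking issue with PR references and technical details.
import os
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}
tracking_issue_number, hotfix_pr_number = 52, 53  # placeholders

comment = (
    f"Hotfix opened as PR #{hotfix_pr_number}; PR #51 was updated and squash-merged.\n"
    "Approach: context buffer management (10MB limit, pruning at the 0.8 threshold), "
    "streaming optimization in 1MB chunks, and progressive cleanup of stale context."
)
requests.post(
    f"https://api.github.com/repos/{ORG}/claude-code/issues/{tracking_issue_number}/comments",
    headers=HEADERS,
    json={"body": comment},
    timeout=30,
).raise_for_status()
```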

Step 7: Close Tracking Issue with Resolution Summary. Close your tracking issue by updating its state to 'closed' with the following (a sketch follows the list):

  • A final comment summarizing completed actions
  • Reference to merged PR #51 and pending hotfix PR
  • Keywords: "hotfix deployment", "memory issues resolved", "documentation updated"
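
A sketch of the final summary comment plus the state change; again the issue number is a placeholder:

```python
# Sketch only: post the resolution summary, then close the tracking issue.
import os
import requests

ORG = os.environ["GITHUB_EVAL_ORG"]
REPO = f"https://api.github.com/repos/{ORG}/claude-code"
HEADERS = {
    "Authorization": f"Bearer {os.environ['MCP_GITHUB_TOKEN']}",
    "Accept": "application/vnd.github.v3+json",
}
tracking_issue_number = 52  # placeholder from Step 1

summary = (
    "Hotfix deployment complete: memory issues resolved, documentation updated. "
    "PR #51 merged; hotfix PR pending review."
)
requests.post(f"{REPO}/issues/{tracking_issue_number}/comments",
              headers=HEADERS, json={"body": summary}, timeout=30).raise_for_status()
requests.patch(f"{REPO}/issues/{tracking_issue_number}",
               headers=HEADERS, json={"state": "closed"}, timeout=30).raise_for_status()
```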


Verify

```python
import sys
import os
import requests
from typing import Dict, List, Optional, Tuple
import base64
from dotenv import load_dotenv


def _get_github_api(
    endpoint: str, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> Tuple[bool, Optional[Dict]]:
    """Make a GET request to GitHub API and return (success, response)."""
    url = f"https://api.github.com/repos/{org}/{repo}/{endpoint}"
    try:
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 200:
            return True, response.json()
        elif response.status_code == 404:
            return False, None
        else:
            print(f"API error for {endpoint}: {response.status_code}", file=sys.stderr)
            return False, None
    except Exception as e:
        print(f"Exception for {endpoint}: {e}", file=sys.stderr)
        return False, None


def _check_branch_exists(
    branch_name: str, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> bool:
    """Verify that a branch exists in the repository."""
    success, _ = _get_github_api(f"branches/{branch_name}", headers, org, repo)
    return success


def _get_file_content(
    file_path: str,
    headers: Dict[str, str],
    org: str,
    repo: str = "claude-code",
    ref: str = "main",
) -> Optional[str]:
    """Get the content of a file from the repository."""
    success, result = _get_github_api(
        f"contents/{file_path}?ref={ref}", headers, org, repo
    )
    if not success or not result:
        return None

    try:
        content = base64.b64decode(result.get("content", "")).decode("utf-8")
        return content
    except Exception as e:
        print(f"Content decode error for {file_path}: {e}", file=sys.stderr)
        return None


def _find_issue_by_title_keyword(
    keyword: str, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> Optional[Dict]:
    """Find an issue by title keyword and return the issue data."""
    # Check both open and closed issues
    for state in ["open", "closed"]:
        success, issues = _get_github_api(
            f"issues?state={state}&per_page=100", headers, org, repo
        )
        if success and issues:
            for issue in issues:
                if keyword.lower() in issue.get("title", "").lower():
                    return issue
    return None


def _find_pr_by_title_keyword(
    keyword: str, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> Optional[Dict]:
    """Find a PR by title keyword and return the PR data."""
    # Check both open and closed PRs
    for state in ["open", "closed"]:
        success, prs = _get_github_api(
            f"pulls?state={state}&per_page=100", headers, org, repo
        )
        if success and prs:
            for pr in prs:
                if keyword.lower() in pr.get("title", "").lower():
                    return pr
    return None


def _get_pr_by_number(
    pr_number: int, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> Optional[Dict]:
    """Get a specific PR by number."""
    success, pr = _get_github_api(f"pulls/{pr_number}", headers, org, repo)
    if success:
        return pr
    return None


def _check_issue_references(text: str, reference_numbers: List[str]) -> bool:
    """Check if text contains references to specified issue numbers."""
    if not text:
        return False

    return all(f"#{ref}" in text for ref in reference_numbers)


def _check_addresses_pattern(pr_body: str, issue_numbers: List[str]) -> bool:
    """Check if PR body contains 'Addresses #X' pattern for specified issues."""
    if not pr_body:
        return False

    return all(
        f"Addresses #{num}" in pr_body or f"addresses #{num}" in pr_body
        for num in issue_numbers
    )


def _get_issue_comments(
    issue_number: int, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> List[Dict]:
    """Get all comments for an issue."""
    success, comments = _get_github_api(
        f"issues/{issue_number}/comments", headers, org, repo
    )
    if success and comments:
        return comments
    return []


def _get_pr_reviews(
    pr_number: int, headers: Dict[str, str], org: str, repo: str = "claude-code"
) -> List[Dict]:
    """Get all reviews for a PR."""
    success, reviews = _get_github_api(f"pulls/{pr_number}/reviews", headers, org, repo)
    if success and reviews:
        return reviews
    return []


def _check_title_keywords(title: str, required_keywords: List[str]) -> bool:
    """Check if title contains all required keywords."""
    return all(keyword.lower() in title.lower() for keyword in required_keywords)


def _check_headings_and_keywords(
    body: str, headings: List[str], keywords: List[str]
) -> bool:
    """Check if body contains required headings and keywords."""
    has_headings = all(heading in body for heading in headings)
    has_keywords = all(keyword.lower() in body.lower() for keyword in keywords)
    return has_headings and has_keywords


def _check_exact_file_content(content: str, expected_sections: List[str]) -> bool:
    """Check if file content contains expected sections."""
    return all(section in content for section in expected_sections)


def verify() -> bool:
    """
    Programmatically verify that the critical issue hotfix workflow meets the
    requirements described in description.md.
    """
    # Configuration constants
    HOTFIX_BRANCH_NAME = "hotfix/memory-optimization-v1.0.72"
    TRACKING_ISSUE_KEYWORD = "Memory and Context Management Issues"
    HOTFIX_PR_KEYWORD = "HOTFIX: Critical memory optimization"

    # Expected file content sections
    MEMORY_DOC_SECTIONS = [
        "# Memory Optimization Guide for Claude Code v1.0.72",
        "## Overview",
        "### Context Auto-Compact Problem (Issue #49)",
        "### JavaScript Heap Exhaustion (Issue #46)",
        "## Optimization Strategies",
        "### Immediate Fixes",
        "### Configuration Options",
        "## Related Issues",
    ]

    # Issue content requirements
    TRACKING_ISSUE_TITLE_KEYWORDS = [
        "CRITICAL",
        "Memory",
        "Context Management",
        "Hotfix Tracking",
    ]
    TRACKING_ISSUE_REFERENCE_NUMBERS = ["49", "46", "47"]
    TRACKING_ISSUE_HEADINGS = [
        "## Critical Issues",
        "## Impact Assessment",
        "## Resolution Strategy",
    ]
    TRACKING_ISSUE_KEYWORDS = [
        "memory exhaustion",
        "context auto-compact",
        "JavaScript heap",
        "hotfix priority",
    ]

    # PR content requirements
    HOTFIX_PR_TITLE_KEYWORDS = [
        "HOTFIX",
        "Critical memory optimization",
        "issues #49",
        "#46",
    ]
    HOTFIX_PR_ADDRESSES_NUMBERS = ["49", "46"]
    HOTFIX_PR_HEADINGS = [
        "## Summary",
        "## Critical Issues Addressed",
        "## Documentation Changes",
    ]
    HOTFIX_PR_KEYWORDS = [
        "memory optimization",
        "context management",
        "heap exhaustion",
        "v1.0.72 hotfix",
    ]

    # PR #51 update requirements
    PR51_UPDATE_KEYWORDS = [
        "Technical Implementation",
        "event logging integration",
        "workflow enhancement",
    ]

    # Issue comment requirements
    ISSUE_COMMENT_KEYWORDS = [
        "context buffer management",
        "streaming optimization",
        "progressive cleanup",
    ]

    # Load environment variables from .mcp_env
    load_dotenv(".mcp_env")

    # Get GitHub token and org
    github_token = os.environ.get("MCP_GITHUB_TOKEN")
    github_org = os.environ.get("GITHUB_EVAL_ORG")

    if not github_token:
        print("Error: MCP_GITHUB_TOKEN environment variable not set", file=sys.stderr)
        return False

    if not github_org:
        print("Error: GITHUB_EVAL_ORG environment variable not set", file=sys.stderr)
        return False

    headers = {
        "Authorization": f"Bearer {github_token}",
        "Accept": "application/vnd.github.v3+json",
    }

    # Run verification checks
    print("Verifying critical issue hotfix workflow completion...")

    # 1. Check that hotfix branch exists
    print("1. Verifying hotfix branch exists...")
    if not _check_branch_exists(HOTFIX_BRANCH_NAME, headers, github_org):
        print(f"Error: Branch '{HOTFIX_BRANCH_NAME}' not found", file=sys.stderr)
        return False
    print("✓ Hotfix branch created")

    # 2. Check that the memory optimization documentation exists with exact content
    print("2. Verifying MEMORY_OPTIMIZATION.md documentation...")
    memory_doc_content = _get_file_content(
        "docs/MEMORY_OPTIMIZATION.md",
        headers,
        github_org,
        "claude-code",
        HOTFIX_BRANCH_NAME,
    )
    if not memory_doc_content:
        print(
            "Error: docs/MEMORY_OPTIMIZATION.md not found in hotfix branch",
            file=sys.stderr,
        )
        return False

    if not _check_exact_file_content(memory_doc_content, MEMORY_DOC_SECTIONS):
        print(
            "Error: MEMORY_OPTIMIZATION.md missing required sections or content",
            file=sys.stderr,
        )
        return False
    print("✓ Memory optimization documentation created with correct content")

    # 3. Find and verify the tracking issue
    print("3. Verifying tracking issue creation and content...")
    tracking_issue = _find_issue_by_title_keyword(
        TRACKING_ISSUE_KEYWORD, headers, github_org
    )
    if not tracking_issue:
        print(
            f"Error: Tracking issue with keyword '{TRACKING_ISSUE_KEYWORD}' not found",
            file=sys.stderr,
        )
        return False

    tracking_issue_number = tracking_issue.get("number")
    tracking_issue_title = tracking_issue.get("title", "")
    tracking_issue_body = tracking_issue.get("body", "")

    # Check tracking issue title keywords
    if not _check_title_keywords(tracking_issue_title, TRACKING_ISSUE_TITLE_KEYWORDS):
        print("Error: Tracking issue title missing required keywords", file=sys.stderr)
        return False

    # Check tracking issue headings, content and references
    if not _check_headings_and_keywords(
        tracking_issue_body, TRACKING_ISSUE_HEADINGS, TRACKING_ISSUE_KEYWORDS
    ):
        print(
            "Error: Tracking issue missing required headings or keywords",
            file=sys.stderr,
        )
        return False

    if not _check_issue_references(
        tracking_issue_body, TRACKING_ISSUE_REFERENCE_NUMBERS
    ):
        print(
            "Error: Tracking issue does not reference required issues #49, #46, #47",
            file=sys.stderr,
        )
        return False
    print("✓ Tracking issue created with correct content and references")

    # 4. Find and verify the hotfix PR
    print("4. Verifying hotfix pull request creation and content...")
    hotfix_pr = _find_pr_by_title_keyword(HOTFIX_PR_KEYWORD, headers, github_org)
    if not hotfix_pr:
        print(
            f"Error: Hotfix PR with keyword '{HOTFIX_PR_KEYWORD}' not found",
            file=sys.stderr,
        )
        return False

    hotfix_pr_number = hotfix_pr.get("number")
    hotfix_pr_title = hotfix_pr.get("title", "")
    hotfix_pr_body = hotfix_pr.get("body", "")

    # Check hotfix PR title keywords
    if not _check_title_keywords(hotfix_pr_title, HOTFIX_PR_TITLE_KEYWORDS):
        print("Error: Hotfix PR title missing required keywords", file=sys.stderr)
        return False

    # Check hotfix PR headings and content
    if not _check_headings_and_keywords(
        hotfix_pr_body, HOTFIX_PR_HEADINGS, HOTFIX_PR_KEYWORDS
    ):
        print("Error: Hotfix PR missing required headings or keywords", file=sys.stderr)
        return False

    # Check hotfix PR addresses pattern
    if not _check_addresses_pattern(hotfix_pr_body, HOTFIX_PR_ADDRESSES_NUMBERS):
        print(
            "Error: Hotfix PR does not properly address issues #49 and #46",
            file=sys.stderr,
        )
        return False

    # Check reference to tracking issue
    if f"#{tracking_issue_number}" not in hotfix_pr_body:
        print(
            f"Error: Hotfix PR does not reference tracking issue #{tracking_issue_number}",
            file=sys.stderr,
        )
        return False
    print("✓ Hotfix PR created with correct content and references")

    # 5. Check PR #51 has been updated and merged
    print("5. Verifying PR #51 update and merge...")
    pr51 = _get_pr_by_number(51, headers, github_org)
    if not pr51:
        print("Error: PR #51 not found", file=sys.stderr)
        return False

    pr51_body = pr51.get("body", "")
    pr51_state = pr51.get("state", "")

    # Check PR #51 has been updated with required content
    if not _check_headings_and_keywords(
        pr51_body, ["## Technical Implementation"], PR51_UPDATE_KEYWORDS
    ):
        print(
            "Error: PR #51 missing updated technical implementation section",
            file=sys.stderr,
        )
        return False

    # Check PR #51 has been merged
    if pr51_state != "closed" or not pr51.get("merged_at"):
        print("Error: PR #51 has not been merged", file=sys.stderr)
        return False
    print("✓ PR #51 updated and merged successfully")

    # 6. Check tracking issue has implementation comment
    print("6. Verifying tracking issue implementation comment...")
    tracking_issue_comments = _get_issue_comments(
        tracking_issue_number, headers, github_org
    )

    has_implementation_comment = False
    for comment in tracking_issue_comments:
        body = comment.get("body", "")
        has_pr_ref = f"PR #{hotfix_pr_number}" in body
        has_pr51_ref = "PR #51" in body
        has_keywords = all(
            keyword.lower() in body.lower() for keyword in ISSUE_COMMENT_KEYWORDS
        )
        if has_pr_ref and has_pr51_ref and has_keywords:
            has_implementation_comment = True
            break

    if not has_implementation_comment:
        print(
            f"Error: Tracking issue #{tracking_issue_number} missing implementation comment with required references and keywords",
            file=sys.stderr,
        )
        return False
    print("✓ Tracking issue has implementation comment with PR references")

    # 7. Check tracking issue is closed
    print("7. Verifying tracking issue closure...")
    if tracking_issue.get("state") != "closed":
        print(
            f"Error: Tracking issue #{tracking_issue_number} is not closed",
            file=sys.stderr,
        )
        return False
    print("✓ Tracking issue closed successfully")

    print("\n✅ All verification checks passed!")
    print("Critical issue hotfix workflow completed successfully:")
    print(f"  - Tracking Issue #{tracking_issue_number}: {tracking_issue.get('title')}")
    print(f"  - Hotfix PR #{hotfix_pr_number}: {hotfix_pr.get('title')}")
    print(f"  - Branch: {HOTFIX_BRANCH_NAME}")
    print("  - PR #51 merged: ✓")
    print("  - Memory optimization documentation: ✓")

    return True


if __name__ == "__main__":
    success = verify()
    sys.exit(0 if success else 1)
```