# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
DeepCritical is an AI-native drug repurposing research agent for a HuggingFace hackathon. It uses a search-and-judge loop to autonomously search biomedical databases (PubMed, ClinicalTrials.gov, bioRxiv) and synthesize evidence for queries like "What existing drugs might help treat long COVID fatigue?".
**Current Status:** Phases 1-13 COMPLETE (Foundation through Modal sandbox integration).
## Development Commands
```bash
# Install all dependencies (including dev)
make install        # or: uv sync --all-extras && uv run pre-commit install

# Run all quality checks (lint + typecheck + test) - MUST PASS BEFORE COMMIT
make check

# Individual commands
make test           # uv run pytest tests/unit/ -v
make lint           # uv run ruff check src tests
make format         # uv run ruff format src tests
make typecheck      # uv run mypy src
make test-cov       # uv run pytest --cov=src --cov-report=term-missing

# Run single test
uv run pytest tests/unit/utils/test_config.py::TestSettings::test_default_max_iterations -v

# Integration tests (real APIs)
uv run pytest -m integration
```
## Architecture
**Pattern:** Search-and-judge loop with multi-tool orchestration.
```
User Question → Orchestrator
        ↓
Search Loop:
  1. Query PubMed, ClinicalTrials.gov, bioRxiv
  2. Gather evidence
  3. Judge quality ("Do we have enough?")
  4. If NO → Refine query, search more
  5. If YES → Synthesize findings (+ optional Modal analysis)
        ↓
Research Report with Citations
```
**Key Components:**

- `src/orchestrator.py` - Main agent loop
- `src/tools/pubmed.py` - PubMed E-utilities search
- `src/tools/clinicaltrials.py` - ClinicalTrials.gov API
- `src/tools/biorxiv.py` - bioRxiv/medRxiv preprint search
- `src/tools/code_execution.py` - Modal sandbox execution
- `src/tools/search_handler.py` - Scatter-gather orchestration
- `src/services/embeddings.py` - Semantic search & deduplication (ChromaDB)
- `src/services/statistical_analyzer.py` - Statistical analysis via Modal
- `src/agent_factory/judges.py` - LLM-based evidence assessment
- `src/agents/` - Magentic multi-agent mode (SearchAgent, JudgeAgent, etc.)
- `src/mcp_tools.py` - MCP tool wrappers for Claude Desktop
- `src/utils/config.py` - Pydantic Settings (loads from `.env`)
- `src/utils/models.py` - Evidence, Citation, SearchResult models (sketched below)
- `src/utils/exceptions.py` - Exception hierarchy
- `src/app.py` - Gradio UI with MCP server (HuggingFace Spaces)
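A rough sketch of what the core data models might look like; the class names come from the list above, but the field names are assumptions inferred from their roles, not the actual schema in `src/utils/models.py`:

```python
from pydantic import BaseModel


class Citation(BaseModel):
    title: str
    url: str
    source: str  # e.g. "pubmed", "clinicaltrials", "biorxiv" (assumed values)


class Evidence(BaseModel):
    content: str
    citation: Citation


class SearchResult(BaseModel):
    query: str
    evidence: list[Evidence]
```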
**Break Conditions:** Judge approval, token budget (50K max), or max iterations (default 10).
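A minimal sketch of the search-and-judge loop with its three break conditions; the helper names (`search_all`, `judge`, `synthesize`) and the verdict fields are illustrative assumptions, not the exact API in `src/orchestrator.py`:

```python
async def run(question: str, settings) -> str:
    query = question
    evidence: list = []
    tokens_used = 0
    for _ in range(settings.max_iterations):          # break: max iterations (default 10)
        evidence.extend(await search_all(query))      # PubMed, ClinicalTrials.gov, bioRxiv
        verdict = await judge(question, evidence)     # "Do we have enough?"
        tokens_used += verdict.tokens
        if verdict.approved or tokens_used > 50_000:  # break: judge approval / token budget
            break
        query = verdict.refined_query                 # refine and search again
    return await synthesize(question, evidence)       # report with citations
```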
## Configuration
Settings via `pydantic-settings` from `.env` (sketch below):
- `LLM_PROVIDER`: "openai" or "anthropic"
- `OPENAI_API_KEY` / `ANTHROPIC_API_KEY`: LLM keys
- `NCBI_API_KEY`: Optional, for higher PubMed rate limits
- `MODAL_TOKEN_ID` / `MODAL_TOKEN_SECRET`: For Modal sandbox (optional)
- `MAX_ITERATIONS`: 1-50, default 10
- `LOG_LEVEL`: DEBUG, INFO, WARNING, ERROR
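A hypothetical sketch of `src/utils/config.py`; field names mirror the env vars above (pydantic-settings matches them case-insensitively), while the defaults shown are assumptions:

```python
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    llm_provider: str = "openai"                           # LLM_PROVIDER
    openai_api_key: str | None = None                      # OPENAI_API_KEY
    anthropic_api_key: str | None = None                   # ANTHROPIC_API_KEY
    ncbi_api_key: str | None = None                        # NCBI_API_KEY (optional)
    modal_token_id: str | None = None                      # MODAL_TOKEN_ID (optional)
    modal_token_secret: str | None = None                  # MODAL_TOKEN_SECRET (optional)
    max_iterations: int = Field(default=10, ge=1, le=50)   # MAX_ITERATIONS: 1-50
    log_level: str = "INFO"                                # LOG_LEVEL


settings = Settings()
```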
## Exception Hierarchy
```
DeepCriticalError (base)
├── SearchError
│   └── RateLimitError
├── JudgeError
└── ConfigurationError
```
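Rendered as Python, the hierarchy in `src/utils/exceptions.py` would look like the following (class names are from the tree above; the docstrings are illustrative):

```python
class DeepCriticalError(Exception):
    """Base exception for all DeepCritical errors."""


class SearchError(DeepCriticalError):
    """Raised when a biomedical database search fails."""


class RateLimitError(SearchError):
    """Raised when an upstream API rate limit is hit."""


class JudgeError(DeepCriticalError):
    """Raised when evidence assessment fails."""


class ConfigurationError(DeepCriticalError):
    """Raised for invalid or missing configuration."""
```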
## Testing
- TDD: Write tests first in `tests/unit/`, implement in `src/`
- Markers: `unit`, `integration`, `slow`
- Mocking: `respx` for httpx, `pytest-mock` for general mocking (example below)
- Fixtures: `tests/conftest.py` has `mock_httpx_client`, `mock_llm_response`
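An illustrative unit test in this style, using `respx` to mock the httpx call so no real PubMed request is made; the URL and JSON shape are assumptions, and a real test would exercise the tool in `src/tools/pubmed.py` rather than plain httpx:

```python
import httpx
import pytest
import respx


@pytest.mark.unit
@respx.mock
def test_pubmed_search_is_mocked() -> None:
    # Register a mock response for the E-utilities search endpoint.
    route = respx.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi").mock(
        return_value=httpx.Response(200, json={"esearchresult": {"idlist": ["12345"]}})
    )
    # Plain httpx shown for brevity; the real test would call the PubMed tool.
    response = httpx.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi")
    assert route.called
    assert response.json()["esearchresult"]["idlist"] == ["12345"]
```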
## Git Workflow
- `main`: Production-ready (GitHub)
- `dev`: Development integration (GitHub)
- Remote `origin`: GitHub (source of truth for PRs/code review)
- Remote `huggingface-upstream`: HuggingFace Spaces (deployment target)
**HuggingFace Spaces Collaboration:**
- Each contributor should use their own dev branch: `yourname-dev` (e.g., `vcms-dev`, `mario-dev`)
- DO NOT push directly to `main` or `dev` on HuggingFace - these can be overwritten easily
- GitHub is the source of truth; HuggingFace is for deployment/demo
- Consider using git hooks to prevent accidental pushes to protected branches (see the sketch after this list)
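One possible hook, sketched as a Python `pre-push` script (save as `.git/hooks/pre-push` and make it executable); the protected branch and remote names come from this document, everything else is illustrative:

```python
#!/usr/bin/env python3
import sys

PROTECTED = {"refs/heads/main", "refs/heads/dev"}
remote = sys.argv[1]  # git passes the remote name, then its URL

if remote == "huggingface-upstream":
    # git feeds one line per ref: "<local_ref> <local_sha> <remote_ref> <remote_sha>"
    for line in sys.stdin:
        _local_ref, _local_sha, remote_ref, _remote_sha = line.split()
        if remote_ref in PROTECTED:
            print(f"Blocked: pushing {remote_ref} to {remote}; use your own *-dev branch.")
            sys.exit(1)
sys.exit(0)
```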