Joseph Pollack committed
Commit 05d4430 · unverified · 1 parent: cb48bd4

fix interface
.github/README.md CHANGED
@@ -1,7 +1,16 @@
+
+> [!IMPORTANT]
+> **You are reading the Github README!**
+>
+> - 📚 **Documentation**: See our [technical documentation](https://deepcritical.github.io/GradioDemo/) for detailed information
+> - 📖 **Demo README**: Check out the [Demo README](..README.md) for setup, configuration, and contribution guidelines
+> - 🏆 **Hackathon Submission**: Keep reading below for more information about our MCP Hackathon submission
+
+
 <div align="center">
 
 [![GitHub](https://img.shields.io/github/stars/DeepCritical/GradioDemo?style=for-the-badge&logo=github&logoColor=white&label=🐙%20GitHub&labelColor=181717&color=181717)](https://github.com/DeepCritical/GradioDemo)
-[![Documentation](https://img.shields.io/badge/📚%20Docs-0080FF?style=for-the-badge&logo=readthedocs&logoColor=white&labelColor=0080FF&color=0080FF)](docs/index.md)
+[![Documentation](https://img.shields.io/badge/Docs-0080FF?style=for-the-badge&logo=readthedocs&logoColor=white&labelColor=0080FF&color=0080FF)](deepcritical.github.io/GradioDemo/)
 [![Demo](https://img.shields.io/badge/🚀%20Demo-FFD21E?style=for-the-badge&logo=huggingface&logoColor=white&labelColor=FFD21E&color=FFD21E)](https://huggingface.co/spaces/DataQuests/DeepCritical)
 [![codecov](https://codecov.io/gh/DeepCritical/GradioDemo/graph/badge.svg?token=B1f05RCGpz)](https://codecov.io/gh/DeepCritical/GradioDemo)
 [![Join us on Discord](https://img.shields.io/discord/1109943800132010065?label=Discord&logo=discord&style=flat-square)](https://discord.gg/qdfnvSPcqP)
@@ -24,7 +33,7 @@ uv sync --all-extras
 
 ```bash
 # Start the Gradio app
-uv run gradio gradio src/app.py
+gradio run "src/app.py"
 ```
 
 Open your browser to `http://localhost:7860`.
.github/workflows/ci.yml CHANGED
@@ -44,12 +44,14 @@ jobs:
       - name: Run unit tests (No OpenAI/Anthropic, HuggingFace only)
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          LLM_PROVIDER: huggingface
         run: |
           uv run pytest tests/unit/ -v -m "not openai and not anthropic and not embedding_provider" --tb=short -p no:logfire --cov --cov-branch --cov-report=xml
 
       - name: Run local embeddings tests
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          LLM_PROVIDER: huggingface
         run: |
           uv run pytest tests/ -v -m "local_embeddings" --tb=short -p no:logfire --cov --cov-branch --cov-report=xml --cov-append || true
         continue-on-error: true # Allow failures if dependencies not available
@@ -57,6 +59,7 @@ jobs:
       - name: Run HuggingFace integration tests
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          LLM_PROVIDER: huggingface
         run: |
           uv run pytest tests/integration/ -v -m "huggingface and not embedding_provider" --tb=short -p no:logfire --cov --cov-branch --cov-report=xml --cov-append || true
         continue-on-error: true # Allow failures if HF_TOKEN not set
@@ -64,6 +67,7 @@ jobs:
       - name: Run non-OpenAI/Anthropic integration tests (excluding embedding providers)
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          LLM_PROVIDER: huggingface
         run: |
           uv run pytest tests/integration/ -v -m "integration and not openai and not anthropic and not embedding_provider" --tb=short -p no:logfire --cov --cov-branch --cov-report=xml --cov-append || true
         continue-on-error: true # Allow failures if dependencies not available
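The `-m` expressions in these steps select tests by marker name. As an illustration only (pytest does this filtering itself), a marker expression is an ordinary boolean formula where each name is true iff the test carries that marker; `selected` below is a hypothetical toy evaluator, not part of pytest or this repo:

```python
# Toy sketch of pytest -m marker-expression semantics (illustration only).
def selected(markers: set[str], expression: str) -> bool:
    # Collect the marker names referenced by the expression, skipping the
    # boolean keywords, then evaluate the expression with each name bound
    # to whether the test carries that marker.
    words = expression.replace("(", " ").replace(")", " ").split()
    names = {w for w in words if w not in {"and", "or", "not"}}
    return bool(eval(expression, {"__builtins__": {}}, {n: (n in markers) for n in names}))

# A plain unit test passes the CI filter; an openai-marked test does not.
print(selected({"unit"}, "not openai and not anthropic"))                   # True
print(selected({"openai", "integration"}, "not openai and not anthropic"))  # False
```

This is why adding `LLM_PROVIDER: huggingface` matters: the filter keeps provider-marked tests out, but unmarked code paths still consult the provider setting at runtime.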
README.md CHANGED
@@ -26,14 +26,14 @@ tags:
 > [!IMPORTANT]
 > **You are reading the Gradio Demo README!**
 >
-> - 📚 **Documentation**: See our [technical documentation](docs/index.md) for detailed information
+> - 📚 **Documentation**: See our [technical documentation](deepcritical.github.io/GradioDemo/) for detailed information
 > - 📖 **Complete README**: Check out the [full README](.github/README.md) for setup, configuration, and contribution guidelines
 > - 🏆 **Hackathon Submission**: Keep reading below for more information about our MCP Hackathon submission
 
 <div align="center">
 
 [![GitHub](https://img.shields.io/github/stars/DeepCritical/GradioDemo?style=for-the-badge&logo=github&logoColor=white&label=🐙%20GitHub&labelColor=181717&color=181717)](https://github.com/DeepCritical/GradioDemo)
-[![Documentation](https://img.shields.io/badge/📚%20Docs-0080FF?style=for-the-badge&logo=readthedocs&logoColor=white&labelColor=0080FF&color=0080FF)](docs/index.md)
+[![Documentation](https://img.shields.io/badge/📚%20Docs-0080FF?style=for-the-badge&logo=readthedocs&logoColor=white&labelColor=0080FF&color=0080FF)](deepcritical.github.io/GradioDemo/)
 [![Demo](https://img.shields.io/badge/🚀%20Demo-FFD21E?style=for-the-badge&logo=huggingface&logoColor=white&labelColor=FFD21E&color=FFD21E)](https://huggingface.co/spaces/DataQuests/DeepCritical)
 [![codecov](https://codecov.io/gh/DeepCritical/GradioDemo/graph/badge.svg?token=B1f05RCGpz)](https://codecov.io/gh/DeepCritical/GradioDemo)
 [![Join us on Discord](https://img.shields.io/discord/1109943800132010065?label=Discord&logo=discord&style=flat-square)](https://discord.gg/qdfnvSPcqP)
@@ -63,7 +63,7 @@ For this hackathon we're proposing a simple yet powerful Deep Research Agent tha
 ## Important information
 
 - **[readme](.github\README.md)**: configure, deploy , contribute and learn more here.
-- **[docs]**: want to know how all this works ? read our detailed technical documentation here.
+- **[docs](deepcritical.github.io/GradioDemo/)**: want to know how all this works ? read our detailed technical documentation here.
 - **[demo](https://huggingface/spaces/DataQuests/DeepCritical)**: Try our demo on huggingface
 - **[team](### Team)**: Join us , or follow us !
 - **[video]**: See our demo video
@@ -117,7 +117,7 @@ For this hackathon we're proposing a simple yet powerful Deep Research Agent tha
 ## Links
 
 [![GitHub](https://img.shields.io/github/stars/DeepCritical/GradioDemo?style=for-the-badge&logo=github&logoColor=white&label=🐙%20GitHub&labelColor=181717&color=181717)](https://github.com/DeepCritical/GradioDemo)
-[![Documentation](https://img.shields.io/badge/📚%20Docs-0080FF?style=for-the-badge&logo=readthedocs&logoColor=white&labelColor=0080FF&color=0080FF)](docs/index.md)
+[![Documentation](https://img.shields.io/badge/📚%20Docs-0080FF?style=for-the-badge&logo=readthedocs&logoColor=white&labelColor=0080FF&color=0080FF)](deepcritical.github.io/GradioDemo/)
 [![Demo](https://img.shields.io/badge/🚀%20Demo-FFD21E?style=for-the-badge&logo=huggingface&logoColor=white&labelColor=FFD21E&color=FFD21E)](https://huggingface.co/spaces/DataQuests/DeepCritical)
 [![codecov](https://codecov.io/gh/DeepCritical/GradioDemo/graph/badge.svg?token=B1f05RCGpz)](https://codecov.io/gh/DeepCritical/GradioDemo)
 [![Join us on Discord](https://img.shields.io/discord/1109943800132010065?label=Discord&logo=discord&style=flat-square)](https://discord.gg/qdfnvSPcqP)
src/agent_factory/judges.py CHANGED
@@ -54,16 +54,25 @@ def get_model() -> Any:
 
     Explicitly passes API keys from settings to avoid requiring
     users to export environment variables manually.
+
+    Falls back to HuggingFace if the configured provider's API key is missing,
+    which is important for CI/testing environments.
     """
     llm_provider = settings.llm_provider
 
     if llm_provider == "anthropic":
         if not _ANTHROPIC_AVAILABLE:
-            raise ImportError(
-                "Anthropic models are not available. "
-                "Please install with: uv add 'pydantic-ai[anthropic]' or use 'openai'/'huggingface' as the LLM provider."
-            )
-        return AnthropicModel(settings.anthropic_model, api_key=settings.anthropic_api_key)  # type: ignore[call-arg]
+            logger.warning("Anthropic not available, falling back to HuggingFace")
+        elif settings.anthropic_api_key:
+            return AnthropicModel(settings.anthropic_model, api_key=settings.anthropic_api_key)  # type: ignore[call-arg]
+        else:
+            logger.warning("ANTHROPIC_API_KEY not set, falling back to HuggingFace")
+
+    if llm_provider == "openai":
+        if settings.openai_api_key:
+            return OpenAIModel(settings.openai_model, api_key=settings.openai_api_key)  # type: ignore[call-overload]
+        else:
+            logger.warning("OPENAI_API_KEY not set, falling back to HuggingFace")
 
     if llm_provider == "huggingface":
         if not _HUGGINGFACE_AVAILABLE:
@@ -80,12 +89,9 @@ def get_model() -> Any:
         provider = HuggingFaceProvider(hf_client=hf_client)  # type: ignore[misc]
         return HuggingFaceModel(model_name, provider=provider)  # type: ignore[misc]
 
-    if llm_provider == "openai":
-        return OpenAIModel(settings.openai_model, api_key=settings.openai_api_key)  # type: ignore[call-overload]
-
-    # Default to HuggingFace if provider is unknown or not specified
+    # Default to HuggingFace if provider is unknown or not specified, or if API key is missing
     if llm_provider != "huggingface":
-        logger.warning("Unknown LLM provider, defaulting to HuggingFace", provider=llm_provider)
+        logger.warning("Unknown LLM provider or missing API key, defaulting to HuggingFace", provider=llm_provider)
 
     if not _HUGGINGFACE_AVAILABLE:
         raise ImportError(
src/utils/config.py CHANGED
@@ -24,7 +24,7 @@ class Settings(BaseSettings):
     openai_api_key: str | None = Field(default=None, description="OpenAI API key")
     anthropic_api_key: str | None = Field(default=None, description="Anthropic API key")
     llm_provider: Literal["openai", "anthropic", "huggingface"] = Field(
-        default="openai", description="Which LLM provider to use"
+        default="huggingface", description="Which LLM provider to use (defaults to HuggingFace for CI/testing)"
     )
     openai_model: str = Field(default="gpt-5.1", description="OpenAI model name")
     anthropic_model: str = Field(
tests/conftest.py CHANGED
@@ -1,6 +1,7 @@
 """Shared pytest fixtures for all tests."""
 
-from unittest.mock import AsyncMock
+import os
+from unittest.mock import AsyncMock, patch
 
 import pytest
 
@@ -62,3 +63,19 @@ def integration_test_timeout():
     # This fixture runs automatically for all tests
     # Individual tests can override with asyncio.wait_for
     pass
+
+
+@pytest.fixture(autouse=True)
+def default_to_huggingface(monkeypatch):
+    """Ensure tests default to HuggingFace provider unless explicitly overridden.
+
+    This prevents tests from requiring OpenAI/Anthropic API keys.
+    Tests can override by setting LLM_PROVIDER in their environment or mocking settings.
+    """
+    # Only set if not already set (allows tests to override)
+    if "LLM_PROVIDER" not in os.environ:
+        monkeypatch.setenv("LLM_PROVIDER", "huggingface")
+
+    # Set a dummy HF_TOKEN if not set (prevents errors, but tests should mock actual API calls)
+    if "HF_TOKEN" not in os.environ:
+        monkeypatch.setenv("HF_TOKEN", "dummy_token_for_testing")
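The fixture's guard logic — set a default only when the variable is absent — can be shown without pytest. `apply_test_defaults` below is a hypothetical stand-in operating on a plain dict instead of `monkeypatch`/`os.environ`, but the precedence is the same:

```python
# Sketch of the autouse fixture's behaviour: defaults never clobber
# values the caller (or an individual test) has already set.
def apply_test_defaults(env: dict[str, str]) -> None:
    env.setdefault("LLM_PROVIDER", "huggingface")
    env.setdefault("HF_TOKEN", "dummy_token_for_testing")

env: dict[str, str] = {}
apply_test_defaults(env)
print(env["LLM_PROVIDER"])  # huggingface

env2 = {"LLM_PROVIDER": "openai"}
apply_test_defaults(env2)
print(env2["LLM_PROVIDER"])  # openai (explicit setting wins)
```

One caveat worth noting: because the real fixture checks `os.environ` rather than per-test state, a test that exports `LLM_PROVIDER` process-wide opts every later test out of the default too — `monkeypatch.setenv` avoids that by undoing its changes at teardown.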
tests/unit/orchestrator/test_graph_orchestrator.py CHANGED
@@ -210,7 +210,18 @@ class TestGraphOrchestrator:
 
         # Create flow and patch its run method to raise exception
         mock_judge = MagicMock()
-        with patch("src.orchestrator.research_flow.create_judge_handler", return_value=mock_judge):
+        with (
+            patch("src.orchestrator.research_flow.create_judge_handler", return_value=mock_judge),
+            patch("src.orchestrator.research_flow.create_knowledge_gap_agent") as mock_kg,
+            patch("src.orchestrator.research_flow.create_tool_selector_agent") as mock_ts,
+            patch("src.orchestrator.research_flow.create_thinking_agent") as mock_thinking,
+            patch("src.orchestrator.research_flow.create_writer_agent") as mock_writer,
+        ):
+            # Mock all agents to avoid needing API keys
+            mock_kg.return_value = MagicMock()
+            mock_ts.return_value = MagicMock()
+            mock_thinking.return_value = MagicMock()
+            mock_writer.return_value = MagicMock()
             original_flow = IterativeResearchFlow(
                 max_iterations=2,
                 max_time_minutes=5,
  max_time_minutes=5,