benchmark | check | score | reason |
|---|---|---|---|
SWE-Bench-Lancer | I.d.1 | 1 | As discussed in Section 1 of the paper, the benchmark uses a set of test cases that are verified for correctness and quality by human experts. |
SWE-Bench-Lancer | I.d.2 | 0 | The benchmark does not use objective metrics to measure the quality of test cases. |
SWE-Bench-Lancer | I.f.2 | 1 | As discussed in Section 1, the end-to-end testing is designed to simulate the entire user workflow. |
SWE-Bench-Lancer | I.f.3 | 0 | The test cases use hard-coded timeouts, which can make results non-deterministic when the system under test is slow or unresponsive (see the polling sketch after the table). |
SWE-Bench-Lancer | II.1 | 1 | The package dependencies are specified in the repository of each task. |
SWE-Bench-Lancer | II.2 | 1 | The benchmark does not require any external APIs. |
SWE-Bench-Lancer | II.3 | 1 | The benchmark does not require any external APIs. |
SWE-Bench-Lancer | II.4 | 1 | The benchmark uses Docker containers to isolate the environment, and state is cleared between runs (see the container sketch after the table). |
SWE-Bench-Lancer | II.5 | 0 | The agent can access the file system where the test cases are stored, which may allow it to read ground-truth information. |
SWE-Bench-Lancer | II.6 | 1 | The environment setup is static and does not change over time. |
SWE-Bench-Lancer | II.7 | 1 | The ground-truth test cases are taken from GitHub repositories and are verified by expert developers. |
SWE-Bench-Lancer | II.8 | 1 | Each task represents a real-world software issue with a corresponding patch, so the task is solvable by the agent. |
SWE-Bench-Lancer | II.9 | 1 | The benchmark uses existing patches as ground truth, which can serve as an Oracle solver. |
SWE-Bench-Lancer | II.10 | 0 | The benchmark does not properly isolate the test cases from the agent. The test cases are stored not only on a file system the agent can access, but also in a ZIP file whose directory structure the agent can list and whose files it can modify (see the ZIP sketch after the table). |
SWE-Bench-Lancer | III.1 | 1 | The benchmark is open-sourced and available on GitHub. |
SWE-Bench-Lancer | III.2 | 1 | The benchmark provides an open-source evaluation harness for users. |
SWE-Bench-Lancer | III.3 | 1 | The benchmark maintains a private test set. |
SWE-Bench-Lancer | III.4 | 0 | The report does not discuss any measures or plans for consistent updates. |
SWE-Bench-Lancer | III.5 | 1 | Such a relationship is clearly stated in Section 2 of the paper. |
SWE-Bench-Lancer | III.6 | 1 | As shown in Section 3, the benchmark is designed to evaluate the LLM. |
SWE-Bench-Lancer | III.7 | 1 | The benchmark uses end-to-end testing to mitigate grader hacking. |
SWE-Bench-Lancer | III.8 | 1 | The benchmark discusses the potential impact of grader hacking in Section 1 and Appendix A.7. |
SWE-Bench-Lancer | III.9 | 0 | The benchmark does not include any quantitative analysis to assess the impact of grader hacking. |
SWE-Bench-Lancer | III.10 | 0 | The benchmark does not report any metrics about statistical significance. |
SWE-Bench-Lancer | III.11 | 0 | The benchmark does not provide any guidance on interpreting results with eval flaws. |
SWE-Bench-Lancer | III.12 | 0 | The benchmark does not report results of non-AI baselines. |
SWE-Bench-Lancer | III.13 | 0 | The benchmark does not report results of trivial agents. |
Bird-Bench | I.d.1 | 1 | As discussed in Section 3.4 of the paper, the validity of the database is verified by executing the ground-truth query. |
Bird-Bench | I.d.2 | 0 | The paper does not use objective metrics to measure the usefulness and completeness of the database or ground-truth queries. |
Bird-Bench | I.f.2 | 0 | The paper does not provide any information about the coverage of the database or ground-truth queries. |
Bird-Bench | I.f.3 | 1 | Executing SQL queries on a database is deterministic, and the paper does not mention any non-deterministic behavior. |
Bird-Bench | II.1 | 1 | The task instruction in Figure 9 specifies that the SQL dialect is SQLite. |
Bird-Bench | II.2 | 1 | No external API is required for the evaluation of the benchmark. |
Bird-Bench | II.3 | 1 | No external API is required for the evaluation of the benchmark. |
Bird-Bench | II.4 | 0 | The database file is neither opened in read-only mode nor re-initialized between runs, which may allow the agent to manipulate data unexpectedly (see the read-only connection sketch after the table). |
Bird-Bench | II.5 | 1 | The agent cannot access the host file system. |
Bird-Bench | II.6 | 1 | The environment setup is static and does not change over time. |
Bird-Bench | II.7 | 0 | As discussed in Section 3.4 of the paper, the correctness of the queries is not fully verified, especially for SQL queries on which the two annotators merely reach a consensus. |
Bird-Bench | II.8 | 0 | It is not fully verified that the tasks are unambiguous, so some may not be reliably solvable by the agent. |
Bird-Bench | II.9 | 0 | The benchmark does not include an Oracle solver that can automatically solve all text-to-SQL tasks. |
Bird-Bench | II.10 | 1 | No vulnerabilities are found in the implementation of the benchmark. |
Bird-Bench | III.1 | 1 | The benchmark is open-sourced and available on GitHub. |
Bird-Bench | III.2 | 1 | The benchmark provides an open-source evaluation harness for users. |
Bird-Bench | III.3 | 1 | The benchmark has a private test set. |
Bird-Bench | III.4 | 0 | The benchmark does not discuss any measures or plans for consistent updates. |
Bird-Bench | III.5 | 1 | It is clearly discussed in Section 2 of the paper. |
Bird-Bench | III.6 | 1 | It is clearly discussed in Section 2 of the paper. |
Bird-Bench | III.7 | 0 | No efforts are made to mitigate errors when both annotators make the same mistake. |
Bird-Bench | III.8 | 0 | The paper does not discuss the potential impact of annotation errors. |
Bird-Bench | III.9 | 0 | The paper does not analyze the quantitative impact of annotation errors. |
Bird-Bench | III.10 | 0 | The paper does not report any metrics about statistical significance. |
Bird-Bench | III.11 | 0 | The paper does not provide any guidance on interpreting results with eval flaws. |
Bird-Bench | III.12 | 1 | The paper reports the results of human experts. |
Bird-Bench | III.13 | 0 | The paper does not report the results of any trivial agents. |
Cy-Bench | I.h.1 | 1 | The specific format required for the answer is provided in the task description. |
Cy-Bench | I.h.2 | 1 | The ground truth is complex enough to prevent trivial guessing. |
Cy-Bench | II.1 | 1 | Agents are granted access to all tools, and tool versions can be checked by the agent. |
Cy-Bench | II.2 | 1 | The benchmark does not require any external APIs. |
Cy-Bench | II.3 | 1 | The benchmark does not require any external APIs. |
Cy-Bench | II.4 | 1 | The benchmark uses Docker containers to isolate the environment, and the state is cleared between runs. |
Cy-Bench | II.5 | 1 | The agent cannot directly access the container which contains the ground truth. |
Cy-Bench | II.6 | 1 | The environment setup is static and does not change over time. |
Cy-Bench | II.7 | 1 | As shown in Section 3.3 of the paper, the ground truth is verified by humans. |
Cy-Bench | II.8 | 1 | As shown in Section 3.3 of the paper, each task is verified to be solvable. |
Cy-Bench | II.9 | 1 | As shown in Section 3.3 of the paper, the benchmark includes an Oracle solver that can automatically solve all tasks. |
Cy-Bench | II.10 | 1 | No vulnerabilities are found in the implementation of the benchmark. |
Cy-Bench | III.1 | 1 | The benchmark is open-sourced and available on GitHub. |
Cy-Bench | III.2 | 1 | The benchmark provides an open-source evaluation harness for users. |
Cy-Bench | III.3 | 0 | The benchmark does not contain measures to prevent data contamination. |
Cy-Bench | III.4 | 0 | The report does not discuss plans to consistently update tasks over time. |
Cy-Bench | III.5 | 1 | Such a relationship is clearly stated in Section 1 of the paper. |
Cy-Bench | III.6 | 1 | As shown in Section 1, the benchmark is designed to evaluate both agent frameworks and LLMs. |
Cy-Bench | III.7 | 1 | Annotation flaws are mitigated by developing verifiable tasks. |
Cy-Bench | III.8 | 1 | No unavoidable flaws are identified in the benchmark. |
Cy-Bench | III.9 | 1 | No unavoidable flaws are identified in the benchmark. |
Cy-Bench | III.10 | 0 | The report does not include any metrics about statistical significance. |
Cy-Bench | III.11 | 1 | No evaluation flaws are identified in the benchmark. |
Cy-Bench | III.12 | 1 | Human performance is reported in Section 5 of the paper. |
Cy-Bench | III.13 | 0 | The report does not include results of trivial agents. |
SWE-Bench-Verified | I.d.1 | 1 | Test cases are taken from GitHub repositories and, as described in the report, each sample is screened by human annotators for correctness and quality. |
SWE-Bench-Verified | I.d.2 | 0 | The paper does not use objective metrics to measure the quality of test cases. |
SWE-Bench-Verified | II.1 | 1 | The versions of package dependencies are specified in the repository. |
SWE-Bench-Verified | II.2 | 1 | The benchmark does not require any external APIs. |
SWE-Bench-Verified | II.3 | 1 | The benchmark does not require any external APIs. |
SWE-Bench-Verified | II.4 | 1 | The benchmark uses Docker containers to isolate the environment, and the state is cleared between runs. |
SWE-Bench-Verified | II.5 | 1 | The agent cannot access the host file system, and the ground truth is not accessible to the agent. |
SWE-Bench-Verified | II.6 | 1 | The environment setup is static and does not change over time. |
SWE-Bench-Verified | II.7 | 1 | The ground-truth patches are taken from GitHub repositories and are verified by expert developers. |
SWE-Bench-Verified | II.8 | 1 | Each task represents a real-world GitHub issue and a corresponding pull request, which are solvable by the agent. |
SWE-Bench-Verified | II.9 | 1 | Pull requests from GitHub are used as ground truth, which can serve as an Oracle solver. |
SWE-Bench-Verified | II.10 | 1 | No vulnerabilities are found in the implementation of the benchmark, and the evaluation process is secure. |
SWE-Bench-Verified | III.1 | 1 | The benchmark is open-sourced and available on GitHub. |
SWE-Bench-Verified | III.2 | 1 | The benchmark provides an open-source evaluation harness for users. |
SWE-Bench-Verified | III.3 | 0 | The benchmark does not discuss measures to prevent data contamination. |
SWE-Bench-Verified | III.4 | 0 | The benchmark does not discuss plans to consistently update tasks over time. |
SWE-Bench-Verified | III.5 | 1 | Such a relationship is clearly stated in Section 2 of the paper. |
SWE-Bench-Verified | III.6 | 1 | The benchmark is designed to evaluate both the model and the agent framework, as discussed in Section 5 of the paper. |
SWE-Bench-Verified | III.7 | 0 | The benchmark does not discuss any efforts to prevent, identify, and correct flaws. |
SWE-Bench-Verified | III.8 | 0 | The benchmark does not discuss the potential impact of unavoidable flaws. |
SWE-Bench-Verified | III.9 | 0 | The benchmark does not include quantitative analysis to assess the impact of unavoidable flaws. |
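
A note on the SWE-Bench-Lancer I.f.3 finding: a common fix for hard-coded timeouts is to poll for readiness under a deadline rather than sleep for a fixed interval, so a slow-but-correct system still passes. The sketch below is a minimal illustration, not code from the benchmark; the readiness lambda is a hypothetical stand-in for a real check on the system under test.

```python
import time

def wait_until(predicate, deadline_s=60.0, poll_s=0.5):
    """Poll `predicate` until it returns True or `deadline_s` elapses.

    Unlike a hard-coded sleep, this tolerates a slow system as long
    as it becomes ready before the deadline.
    """
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if predicate():
            return True
        time.sleep(poll_s)
    return False

# Hypothetical readiness check: "ready" two seconds from now.
ready_at = time.monotonic() + 2.0
print(wait_until(lambda: time.monotonic() >= ready_at, deadline_s=10.0))  # True
```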
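For the II.4 rows that credit container isolation (SWE-Bench-Lancer, Cy-Bench, SWE-Bench-Verified), the pattern can be sketched as follows. This is an assumed setup, not any benchmark's actual harness; the image name `bench-task:latest` and entrypoint `./run_tests.sh` are hypothetical.

```python
import subprocess

# Run each evaluation episode in a throwaway container:
# --rm discards all container state once the run exits, and
# --network none blocks external access during evaluation.
result = subprocess.run(
    ["docker", "run", "--rm", "--network", "none",
     "bench-task:latest", "./run_tests.sh"],
    capture_output=True,
    text=True,
)
print(result.returncode)
```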
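The SWE-Bench-Lancer II.10 finding is easy to reproduce with Python's standard `zipfile` module: any archive the agent can open also exposes its member list and contents, so a ZIP on an agent-accessible filesystem provides no isolation. A self-contained sketch with a hypothetical test file:

```python
import io
import zipfile

# Build a stand-in archive of "held-out" tests entirely in memory;
# in the real benchmark this would be the task's test ZIP on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("tests/test_checkout.py", "def test_checkout(): ...")

# Anything the agent can open, it can also enumerate and read:
with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())                      # directory structure leaks
    print(zf.read("tests/test_checkout.py"))  # ground-truth assertions leak
```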
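The Bird-Bench II.4 finding suggests a straightforward mitigation: re-initialize the database from a pristine copy between runs and open it read-only. A minimal sketch using Python's standard `shutil` and `sqlite3`; the database file names are hypothetical.

```python
import shutil
import sqlite3

# Restore a pristine copy before each run so writes from a previous
# episode cannot leak into the next.
shutil.copyfile("bird_pristine.db", "bird_run.db")

# Open read-only via an SQLite URI; any write attempt by the agent's
# query now raises sqlite3.OperationalError instead of mutating state.
conn = sqlite3.connect("file:bird_run.db?mode=ro", uri=True)
try:
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    print(tables)
finally:
    conn.close()
```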