arXiv:2512.09108

Evolving Excellence: Automated Optimization of LLM-based Agents

Published on Dec 9, 2025

Abstract

AI-generated summary

ARTEMIS is a no-code platform that uses evolutionary optimization with semantic-aware genetic operators to jointly improve LLM agent configurations across multiple domains without requiring architectural changes.

Agentic AI systems built on large language models (LLMs) offer significant potential for automating complex workflows, from software development to customer support. However, LLM agents often underperform due to suboptimal configurations: poorly tuned prompts, tool descriptions, and parameters that typically require weeks of manual refinement. Existing optimization methods are either too complex for general use or treat components in isolation, missing critical interdependencies. We present ARTEMIS, a no-code evolutionary optimization platform that jointly optimizes agent configurations through semantically aware genetic operators. Given only a benchmark script and natural-language goals, ARTEMIS automatically discovers configurable components, extracts performance signals from execution logs, and evolves configurations without requiring architectural modifications. We evaluate ARTEMIS on four representative agent systems: the ALE Agent for competitive programming on the AtCoder Heuristic Contest, achieving a 13.6% improvement in acceptance rate; the Mini-SWE Agent for code optimization on SWE-Perf, with a statistically significant 10.1% performance gain; and the CrewAI Agent, optimized for cost and mathematical reasoning on Math Odyssey, achieving a statistically significant 36.9% reduction in the number of tokens required for evaluation. We also evaluate the MathTales-Teacher Agent, powered by a smaller open-source model (Qwen2.5-7B), on GSM8K primary-level mathematics problems, achieving a 22% accuracy improvement and demonstrating that ARTEMIS can optimize agents based on both commercial and local models.

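The page carries no code, so the sketch below is only an illustration of the kind of evolutionary loop the abstract describes: a population of agent configurations (prompts, tool descriptions, parameters) mutated jointly and selected by a benchmark score. The `llm_rewrite` helper, the `AgentConfig` fields, and the placeholder `evaluate` function are assumptions for illustration, not the authors' ARTEMIS implementation.

```python
import random
from dataclasses import dataclass

# Hypothetical stand-in for a semantic-aware mutation operator.
# A real system would ask an LLM to rewrite `text` toward `goal`
# (e.g., "make the tool description more precise"); the identity
# return keeps this sketch runnable without an API key.
def llm_rewrite(text: str, goal: str) -> str:
    return text

@dataclass
class AgentConfig:
    system_prompt: str
    tool_descriptions: dict
    temperature: float = 0.7

def evaluate(config: AgentConfig) -> float:
    # Placeholder fitness: ARTEMIS would instead run the user-supplied
    # benchmark script and extract performance signals from execution logs.
    return random.random()

def mutate(config: AgentConfig, goal: str) -> AgentConfig:
    # Jointly mutate textual components (via the LLM rewrite) and a
    # numeric parameter, so components are not tuned in isolation.
    return AgentConfig(
        system_prompt=llm_rewrite(config.system_prompt, goal),
        tool_descriptions={k: llm_rewrite(v, goal)
                           for k, v in config.tool_descriptions.items()},
        temperature=min(1.5, max(0.0, config.temperature + random.gauss(0, 0.1))),
    )

def evolve(seed: AgentConfig, goal: str,
           generations: int = 10, pop_size: int = 8) -> AgentConfig:
    # Simple (mu + lambda)-style loop: keep the best half, refill with mutants.
    population = [seed] + [mutate(seed, goal) for _ in range(pop_size - 1)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: pop_size // 2]
        children = [mutate(random.choice(parents), goal)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = evolve(
        AgentConfig("You are a helpful coding agent.",
                    {"search": "Search the web."}),
        goal="maximize benchmark acceptance rate",
    )
    print(best)
```

In this reading, the "no-code" aspect would amount to discovering the `AgentConfig` fields automatically from the agent system and driving `llm_rewrite` with the user's natural-language goals, rather than asking the user to define either by hand.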