CreativeBench: Benchmarking and Enhancing Machine Creativity via Self-Evolving Challenges
Abstract
Researchers developed CreativeBench, a benchmark for evaluating machine creativity in code generation, and proposed EvoRePE, a method to enhance creative output through evolutionary search patterns.
The saturation of high-quality pre-training data has shifted research focus toward evolutionary systems capable of continuously generating novel artifacts, leading to the success of AlphaEvolve. However, the progress of such systems is hindered by the lack of rigorous, quantitative evaluation. To tackle this challenge, we introduce CreativeBench, a benchmark for evaluating machine creativity in code generation, grounded in a classical cognitive framework. Comprising two subsets -- CreativeBench-Combo and CreativeBench-Explore -- the benchmark targets combinatorial and exploratory creativity through an automated pipeline utilizing reverse engineering and self-play. By leveraging executable code, CreativeBench objectively distinguishes creativity from hallucination via a unified metric defined as the product of quality and novelty. Our analysis of state-of-the-art models reveals distinct behaviors: (1) scaling significantly improves combinatorial creativity but yields diminishing returns for exploration; (2) larger models exhibit "convergence-by-scaling," becoming more correct but less divergent; and (3) reasoning capabilities primarily benefit constrained exploration rather than combination. Finally, we propose EvoRePE, a plug-and-play inference-time steering strategy that internalizes evolutionary search patterns to consistently enhance machine creativity.
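The abstract specifies the unified metric only as the product of quality and novelty. The sketch below illustrates how such a score might be computed for one generated program; the `run_tests` oracle, the similarity-based novelty measure, and all names are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class CreativityScore:
    quality: float   # in [0, 1], e.g. fraction of hidden tests passed
    novelty: float   # in [0, 1], e.g. dissimilarity from reference solutions

    @property
    def creativity(self) -> float:
        # Unified metric from the abstract: creativity = quality * novelty.
        # An incorrect (hallucinated) program has quality near 0, so novelty
        # alone cannot inflate its score.
        return self.quality * self.novelty


def score_candidate(
    candidate: str,
    run_tests: Callable[[str], float],        # hypothetical: returns test pass rate
    similarity: Callable[[str, str], float],  # hypothetical: returns value in [0, 1]
    references: Sequence[str],
) -> CreativityScore:
    """Score one generated program against a test oracle and a reference corpus."""
    quality = run_tests(candidate)
    # One plausible choice: novelty as distance from the closest known solution.
    novelty = 1.0 - max((similarity(candidate, ref) for ref in references), default=0.0)
    return CreativityScore(quality=quality, novelty=novelty)
```

Multiplying the two terms rather than averaging them means a solution must be both correct and distinct from known solutions to score well, which is how the benchmark separates creativity from hallucination.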
Community
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- CREATE: Testing LLMs for Associative Creativity (2026)
- Beyond Divergent Creativity: A Human-Based Evaluation of Creativity in Large Language Models (2026)
- Sparking Scientific Creativity via LLM-Driven Interdisciplinary Inspiration (2026)
- Is this Idea Novel? An Automated Benchmark for Judgment of Research Ideas (2026)
- Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models (2026)
- Grounding Machine Creativity in Game Design Knowledge Representations: Empirical Probing of LLM-Based Executable Synthesis of Goal Playable Patterns under Structural Constraints (2026)
- Creative Image Generation with Diffusion Model (2026)