News Report Technology
May 14, 2026

Adaption’s AutoScientist Automates Model Fine-Tuning With Closed-Loop Training Outperforming Human-Designed Configurations 

In Brief

Adaption unveils AutoScientist, a system that automatically customises AI models by optimising both training data and learning processes for specific tasks.

Adaption, an AI startup founded by former Cohere Vice President of Research Sara Hooker, has introduced a new system called AutoScientist, designed to automate the process of tailoring AI models to specific tasks by jointly optimising both training data and learning configurations. The system is positioned as a step toward automating AI research and development workflows, with the aim of reducing the manual effort typically required in model fine-tuning and experimentation.

AutoScientist is described as an end-to-end framework that jointly optimises datasets and training recipes, iterating through a closed loop in which both data selection and model training parameters are continuously adjusted. The process continues until performance stabilises around a defined objective, allowing the system to refine both what the model learns from and how it learns it without constant human intervention.
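Adaption has not published AutoScientist's internals, so the following is only a minimal, illustrative sketch of the closed-loop idea described above: candidate data selections and training configurations are proposed jointly and accepted only when they improve an objective, stopping once performance plateaus. Every name here, the toy `evaluate` score, and the stopping rule are assumptions for illustration, not the company's actual method.

```python
import math
import random

def evaluate(dataset, config):
    """Hypothetical stand-in for training a model on `dataset` with
    `config` and scoring it on the task objective (higher is better).
    This toy version rewards diverse data and a learning rate near 1e-3."""
    diversity = len(set(dataset)) / len(dataset)
    lr_fit = 1.0 / (1.0 + abs(math.log10(config["lr"]) + 3))
    return 0.5 * diversity + 0.5 * lr_fit

def propose_data(dataset, pool):
    """Data step: swap one selected example for a fresh candidate."""
    candidate = list(dataset)
    candidate[random.randrange(len(candidate))] = random.choice(pool)
    return candidate

def propose_config(config):
    """Recipe step: perturb the learning rate up or down."""
    new = dict(config)
    new["lr"] = max(1e-6, new["lr"] * random.choice([0.5, 2.0]))
    return new

def closed_loop(pool, config, steps=200, patience=25):
    """Jointly refine the data selection and training config, stopping
    once the objective shows no improvement for `patience` steps."""
    dataset = list(pool[:8])            # initial data selection
    best = evaluate(dataset, config)
    stale = 0
    for _ in range(steps):
        cand_data = propose_data(dataset, pool)
        cand_cfg = propose_config(config)
        score = evaluate(cand_data, cand_cfg)
        if score > best:                # accept only joint improvements
            dataset, config, best = cand_data, cand_cfg, score
            stale = 0
        else:
            stale += 1
        if stale >= patience:           # performance has stabilised
            break
    return dataset, config, best
```

In a real system, each `evaluate` call would be a full fine-tuning run scored against task-specific evaluations, and the proposal steps would presumably be far more informed than random perturbation; the sketch only shows the loop structure the article describes.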

According to the company, the tool is intended to shorten the path from an initial concept to a deployed, customised model, potentially compressing development cycles from weeks to hours. It is also presented as a way to broaden access to model customisation beyond machine learning specialists, enabling users without deep technical expertise to shape not only prompts but also the underlying behaviour of trained systems. The approach is framed as particularly relevant for organisations that want to fine-tune models for domain-specific language, structured outputs, or efficiency constraints such as latency and cost, or to make more effective use of proprietary datasets.

Internal evaluations referenced by the company suggest that AutoScientist demonstrates improved performance compared with baseline models across a range of dataset sizes between 5,000 and 100,000 examples, as well as across multiple model architectures available for fine-tuning. Reported results indicate consistent gains regardless of domain, with performance measured using in-house evaluations tailored to specific vertical applications.

Further comparisons presented in the evaluation framework indicate that AutoScientist achieved higher average performance than configurations designed by human researchers, including experienced AI engineering staff. In these tests, human experts selected training setups based on their knowledge of model architecture, dataset characteristics, and domain requirements, while AutoScientist was given the same inputs along with the ability to iteratively refine its own configurations using historical run data. Under these conditions, aggregate outcomes reportedly improved from 48 percent to 64 percent with the automated system, a 16-point absolute gain broadly in line with the approximately 35 percent average relative uplift reported across experiments.

AutoScientist Shows Cross-Domain Stability While Aiming To Democratise Frontier Model Fine-Tuning 

Additional benchmarking across multiple application areas suggests that the system is not strongly sensitive to specific domains, with gains observed across eight different verticals. The company reports that this consistency is notable given that many traditional fine-tuning approaches tend to underperform outside narrow or highly curated settings, whereas AutoScientist reportedly delivers more stable improvements across varied tasks and datasets.

The system is positioned as part of a broader effort to automate model development processes, particularly in areas involving long-horizon reasoning, which remains a persistent challenge in AI reliability. The developers indicate that AutoScientist represents an early step toward reducing the need for manual intervention in model training pipelines, with future research directions focused on enabling more immediate forms of adaptation that may not require traditional training cycles.

Alongside its technical objectives, the release is also framed as an effort to broaden access to model customisation, allowing a wider range of users to shape AI systems for specific applications. The tool is being made available free of charge for an initial 30-day period. The broader aim, according to the framing provided, is to reduce barriers to AI model development and expand the ability to create tailored systems beyond a small group of specialised researchers concentrated in major laboratories.

The announcement also argues that only a small number of people globally possess the expertise required to properly train and fine-tune frontier AI models, with most of that knowledge concentrated in a handful of major research laboratories. If a system such as AutoScientist can successfully automate aspects of this expertise, the company suggests, building customised models for individual organisations and specific use cases could become considerably more accessible and practically achievable.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa, a dedicated journalist at the MPost, specializes in crypto, AI, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.
