
Betting Against AI Scaling: Insights from Cohere’s Former Research Lead


In the fast-paced world of artificial intelligence, labs are racing to build data centers as vast as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in “scaling” – the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing a wide range of tasks.

However, a growing number of AI researchers are now expressing skepticism regarding the scalability of large language models (LLMs), suggesting that alternative breakthroughs may be necessary to enhance AI performance.

Enter Sara Hooker, the former VP of AI Research at Cohere and a Google Brain alumna, who is charting a different course with her new startup, Adaption Labs. Co-founded with fellow Cohere and Google veteran Sudip Roy, the venture is built on the conviction that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Adaption Labs, which began hiring more broadly this month, aims to develop AI systems that continuously adapt and learn from real-world experience with exceptional efficiency. Hooker, who departed Cohere in August, declined to share specifics about the approach or whether the company builds on LLMs or a different architecture.

“I’m starting a new project. Working on what I consider to be the most important problem: building thinking machines that adapt and continuously learn. We have an incredibly talent-dense founding team + are hiring for engineering, ops, and design. Join us.”

In an interview with TechCrunch, Hooker elaborated on that mission. She stressed that adaptation is central to learning, drawing a parallel to everyday experience: after stubbing a toe once, a person learns to step more carefully.

AI labs have tried to capture this idea through reinforcement learning (RL), which lets models learn from mistakes in controlled environments. But current RL methods do not allow AI models already deployed in production to learn from their errors in real time, so those systems keep “stubbing their toe” without ever correcting course.

Some AI labs offer consulting services to help organizations fine-tune their AI models for specific needs, but these services come at a steep price: OpenAI reportedly requires customers to spend upwards of $10 million on fine-tuning consulting.


Hooker argued that the industry needs to move beyond conventional scaling and focus on enabling AI systems to learn efficiently from their environment. She sees this as a chance to shift who controls and shapes AI, making the technology more broadly accessible and steering its applications toward wider societal benefit.

Adaption Labs’ emergence reflects a broader shift in the industry: faith in the scaling potential of LLMs is wavering. A recent study by MIT researchers warned of diminishing returns on the world’s largest AI models, signaling a need for new approaches to improving AI capabilities.

Renowned figures in the AI domain, including Richard Sutton and Andrej Karpathy, have voiced reservations regarding the long-term efficacy of prevailing AI methodologies. The concerns surrounding the limitations of scaling AI models through pretraining have gained traction, prompting a reevaluation of strategies for AI enhancement.

While the industry continues to explore ways to improve AI models, Adaption Labs stands out for its bet on pioneering the next breakthrough. The startup reportedly secured a seed round of between $20 million and $40 million, underscoring its ambitious goals.

Hooker also brings a track record of leading AI initiatives that broadened global access to research and drew on diverse talent, experience that lends weight to Adaption Labs’ ambitions.

If Hooker and Adaption Labs succeed in upending the conventional wisdom of scaling, the implications could be profound. Enormous sums have been invested in scaling LLMs on the expectation that it leads to general intelligence; adaptive learning could prove not only more effective but also far cheaper.

Marina Temkin contributed reporting.
