Revolutionizing Enterprise AI Costs with Innovative Model Design

Enterprises grappling with the high costs of deploying AI models may find relief in an innovative architecture design.

Generative AI offers impressive capabilities, but the heavy computational demands of training and inference make it expensive to run and raise environmental concerns. Much of the inefficiency stems from autoregression: these models generate text one token at a time, sequentially.

For businesses dealing with large data streams, such as IoT networks and financial markets, this bottleneck makes generating extensive analyses slow and economically burdensome. However, a new research paper from Tencent AI and Tsinghua University presents an alternative solution.

Revolutionizing AI Efficiency

The research introduces Continuous Autoregressive Language Models (CALM), a method that predicts a continuous vector instead of a discrete token, redefining the generation process.

A high-fidelity autoencoder compresses a chunk of tokens into a single continuous vector, significantly increasing the semantic bandwidth of each generative step. This compression reduces the number of steps and, with it, the computational load.
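To make the idea concrete, here is a minimal sketch of a chunk autoencoder, assuming a simple linear encoder and decoder; all names, dimensions, and the chunk size of four are illustrative, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class ChunkAutoencoder(nn.Module):
    """Illustrative sketch: map a chunk of K tokens to one continuous
    vector and back. Every name and size here is an assumption; the
    real CALM autoencoder is more sophisticated."""

    def __init__(self, vocab_size=32000, d_model=256, k=4, d_latent=128):
        super().__init__()
        self.k, self.vocab_size = k, vocab_size
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encode = nn.Linear(k * d_model, d_latent)      # K tokens -> 1 vector
        self.decode = nn.Linear(d_latent, k * vocab_size)   # 1 vector -> K token logits

    def forward(self, token_ids):                 # token_ids: (batch, K)
        x = self.embed(token_ids).flatten(1)      # (batch, K * d_model)
        z = self.encode(x)                        # the continuous vector the LM predicts
        logits = self.decode(z).view(-1, self.k, self.vocab_size)
        return z, logits                          # reconstruction trained with cross-entropy
```

Once the autoencoder is trained, the language model predicts the next z vector rather than the next token, and the decoder turns each predicted vector back into a chunk of tokens.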

Experimental results show an improved performance-compute trade-off. A CALM model that groups four tokens into each vector matched the performance of strong discrete baselines at a substantially lower computational cost.

One CALM model required 44% fewer training FLOPs and 34% fewer inference FLOPs than a baseline Transformer of comparable capability, translating into savings on both initial training and ongoing operational expenses.
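As a back-of-envelope illustration of where the savings come from: grouping K tokens per vector cuts the number of sequential generative steps by roughly a factor of K, though each continuous step does more work, which is why the measured savings are smaller than 4x:

```python
def generative_steps(num_tokens: int, chunk_size: int) -> int:
    # Token-by-token decoding needs num_tokens steps; chunked decoding
    # needs one step per chunk (ceiling division for a partial last chunk).
    return -(-num_tokens // chunk_size)

print(generative_steps(1024, 1))  # 1024 steps for a standard token-level LLM
print(generative_steps(1024, 4))  # 256 steps for a CALM-style model with K=4
```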

Rebuilding the Toolkit for the Continuous Domain

Transitioning from a finite, discrete vocabulary to an infinite, continuous vector space challenges the standard LLM toolkit. Researchers developed a comprehensive likelihood-free framework to make the new model feasible.

For training, the model replaces the standard softmax layer and maximum-likelihood objective with an Energy Transformer trained on a likelihood-free objective that rewards accurate predictions without computing explicit probabilities.
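The paper's exact formulation isn't reproduced here, but the energy score is a standard strictly proper scoring rule that fits this description: it can be estimated purely from samples, and minimizing it rewards predictions close to the target without ever touching a probability. A minimal sketch, with all shapes and names assumed for illustration:

```python
import torch

def energy_score_loss(samples: torch.Tensor, target: torch.Tensor, beta: float = 1.0):
    """Sample-based energy score: E||X - y||^beta - 0.5 * E||X - X'||^beta.

    samples: (M, D) independent draws from the model's generative head (M >= 2)
    target:  (D,)   ground-truth continuous vector from the autoencoder
    Lower is better: the first term rewards accuracy, the second preserves
    diversity so the model cannot collapse to a single point prediction.
    """
    accuracy = (samples - target).norm(dim=-1).pow(beta).mean()
    pairwise = torch.cdist(samples, samples).pow(beta)
    m = samples.shape[0]
    diversity = pairwise.sum() / (m * (m - 1))  # mean over off-diagonal pairs
    return accuracy - 0.5 * diversity
```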

For evaluation, a new metric called BrierLM, based on the Brier score and estimated purely from model samples, replaces likelihood-based benchmarks such as perplexity, which the model can no longer compute.
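The Brier score itself admits an unbiased estimator from samples alone, which is what makes a metric like BrierLM possible: two independent model samples per context suffice. This sketch (names assumed, and simpler than the full BrierLM metric) shows the idea:

```python
def brier_estimate(sample_pairs, references):
    """Unbiased sample-only estimate of the expected Brier score,
    BS = sum_x p(x)^2 - 2*p(y) + 1 (lower is better).

    sample_pairs: (x1, x2) pairs of independent model samples per context
    references:   the ground-truth continuation y for each context
    Since E[1(x1 == x2)] = sum_x p(x)^2 and E[1(x1 == y)] = p(y),
    both terms are estimated without ever computing a likelihood.
    """
    total = 0.0
    for (x1, x2), y in zip(sample_pairs, references):
        collision = float(x1 == x2)   # estimates sum_x p(x)^2
        hit = float(x1 == y)          # estimates p(y)
        total += collision - 2.0 * hit + 1.0
    return total / len(references)
```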

The framework restores controlled generation with a new likelihood-free sampling algorithm, enabling a trade-off between output accuracy and diversity.
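The article doesn't detail the algorithm, but one well-known likelihood-free trick illustrates the principle: drawing n independent samples and accepting only when they all agree yields an exact sample from the sharpened distribution proportional to p(x)^n, i.e. a temperature of 1/n, using nothing but the sampler itself. A sketch for discrete outputs such as decoded token chunks, not necessarily the paper's method:

```python
import random

def sharpened_sample(draw, n: int, max_tries: int = 1000):
    """Sample from p(x)^n / Z given only a black-box sampler `draw`.

    Accepting a batch of n i.i.d. draws only when all agree makes the
    acceptance probability of x proportional to p(x)^n, so accepted
    samples follow the sharpened (lower-temperature) distribution.
    Falls back to a single plain draw if acceptance keeps failing.
    """
    for _ in range(max_tries):
        xs = [draw() for _ in range(n)]
        if all(x == xs[0] for x in xs):
            return xs[0]
    return draw()

# Toy usage: an 80/20 sampler sharpens to roughly 94/6 at n = 2.
coin = lambda: "heads" if random.random() < 0.8 else "tails"
print(sharpened_sample(coin, n=2))
```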

Minimizing Enterprise AI Expenses

This research hints at a future where generative AI focuses more on architectural efficiency than merely increasing parameter counts.

The current trend of scaling models faces diminishing returns and rising costs. The CALM framework introduces a new approach to LLM scaling by enhancing the semantic bandwidth of each generative step.

While not a ready-to-use product, this research suggests a scalable pathway towards highly efficient language models. Tech leaders evaluating vendor offerings should prioritize architectural efficiency over model size.

The ability to reduce FLOPs per generated token can be a significant competitive advantage, enabling more cost-effective and sustainable AI deployment across enterprises, from data centers to data-heavy edge applications.
