Revolutionizing AI Development: Inception's $50 Million Bet on Code and Text Diffusion Models
AI startup Inception has raised $50 million in seed funding to build diffusion-based AI models for code and text.
With money pouring into AI startups, it is an opportune moment for AI researchers with unconventional ideas. Inception, a startup building diffusion-based AI models, has secured $50 million in seed funding. Menlo Ventures led the round, with participation from Mayfield, Innovation Endeavors, Microsoft's M12 fund, Snowflake Ventures, Databricks Investment, and Nvidia's venture arm NVentures. Two prominent figures in the AI field, Andrew Ng and Andrej Karpathy, also invested as angels.
Inception is led by Stanford professor Stefano Ermon and focuses on diffusion models, which generate outputs by iteratively refining an entire draft rather than producing text one word at a time. Diffusion models underpin image and video generators such as Stable Diffusion, Midjourney, and Sora. Ermon has researched these systems since before the AI boom, and Inception aims to apply them to a much broader range of tasks.
Alongside the funding announcement, Inception released a new version of its Mercury model, tailored for software development. Mercury is already integrated into development tools such as ProxyAI, Buildglare, and Kilo Code. Ermon says the diffusion approach improves two crucial metrics: latency (response time) and compute cost.
Ermon states, “These diffusion-based LLMs are much faster and much more efficient than what everybody else is building today. It’s just a completely different approach where there is a lot of innovation that can still be brought to the table.”
Unlike the auto-regressive models that dominate text-based AI services, diffusion models take a holistic approach, incrementally modifying the overall structure of a response until it matches the desired outcome. Auto-regressive models such as GPT-5 and Gemini work strictly sequentially, predicting each next word from the preceding context. Diffusion models, by contrast, have an advantage when processing large volumes of text or operating under data constraints, particularly in operations across extensive codebases.
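The difference can be sketched with a toy example. This is not Inception's actual architecture; the "model" below is a random stand-in, and the point is only the call pattern: an autoregressive decoder needs one sequential model call per token, while a diffusion-style decoder makes a small, fixed number of refinement passes, each of which updates every position at once.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]

def autoregressive_generate(length, calls):
    # Sequential: one model call per token; each token waits on the ones before it.
    out = []
    for _ in range(length):
        calls.append(1)                      # one forward pass per token
        out.append(random.choice(VOCAB))     # stand-in for next-token prediction
    return out

def diffusion_generate(length, refinement_steps, calls):
    # Parallel: start from a fully masked draft, refine all positions each step.
    seq = ["<mask>"] * length
    for _ in range(refinement_steps):
        calls.append(1)                      # one forward pass refines ALL positions
        seq = [random.choice(VOCAB) for _ in seq]  # stand-in for a denoising pass
    return seq

ar_calls, diff_calls = [], []
autoregressive_generate(64, ar_calls)
diffusion_generate(64, refinement_steps=8, calls=diff_calls)
print(len(ar_calls), len(diff_calls))  # 64 sequential calls vs. 8 parallel passes
```

If each pass takes roughly the same wall-clock time on a GPU, the diffusion-style decoder finishes in 8 passes where the autoregressive one needs 64, which is the latency advantage Ermon describes, at the cost of doing more total work per pass.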
Diffusion models also offer more flexibility in how hardware is used, a critical advantage as AI infrastructure demands grow. Whereas auto-regressive models must execute operations one after another, diffusion models can process many operations in parallel, significantly reducing latency on complex tasks.
Ermon highlights, “We’ve been benchmarked at over 1,000 tokens per second, which is way higher than anything achievable using existing autoregressive technologies because our approach is parallel and designed for exceptional speed.”