Glaive - Enabling an Open Ecosystem for AI
Delivering a great product experience is a complex orchestration and optimization problem. This is especially true with AI-enabled products, where teams have to balance needs across different axes of performance, such as capability, cost, speed, consistency, and privacy.
I believe we’re heading towards a future where products incorporate many AI models. For each AI-enabled feature, teams will deploy a model that’s optimized for the performance measures that matter most. In some cases they’ll emphasize capability regardless of cost. In others, they’ll optimize for latency while meeting a fixed level of capability. Like a finely tuned orchestra, each model will play its part in harmony, helping to enable the best product experience. We’re already seeing this play out in products being built today by the most sophisticated AI teams. This trend will only accelerate as the AI ecosystem matures in the coming years.
Open source models, such as Meta’s Llama, are one enabler of this trend. While these models offer a strong base to start from, they require work to become useful for a specific use-case. Acquiring high-quality data for a given use-case, and then training a model on that data, has been a friction-filled process.
Glaive is a startup focused on making it easy to train small, hyper-focused language models for any use-case with the help of a synthetic data generation system. I am excited to share that Spark is investing in Glaive and supporting Glaive’s founder, Sahil Chaudhary, on his mission to democratize access to AI.
In early July, Sahil used Glaive’s platform to fine-tune an open source model from Replit, achieving a pass@1 of 63.5% on the HumanEval benchmark and outperforming every other open source model despite being 5X smaller. And while Roon may take issue with the boast of “best open source model”, the permissionless innovation that these emerging technologies enable shouldn’t be ignored – something profound is taking shape.
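For readers unfamiliar with the metric: pass@1 on HumanEval is the probability that a single sampled completion passes a problem’s unit tests. The standard unbiased estimator (from OpenAI’s Codex paper) generates n samples per problem, counts the c that pass, and estimates pass@k as follows. A minimal sketch – the specific n and c values below are illustrative, not Glaive’s actual evaluation settings:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn from n generations of which c are correct, passes."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws; success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to the fraction of correct samples, e.g. a
# hypothetical 127 correct out of 200 generations:
print(round(pass_at_k(200, 127, 1), 3))  # → 0.635
```

For k=1 the formula simplifies to c/n, which is why pass@1 is often described as simple per-sample accuracy averaged over the benchmark.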
As we got to know Sahil, he proposed that we give him a use-case of our choice and he’d train a model for it in less than a day. We gave him The History of America’s West, and less than 24 hours later we were interacting with a model trained specifically for this use-case. Our qualitative assessment was that the model felt as good as or better than GPT-3.5, and on some tasks it felt aligned with GPT-4. Emma wrote a script to conduct a hacky set of evals, which reinforced what we had observed.
The model is only 3B parameters – it’s tiny. This was one remarkable demo.
Today, Glaive is releasing an open source model that has the same function calling capabilities as OpenAI’s GPT-4 and GPT-3.5 but is small enough to run on mobile devices. The model can intelligently choose when to invoke a function call, enabling it to browse the internet, operate tools, and execute tasks. At only 2.7B parameters, its performance is comparable to GPT-3.5.
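To make the idea concrete, here is a minimal sketch of how an application might dispatch such a model’s output: if the model chose to emit a structured function call, execute the named tool; otherwise, pass the plain-text reply through. The JSON call format, the `get_weather` tool, and the `dispatch` helper are all hypothetical illustrations, not Glaive’s actual API:

```python
import json

def get_weather(city: str) -> str:
    """Stub tool the model can invoke (hypothetical example)."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Execute a function call if the model emitted one; otherwise
    return the text reply unchanged."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain-text answer, no tool needed
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# The model decided a tool was needed:
print(dispatch('{"name": "get_weather", "arguments": {"city": "Boston"}}'))
# The model answered directly:
print(dispatch("The capital of France is Paris."))
```

The value of a small on-device model here is that this decision loop – call a tool or answer directly – can run locally without a round trip to a hosted API.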
Sahil has unique experience and insight that led to the development of Glaive. As the founding ML Engineer at Banana, Sahil spent two years helping companies deploy AI models into production, gaining an appreciation for what users want and the challenges they consistently face. The Glaive platform pairs a data generation pipeline that builds high-quality, up-to-date datasets for virtually any task with an automated, optimized pipeline for training models on that data.
If you’re an engineer who’s passionate about the open ecosystem taking shape around AI, and what it’s enabling for product builders, I encourage you to contact Sahil (email@example.com) to say hello as he builds Glaive’s founding team.
Sahil’s vision is a future where companies and individuals benefit from a fleet of AI models, tailored to specific, narrow tasks. Glaive is working to help enable this future and all of us at Spark are excited to support him on this effort.
Thanks for reading! Join more than 2,000 others who subscribe