# ggml.ai joins Hugging Face to ensure the long-term progress of Local AI

ggml-org/llama.cpp · Discussion #19759 · announced by ggerganov in Announcements · Feb 20, 2026

We are happy to announce that ggml.ai (the founding team of llama.cpp) is joining Hugging Face in order to keep future AI truly open. Georgi and team are joining HF with the goal of scaling and supporting the ggml / llama.cpp community as Local AI continues to make exponential progress in the coming years.

## Summary / Key points

- The ggml-org projects remain open and community-driven, as always
- The ggml team continues to lead, maintain, and support full-time the ggml and llama.cpp libraries and related open-source projects
- The new partnership ensures the long-term sustainability of the projects and will help foster new opportunities for users and contributors
- Additional focus will be dedicated to improving the user experience and the integration with the Hugging Face transformers library for improved model support

## Why this change?

Since its foundation in 2023, the core mission of ggml.ai has been to support the development and adoption of the ggml machine learning library.
Over the past three years, the small team behind the company has worked to grow the open-source developer community and to establish ggml as the definitive standard for efficient local AI inference. This was achieved through strong collaboration with individual contributors, as well as partnerships with model providers and independent hardware vendors. As a result, llama.cpp has today become a fundamental building block in countless projects and products, enabling private and easily accessible AI on consumer hardware.

Throughout this development, Hugging Face stood out as the strongest and most supportive partner of this initiative. Over the course of the last couple of years, HF engineers (notably @ngxson and @allozaur) have:

- Contributed several core functionalities to ggml and llama.cpp
- Built a solid inference server with a polished user interface
- Introduced multi-modal support to llama.cpp
- Integrated llama.cpp into the Hugging Face Inference Endpoints
- Improved compatibility of the GGUF file format
Source: Hacker News | Original Link