ggml.ai joins Hugging Face to ensure the long-term progress of Local AI #19759
Replies: 49 comments 12 replies
- welcome @ggerganov and team! we're happy to get the chance to continue supporting the awesome llama.cpp community 🔥
- LEEET'S GOOOOOOOO!!! It's been such an honour and privilege to work on llama.cpp and this is the best news for the truly open AI ecosystem and democratising local AI.
- That's really cool! Great news for @ggerganov and the team!
- congrats!
- Congratulations to the entire ggml.ai team and what a fantastic move by HF!
- This is awesome! Congrats! 🇪🇺🇪🇺🇪🇺
- Great to hear this news, congrats all around to the entire team and thanks to everyone for all your dedicated efforts over the years!
- Good to hear that you managed to secure a partnership for the sustainability of the project! Though I am not part of the formal ggml organization, I will happily continue to cooperate on our shared goals.
- Very excited about this!
- Congrats. All the best for this wonderful project. Keep up the good work!
- This is amazing news!
- Congrats Georgi, GGML team and Hugging Face! This is amazing news. Llama.cpp is what made open-source local AI what it is today. Excited to see what's coming 🎉🦙
- Congrats to all at GGML and Hugging Face. Excited to see all the GGML efforts move from strength to strength. Glad to see Hugging Face as the acquiring company; I trust the community will remain strong.
- Congratulations on this news! All the best. It will be great to have strong Python support for llama.cpp. The current options are limited to either hosting via an API (llama.cpp, vLLM, Ollama, etc.) or using the largely unmaintained llama-cpp-python library.
- Amazing news, congrats guys, more than deserved. Thanks for all your hard work so far. One minor question though: I was a bit confused by this, though to be fair I do not know too much about llama.cpp's internals. Isn't this a Python library? I thought llama.cpp aimed at remaining lean-and-mean C++ only. Thanks for your time.
- Congratulations, you deserve it. Still can't thank you enough for the great work and contributions.
- Congratulations! Been a GGML believer since whisper.cpp in 2022!
- Congratulations! Will
- Congratulations! 🎉
- was this even discussed publicly before it happened? we should consider the pros and cons for the project from a variety of perspectives. the response to this announcement is roughly 90% worry and concern among users: https://old.reddit.com/r/LocalLLaMA/comments/1r9vywq/ggmlai_has_got_acquired_by_huggingface/ this announcement comes on the heels of leaks showing "Open"AI directly collaborating with the US government to secretly flag and report "suspicious" users to government agencies based on their political exposure: https://vmfunc.re/blog/persona who's to say that HF isn't already implementing this sort of extralegal identity and political check? it will happen in the future. let's not deliberately set ourselves up to get burned again. to everyone else: please be aware that your independence has value.
- Is there any change in ownership of the code repository? Or is it still considered 'owned' by the same 'ggml-org'? At the very least, clarification of who exactly owns the code now should probably be in the README.
- Congratulations 🎉
- hopefully it does not go the way of GitHub after the Microsoft acquisition
- As a Bulgarian, anywhere I go I talk about your work! I appreciate you! You are the SW "Stoichkov" for me! :)
- Congratulations!
- Congratulations!
- Congratulations! Would love to have C++ and Rust APIs with a bit more documentation.
- Congratulations! Hope to see new changes to llama.cpp after the joining. :)
- Congrats! Hope this moves the whole AI world towards a bright future!
Announcement
We are happy to announce that ggml.ai (the founding team of llama.cpp) are joining Hugging Face in order to keep future AI truly open. Georgi and team are joining HF with the goal of scaling and supporting the ggml/llama.cpp community as Local AI continues to make exponential progress in the coming years.

Summary / Key-points

- ggml.ai, the team behind the ggml and llama.cpp libraries and related open-source projects, joins Hugging Face

Why this change?
Since its foundation in 2023, the core mission of ggml.ai has continuously been to support the development and the adoption of the ggml machine learning library. Over the past 3 years, the small team behind the company has been doing its best to grow the open-source developer community and to help establish ggml as the definitive standard for efficient local AI inference. This was achieved through strong collaboration with individual contributors, as well as through partnerships with model providers and independent hardware vendors. As a result, today llama.cpp has become the fundamental building block in countless projects and products, enabling private and easily-accessible AI on consumer hardware.

Throughout this development, Hugging Face stood out as the strongest and most supportive partner of this initiative. During the course of the last couple of years, HF engineers (notably @ngxson and @allozaur) have:

- contributed to ggml and llama.cpp
- integrated llama.cpp into the Hugging Face Inference Endpoints
- supported the llama.cpp and ggml projects with general maintenance, PR reviews and more

The teamwork between our teams has always been smooth and efficient. Both sides, as well as the community, have benefited from these joint efforts. It only makes sense to formalize this collaboration and make it stronger in the future.
What will change for ggml/llama.cpp, the open source project and the community?

Not much – Georgi and team will continue to dedicate 100% of their time to maintaining ggml/llama.cpp. The community will continue to operate fully autonomously and make technical and architectural decisions as usual. Hugging Face is providing the project with long-term sustainable resources, improving its chances to grow and thrive. The project will continue to be 100% open-source and community-driven, as it is now. Expect your favorite quants to be supported even faster once a model is released.

Technical focus
Going forward, our joint efforts will be geared towards the following objectives:
Towards seamless “single-click” integration with the transformers library
The transformers framework has established itself as the 'source of truth' for AI model definitions. Improving the compatibility between the transformers and the ggml ecosystems is essential for wider model support and quality control.
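As a rough sketch of the kind of workflow this integration is meant to smooth out, the snippet below downloads a transformers-format checkpoint from the Hub and converts it to GGUF with the convert_hf_to_gguf.py script that ships with llama.cpp. The repository name is a placeholder and the converter's flags can differ between llama.cpp versions, so treat this as an illustration rather than a guaranteed recipe.

```python
# Sketch: pull a transformers checkpoint from the Hub and convert it to GGUF.
# Assumes: `huggingface_hub` is installed, a local llama.cpp checkout exists,
# and convert_hf_to_gguf.py accepts --outfile/--outtype (verify for your version).
import subprocess
from pathlib import Path

from huggingface_hub import snapshot_download

REPO_ID = "some-org/some-model"        # placeholder transformers-format repo
LLAMA_CPP_DIR = Path("llama.cpp")      # local checkout of the llama.cpp repository

# 1. Download the original (transformers/safetensors) checkpoint.
model_dir = snapshot_download(repo_id=REPO_ID)

# 2. Convert it to a GGUF file with llama.cpp's converter script.
out_file = Path("model-f16.gguf")
subprocess.run(
    [
        "python",
        str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
        model_dir,
        "--outfile", str(out_file),
        "--outtype", "f16",
    ],
    check=True,
)

print(f"GGUF written to {out_file.resolve()}")
```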
Better packaging and user experience of ggml-based software

As we enter the phase in which local inference becomes a meaningful and competitive alternative to cloud inference, it is crucial to improve and simplify the way in which casual users deploy and access local models. We will work towards making llama.cpp ubiquitous and readily available everywhere, and continue partnering with great downstream projects.
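To illustrate how short that path can already be, here is a minimal sketch that fetches a prebuilt GGUF file from the Hub and hands it to a locally installed llama-server binary. The repository and file names are placeholders, and the flags shown are the commonly documented ones; check the options of your own llama.cpp build.

```python
# Sketch: fetch a prebuilt GGUF from the Hub and serve it locally.
# Assumes: `huggingface_hub` is installed and a `llama-server` binary (built
# from llama.cpp) is on PATH; -m/--port are standard flags, but verify your build.
import subprocess

from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="some-org/some-model-GGUF",   # placeholder GGUF repo
    filename="some-model-Q4_K_M.gguf",    # placeholder quantized file
)

# Start a local server (runs in the foreground) on port 8080.
subprocess.run(["llama-server", "-m", gguf_path, "--port", "8080"], check=True)
```

From there, any OpenAI-style client pointed at http://localhost:8080 should be able to talk to the model, since llama-server exposes an OpenAI-compatible HTTP API.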
Long term vision
Our shared goal is to provide the building blocks to make open-source superintelligence accessible to the world over the coming years. We will achieve this together with the growing Local AI community, as we continue to build the ultimate inference stack that runs as efficiently as possible on our devices.