Introducing Large Language Model Meta AI (LLaMA)
Hey Everyone,
Meta has BlenderBot, Galactica, and many other experiments in LLMs and A.I. more broadly. Meta AI is a capable R&D lab, though it's not always evident, at least to me, what its achievements in A.I. actually are.
Most recently, Meta AI (formerly FAIR) has released a new open-source, high-performance LLM called LLaMA.
They say this work is the result of training smaller models on more tokens. Meta AI researchers trained LLaMA-65B and LLaMA-33B on 1.4 trillion tokens, and their smallest model, LLaMA-7B, on one trillion tokens, more than 3x what comparable public models have seen.
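To make the "smaller models, more tokens" trade-off concrete, here is a quick back-of-the-envelope calculation using only the token counts stated above (the GPT-3 figure of roughly 300 billion training tokens is an assumption from its paper, included for comparison):

```python
# Tokens-per-parameter for the LLaMA training runs described above.
# (params, training tokens); GPT-3 numbers are an outside comparison point.
models = {
    "LLaMA-65B": (65e9, 1.4e12),
    "LLaMA-33B": (33e9, 1.4e12),
    "LLaMA-7B": (7e9, 1.0e12),
    "GPT-3 175B": (175e9, 3.0e11),  # assumed ~300B tokens, per the GPT-3 paper
}

for name, (params, tokens) in models.items():
    ratio = tokens / params
    print(f"{name}: ~{ratio:.0f} tokens per parameter")
```

The ratios show the point plainly: LLaMA-7B sees roughly 143 tokens per parameter, versus about 2 for GPT-3, which is why a much smaller model can end up competitive.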
LLaMA (Large Language Model Meta AI) achieves results competitive with the best currently released models while being smaller and more efficient, making this technology accessible to even more researchers working in this important subfield of AI across the globe.
Meta AI seems to think it's some kind of good Samaritan.