Meta’s New Large Language Model, “LLaMA,” Is Announced by Mark Zuckerberg

(Image: Mark Zuckerberg speaks at the Viva Technology show in Paris, May 24, 2018. Photo by Chesnot/Getty Images)

The consumer-facing AI game now has a new contender. The large language model, dubbed “LLaMA” (Large Language Model Meta AI), from Mark Zuckerberg’s Meta has formally entered the fray.

On paper, LLaMA doesn’t appear to be positioned as a direct rival to Google’s conversational AI, Bard, or OpenAI’s ChatGPT, which Microsoft has backed heavily. According to a news release from Meta, LLaMA’s primary function will be as an AI that advances AI research.

LLaMA “requires far less computing capacity and resources to test new approaches, validate others’ work, and explore new use cases,” as Meta itself phrased it. LLaMA is intended to be an efficient, relatively low-demand tool that works best for narrowly focused tasks.
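To make that “low-demand” claim concrete, here is a minimal sketch of loading and querying a smaller LLaMA-class model on a single machine with the open-source Hugging Face transformers library. The model path is a placeholder (access to the actual LLaMA weights goes through Meta’s research application process), and nothing below is Meta’s own release code.

```python
# A minimal sketch, assuming a locally downloaded LLaMA-class checkpoint;
# "path/to/llama-7b" is a placeholder, not an official model ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-7b"  # hypothetical; weights are gated by Meta

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision roughly halves memory use
    device_map="auto",          # let accelerate place layers on available GPUs
)

prompt = "Explain why smaller language models are cheaper to fine-tune:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A 7-billion-parameter model in half precision fits in roughly 14 GB of GPU memory, which is why a research lab without a data center can still experiment with it.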

That carves out a brand-new niche in AI research, at least where consumer-facing tools are concerned. The majority of user-friendly AIs to date have followed the chat-assistant model, which calls for an AI powerful enough to handle any query a user gives it. LLaMA’s goal is to make that kind of work easier for other AI projects without aggravating the underlying compute problem.

Not an all-purpose tool, and that’s a good thing

The news release from Meta was notable for its explicit mention of LLaMA’s limitations and the safeguards Meta used in creating it. LLaMA, a small model trained on a large language base, is “easier to retrain and fine-tune for specific possible product use cases,” according to Meta, which recommends choosing such specific use cases. That deliberately narrows the range of things the model is expected to do.
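As an illustration of what “retrain and fine-tune for specific possible product use cases” might look like in practice, here is a hedged sketch of a parameter-efficient fine-tuning run using the open-source transformers, peft, and datasets libraries. The model path, dataset file, and hyperparameters are all illustrative assumptions, not anything from Meta’s release.

```python
# A hedged sketch of narrow-task fine-tuning with LoRA adapters; every
# path, filename, and hyperparameter below is illustrative, not Meta's.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_PATH = "path/to/llama-7b"  # hypothetical gated checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without one
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# LoRA trains small low-rank adapter matrices instead of all base weights,
# which is what keeps a narrowly focused fine-tune cheap.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

# "my_task.jsonl" stands in for whatever narrow use-case data you have,
# one {"text": ...} record per line.
dataset = load_dataset("json", data_files="my_task.jsonl")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-narrow-task",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False gives causal-LM labels (inputs shifted by one) automatically.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the adapter approach is that only a few million parameters are updated rather than all seven billion, so a single GPU and a small task-specific dataset suffice.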

Furthermore, Meta will initially only allow “academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world” to access LLaMA. The company makes its concern about the social implications of AI plain, admitting that LLaMA, like all AI, operates with “risks of bias, toxic comments, and hallucinations.” Meta is attempting to combat this by carefully choosing users, making its code fully accessible for anyone to check for bugs and biases, and publishing a set of benchmarks for assessing malfunctions.

In other words, even though Meta appears to have arrived late to AI school, at least it did the reading. Users are growing more concerned about the potential negative effects of AI, so Meta may be demonstrating the best course of action: progress restrained by careful consideration to ensure ethical use.
