— An Explanation of Llama’s Supposed “Open Source” Status and the Serious Risks of Using Models under the Llama License —
Despite Meta's CEO repeatedly promoting the claim that "Llama is Open Source," it is widely recognized that the Llama License is not, in fact, Open Source. Yet few people have clearly explained exactly why. Moreover, the Llama License is built on a philosophy entirely different from that of conventional software licenses, and it holds dangerous pitfalls for companies around the world that plan to use Llama in their business expecting something resembling an Open Source license.
I have often said in various forums that "Llama is not Open Source; in fact, it is a dangerous license," but many people, apparently seeing me as an advocate of "Open Source" in Japan, have dismissed my statements as mere dissatisfaction that Llama does not qualify. Concerned that the real dangers of Llama were not getting through, I decided to publish two separate articles: one explaining why Llama is not Open Source, and another detailing the risks lurking in AI models governed by the Llama License.
These two articles were originally written for Japanese corporate users and, I believe, were fairly well received. I had assumed there must already be countless explanations of the Llama License in English, but I was surprised to find fewer than expected, so I decided to translate the Japanese text into English and release it as is. Please note that large portions were machine-translated, so there may be discrepancies from the original Japanese.
Why Is the Llama License Not Open Source?
This article goes through the Llama License clause by clause, identifying which provisions of the Open Source Definition the license fails to satisfy and explaining the resulting problems. Alongside an in-depth look at the frequently discussed 700-million-MAU restriction and the embedded Acceptable Use Policy (AUP), it covers a range of additional, sometimes minor, issues. It also briefly touches on conformance with the Open Source AI Definition. Read more.
Significant Risks in Using AI Models Governed by the Llama License
This second article, written in Q&A format, examines several hazards in the Llama License that can lead, potentially without warning, to termination of one's license. It is especially useful for companies considering developing Llama-derived models or integrating Llama into their own services. I focus on issues arising from the license's Acceptable Use Policy incorporated by reference, and on its unusually strong conditions, which propagate even more aggressively than conventional copyleft. While the article is framed around risks for Japanese businesses, the concerns are largely universal, and I believe American businesses will find much of value in it as well. Read more.
