The term AGI is being trumpeted everywhere, but will every facet of human behavior really end up being replaced by AI? I remain skeptical.

In software, for example, it already looks as though AI will soon be able to handle every stage of even the largest system projects. Yet I still doubt that an AI can build the system the human requester truly wants. Think of the famous “Tree Swing Cartoon,” so well known that it needs no explanation here. In that drawing, “How the customer explained it” corresponds to the requirements a human gives the AI, while “What the customer really needed” is the final outcome the AI is expected to deliver. It is this gap between the two that I doubt AI will ever be able to bridge.

In the intermediate phases of system development (programming, analysis, testing, and so on), AI will almost certainly outperform humans and deliver superior results. If AI also takes over the project-management role, fiascos as grotesque as those in the cartoon may vanish. But will “what the customer explained” and “what the customer really needed” truly converge just because AI is doing the development?

Human instructions and requirements given to AI will always be vague, riddled with errors and, at times, downright nonsensical. A future, highly capable AI might read our thoughts and emotions, infer our hidden needs, and deliver the optimal result. Yet will people be satisfied with a solution that anticipates their mistakes and unspoken wishes? Some will surely look at the “thing the customer really needed” produced by AI and complain that the AI is useless.

Humans are foolish creatures. We possess less knowledge than AI, are less contemplative, and our performance is easily shaken by emotion. Worse, ordinary people often harbor the arrogance of believing themselves exceptional. As long as we carry this folly, we will hesitate to depend entirely on AI.

Even so, there is no doubt that AI will replace much of what humans do. But so long as we remain foolish, we will still insist on doing things ourselves.

The Hidden Risks of NVIDIA’s Open Model License

Recently, scattered media reports have mistakenly described NVIDIA’s open-weights AI model “Nemotron 3” as open source. Out of concern that such reports encourage users to ignore the usage risks of the NVIDIA Open Model License Agreement (version dated October 24, 2025; hereinafter referred to as the NVIDIA License), which is…

The Current State of the Theory that GPL Propagates to AI Models Trained on GPL Code

When GitHub Copilot launched in 2021, the fact that its training data included a vast amount of Open Source code publicly available on GitHub attracted significant attention and sparked lively debate over licensing. Alongside issues concerning conditions such as the attribution that most licenses require, a particularly large share of the discussion suggested…

The Legal Hack: Why U.S. Law Sees Open Source as “Permission,” Not a Contract

In Japan, the common view treats an Open Source license as a license agreement, that is, a contract. The same holds in the EU. However, in the United States (the origin point of almost every aspect of Open Source), an Open Source license has long been considered not a contract but a “unilateral permission”…

Evaluating OpenMDW: A Revolution for Open AI, or a License to Openwash?

Although the number of AI models distributed under Open Source licenses is increasing, AI systems in which every related component, including training data, is open remain at a developmental stage, even as a few promising systems have emerged. Against this backdrop, this past May, the Linux Foundation, in collaboration…

A Curious Phenomenon with Gemma Model Outputs and License Propagation

While examining the licensing details of Google’s Gemma model, I noticed a potentially puzzling phenomenon: you can freely assign a license to the model’s outputs, yet depending on how those outputs are used, the original Terms of Use might suddenly propagate to the resulting work. Outputs vs. Model Derivatives: the Gemma Terms of Use distinguish…

Should ‘Open Source AI’ Mean Exposing All Training Data?

DeepSeek has had a major global impact. This appears to stem not only from the emergence of a new force in China that threatens the dominance of major U.S. AI vendors, but also from the fact that the AI model itself is distributed under the MIT License, an Open Source license. Nevertheless,…

Significant Risks in Using AI Models Governed by the Llama License

Although it has already been explained that the Llama model and the Llama License (Llama Community License Agreement) do not, in any sense, qualify as Open Source, it bears noting that the Llama License contains several additional issues. While not directly relevant to whether it meets Open Source criteria, these provisions may nonetheless cause the…

The Hidden Traps in Meta’s Llama License

An Explanation of Llama’s Supposed “Open Source” Status and the Serious Risks of Using Models under the Llama License

Despite Meta’s CEO persistently promoting the notion that “Llama is Open Source,” it is widely recognized that the Llama License is in fact not Open Source. Yet few individuals have clearly articulated the precise reasons why…