Viewpoint: ‘AI Action Plan’ Does Not Address the Black Box Problem

On July 23, 2025, the White House released its policy document entitled Winning the Race: America’s AI Action Plan. The Action Plan provides a thorough analysis of strategies, measures, and potential concerns to guide the future development of AI. It comprehensively addresses issues and recommends policy actions under the framework of three “pillars,” identified as innovation, infrastructure, and international diplomacy and security.


However, a key issue is not addressed: the urgent need to understand how Artificial General Intelligence (“AGI”), Superintelligence, and Alternate Intelligence will compound the Black Box problem.


Within the section of the AI Action Plan called “Pillar I: Accelerate AI Innovation,” some aspects of the Black Box problem are identified. Under a point entitled Invest in AI Interpretability, Control, and Robustness Breakthroughs, the Action Plan describes part of the concern as follows:


Today, the inner workings of frontier AI systems are poorly understood. Technologists know how LLMs work at a high level, but often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system. This lack of predictability, in turn, can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake. The United States will be better able to use AI systems to their fullest potential in high-stakes national security domains if we make fundamental breakthroughs on these research problems.


The Action Plan recommends (1) launching a technology development program to advance AI interpretability, AI control systems, and adversarial robustness, (2) prioritizing fundamental advancements in AI interpretability, control, and robustness, and (3) coordinating an “AI hackathon” initiative to attract the best talent to test AI systems.


This is commendable as far as it goes, which is not nearly far enough. It does not specifically highlight the critical need to understand the effects and consequences of the ongoing competition to create systems operating at higher levels of intelligence.


First, nation-state developers and private developers are racing toward AGI. They give the term various meanings, but most common definitions use it to refer to advanced AI systems that can perform a broad range of tasks faster and better than humans, or AI that is at least as competent as humans in most cognitive tasks. Google and others have warned that AGI could greatly empower Agentic AI systems to plan and execute actions autonomously. They warn that this increases the risk of real-world consequences of misalignment, i.e., an AI system pursuing goals and taking actions that it knows its developer did not intend. It also heightens other risks such as misuse, mistakes, and structural risks (defined as harms from multi-agent dynamics and conflicting incentives). Google sets this out in detail in its April 2025 paper entitled An Approach to Technical AGI Safety & Security.
