America’s AI Plan: A Big Vision Requires (Very) Smart Execution

This article will count 0.25 units (15 minutes) of unverifiable CPD. Remember to log these units under your membership profile.


On 23 July 2025, the White House released Winning the Race: America’s AI Action Plan (the Plan), a bold roadmap laying out over 90 federal policy actions to secure the U.S. lead in artificial intelligence. The Plan focuses on innovation, infrastructure, and international leadership, and it aims to boost economic strength, national security, and public trust in AI. But as government agencies ramp up their AI use, a recent AuditBoard survey offers a timely reminder: success depends on execution, not just ambition.

What’s in the AI Action Plan?

The Plan focuses on three major pillars:

  1. Accelerate AI Innovation

    • Promote open-source and open-weight models

    • Review and rescind outdated or restrictive policies

    • Require government AI to be “objective and non-ideological”

    • Streamline federal procurement and deployment

  2. Build American AI Infrastructure

    • Fast-track data center and chip factory approvals

    • Expand domestic semiconductor capacity

    • Develop secure computing environments for military and intelligence

    • Invest in workforce training and cybersecurity

  3. Lead Globally in AI Diplomacy and Security

    • Export American AI technology and standards to allies

    • Tighten controls on sensitive tech exports

    • Shape global norms for safe, responsible AI

Back to Reality: The AI Governance Gap

AuditBoard’s survey, based on responses from 412 U.S. executives in audit, IT, and compliance, revealed:

  • 80% are highly concerned about AI risks

  • Only 25% have fully implemented AI governance programs

  • Roughly 1 in 3 rely on third-party AI without clear risk controls

  • Many lack foundational practices like usage logging, model documentation, or clear access policies.

The report’s core message? Execution and accountability, not just tools or dashboards, are the real challenge.

How to Make It Work: 5 Critical Ingredients

The Plan already includes key governance commitments covering AI inventories, risk assessments, and procurement modernization. The following suggestions build on those strengths.

  1. Get the Basics Right: Know What You’re Using and Who Can Access It

    The AI Action Plan instructs government agencies to keep track of where and how AI is being used. To make this effective, agencies should go beyond general use cases: maintain detailed inventories of the actual AI models in operation, and record who is using each system and when through proper usage logs and audit trails. Just as important are clear, documented rules on who has access to these tools and under what conditions.

    Getting these basics right lays a solid foundation for future growth. Without this clarity, expanding or automating AI systems could expose gaps or risks that are much harder to fix later.
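To make these basics concrete, here is a minimal sketch of what an agency-level AI inventory with usage logging and access checks might look like. All names here (`AISystem`, `AIInventory`, the example system and users) are illustrative assumptions, not anything prescribed by the Plan or the AuditBoard report:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISystem:
    """One entry in the agency's AI inventory."""
    name: str                    # internal system name
    model: str                   # the actual model in operation
    owner: str                   # accountable team or role
    approved_users: set[str] = field(default_factory=set)

class AIInventory:
    """Tracks which AI systems exist, who may use them, and every use attempt."""

    def __init__(self) -> None:
        self.systems: dict[str, AISystem] = {}
        self.usage_log: list[dict] = []   # append-only audit trail

    def register(self, system: AISystem) -> None:
        self.systems[system.name] = system

    def record_use(self, system_name: str, user: str, purpose: str) -> None:
        system = self.systems[system_name]
        allowed = user in system.approved_users
        # Log every attempt, including denied ones, so the audit trail is complete.
        self.usage_log.append({
            "system": system_name,
            "user": user,
            "purpose": purpose,
            "allowed": allowed,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        if not allowed:
            raise PermissionError(f"{user} is not approved to use {system_name}")
```

A real implementation would sit behind an API gateway or identity provider rather than in application code, but even this toy version captures the three basics: a named inventory of actual models, a usage log, and documented access rules enforced at the point of use.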

  2. Clarify Governance Ownership Across Teams

    The Plan designates Chief AI Officers (CAIOs) to coordinate agency AI use. Building on this:

    • Ensure CAIOs have support from cross-functional teams (legal, IT, procurement, ethics, security)

    • Clearly define who reviews AI risks, who signs off, and how issues are escalated

    • Foster interdepartmental alignment on policy application

    Clarity here ensures responsibility doesn’t get lost between roles.

  3. Scale Governance Gradually—Then Automate

    The Plan supports modernizing AI procurement and oversight. A smart sequence:

    • Start with manual workflows and human oversight

    • Focus on training based on function and risk level

    • Automate only after these systems are reliable and widely adopted

    This helps ensure automation enhances governance rather than replicating blind spots.

  4. Shift to Continuous Monitoring and Real-Time Risk Response

    The Plan encourages agencies to update inventories and risk profiles. Going further:

    • Move from annual reviews to ongoing monitoring

    • Create incident response teams and regular check-ins

    • Update governance policies as tools and threats evolve.

    This aligns governance with the speed of AI model development.

  5. Lead by Example: Model Transparency and Best Practice

    The federal government already promotes transparent AI and open standards. To amplify this:

    • Showcase strong governance processes publicly

    • Share playbooks and frameworks with other sectors

    • Encourage compliance by tying procurement eligibility or funding to governance readiness

    This positions the U.S. not only as a leader in AI innovation but also in trusted AI governance.

In Conclusion

America’s AI Action Plan sets a strong strategic direction. By pairing its ambitious goals with practical, grounded governance practices, the U.S. can build a future where AI is not only powerful, but safe, accountable, and trusted.

