TL;DR

The AI regulatory landscape is evolving rapidly. North American companies must navigate US federal guidance, state-level regulations (particularly California), Mexico's emerging AI framework, and the EU AI Act's extraterritorial reach. Proactive compliance is significantly cheaper than reactive remediation.

The Regulatory Landscape

United States: Federal Level

No comprehensive federal AI legislation yet. Primary frameworks:

  • Executive Order 14110 on AI (October 2023): Directed federal agencies to develop AI safety standards. Required developers of powerful AI systems to share safety test results with the government.
  • NIST AI Risk Management Framework (2023): Voluntary framework for managing AI risks across four functions: Govern, Map, Measure, and Manage. Increasingly referenced by regulators and courts as a standard of care.
  • Sector-specific regulations: Financial services (OCC, CFPB guidance on AI in lending), healthcare (FDA guidance on AI/ML-based medical devices), employment (EEOC guidance on AI in hiring).

United States: State Level

California (most active state regulator):

  • CCPA/CPRA: Requires disclosure of automated decision-making; consumers can opt out of certain automated processing of their personal data
  • AB 2013 (2024): Requires disclosure of training data for AI systems
  • SB 1047 (vetoed 2024): Would have required safety testing for large AI models — signals future direction

Colorado, Connecticut, Texas: Have passed or are considering AI-specific legislation focused on algorithmic discrimination in high-stakes decisions (employment, credit, housing).

Mexico

No comprehensive AI legislation yet. Applicable frameworks:

  • LFPDPPP (Federal Law on Protection of Personal Data): Governs data privacy. Applies to AI systems that process personal data
  • NOM standards: Sector-specific technical standards that may apply to AI in regulated industries
  • COFECE guidance: Competition authority has issued guidance on algorithmic pricing and collusion risks

Mexico's government has signaled intent to develop an AI regulatory framework aligned with international standards.

EU AI Act (Extraterritorial Reach)

Applies to any company that places AI systems on the EU market or whose AI system outputs are used in the EU — regardless of where the company is based.

Risk-based classification:

  • Unacceptable risk: Prohibited (social scoring, real-time remote biometric identification in public spaces)
  • High risk: Strict requirements (employment decisions, credit scoring, critical infrastructure)
  • Limited risk: Transparency requirements (chatbots must disclose they are AI)
  • Minimal risk: No specific requirements
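The tiered structure above lends itself to a simple lookup when triaging an AI inventory. The sketch below is illustrative only — the use-case names and tier assignments are assumptions for demonstration, not taken from the Act; real classification requires legal review against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency requirements
    MINIMAL = "minimal"            # no specific requirements

# Illustrative mapping only -- a keyword table is no substitute
# for counsel reviewing each system against the EU AI Act.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_biometric_id": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case; unknown use cases
    default to MINIMAL in this sketch (a real process would flag
    them for manual review instead)."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

For example, `classify("credit_scoring")` returns `RiskTier.HIGH`, which would trigger the strict-requirements workflow.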

North American companies with EU customers or employees must comply with the EU AI Act for applicable use cases.

Practical Compliance Steps

  1. Inventory your AI systems — document all AI tools and their use cases
  2. Classify risk levels — apply the EU AI Act risk framework to each system
  3. Implement transparency — disclose AI use in customer-facing applications
  4. Audit for bias — test AI systems for discriminatory outcomes in high-stakes decisions
  5. Document governance — maintain records of AI system design, training data, and testing
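The five steps above can be operationalized as a per-system record that makes gaps visible. The schema and field names below are a minimal sketch of one way to do this, not a prescribed format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative schema)."""
    name: str
    use_case: str                 # step 1: inventory
    risk_tier: str                # step 2: e.g. "high" per the EU AI Act framework
    discloses_ai_use: bool        # step 3: transparency
    last_bias_audit: Optional[str] = None          # step 4: ISO date of last audit
    governance_docs: list = field(default_factory=list)  # step 5

def compliance_gaps(record: AISystemRecord) -> list:
    """Flag missing compliance artifacts for a single system."""
    gaps = []
    if record.risk_tier == "high" and record.last_bias_audit is None:
        gaps.append("high-risk system lacks bias audit")
    if not record.discloses_ai_use:
        gaps.append("no AI-use disclosure")
    if not record.governance_docs:
        gaps.append("no governance documentation")
    return gaps
```

Running `compliance_gaps` over the full inventory yields a prioritized remediation list — the cheap, proactive counterpart to discovering the same gaps during an enforcement action.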

Key Takeaways

  • The US lacks comprehensive federal AI legislation but has sector-specific guidance.
  • California is the most active state AI regulator — its laws set de facto national standards.
  • The EU AI Act has extraterritorial reach — North American companies with EU exposure must comply.
  • Mexico's AI regulatory framework is emerging — align with international standards proactively.
  • Proactive compliance is significantly cheaper than reactive remediation.