Artificial Intelligence Development: Risks for Businesses and Potential Solutions

In the rapidly evolving landscape of artificial intelligence (AI), the fight against AI-generated fraud and deepfakes has become a global priority. The latest regulations and guidelines center on the EU AI Act, the U.S. AI Action Plan, and China's Global AI Governance Action Plan, alongside emerging international cooperation frameworks.

The European Union's AI Act, which entered into force in 2024, is a landmark regulation that classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. High-risk AI systems, a category that may include systems potentially used for fraud or deepfakes, must meet strict compliance requirements, including transparency, human oversight, and risk mitigation. The Act also applies extraterritorially to organizations outside the EU that place AI systems on the EU market [1][2].
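As a rough illustration only, the sketch below encodes the Act's four risk tiers and the obligations named above as a simple lookup, the kind of internal data model a compliance team might maintain. It is a hypothetical sketch, not an official taxonomy: the names RiskLevel, OBLIGATIONS, and obligations_for are invented for this example.

```python
from enum import Enum

class RiskLevel(Enum):
    # The four tiers defined by the EU AI Act.
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict compliance duties
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of tiers to the duties described in the article.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["banned from the EU market"],
    RiskLevel.HIGH: ["transparency", "human oversight", "risk mitigation"],
    RiskLevel.LIMITED: ["disclosure that content is AI-generated"],
    RiskLevel.MINIMAL: [],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the compliance duties associated with a risk tier."""
    return OBLIGATIONS[level]

# Example: a system classified as high-risk (e.g., one that could be
# misused for fraud or deepfakes) would carry these duties:
print(obligations_for(RiskLevel.HIGH))
```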

The Council of Europe's Framework Convention on Artificial Intelligence (2024) focuses on ensuring AI respects human rights, democracy, and the rule of law. It addresses discrimination, privacy breaches, and risks to democratic processes, all of which bear on the manipulation risks posed by deepfakes and AI-generated fraud [1].

The U.S. AI Action Plan, announced in July 2025, emphasizes deregulation to spur investment and innovation while tightening export controls on AI technology to strategic competitors. Rather than imposing stringent restrictions on AI use, it focuses on workforce development and global strategy, but it will still reshape compliance requirements for businesses leveraging AI [3].

China's Action Plan for Global AI Governance (July 2025) proposes a 13-point roadmap covering infrastructure, data security, risk management, and international collaboration. It specifically calls for establishing a global AI cooperation organization to coordinate regulation and mitigate risks such as monopolistic control by a few entities [4].

Additional trends seen in global regulatory efforts include sector-specific guidance (e.g., financial services, healthcare), data protection laws intertwined with AI rules, and ongoing development of risk management frameworks to address AI-generated misinformation and fraud [5].

In summary, the EU leads with stringent, risk-based AI regulation targeting harms from AI-generated fraud and deepfakes, while the U.S. focuses on balancing deregulation with global strategic controls, and China pushes for coordinated international oversight via a new global AI body. These efforts collectively shape the evolving global governance landscape for AI systems beyond 2023.

These frameworks also contain more specific rules on AI-generated fraud and deepfakes. For instance, the EU's Revised Product Liability Directive enables civil compensation claims against manufacturers for harm caused by defective products that embed AI systems. In the U.S., states have taken their own initiatives: Virginia has a law on pornographic deepfakes, while California and Texas regulate deepfakes in elections. China generally follows a vertical strategy, regulating one AI application at a time.

The Biden-Harris Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is an example of federal efforts to regulate the use of AI by federal agencies in the U.S. The UK's AI strategy focuses on AI safety and delegates regulation to existing regulatory agencies through a sectoral approach. ISO/IEC 42001:2023 sets out requirements for establishing an AI management system within businesses.

It is worth noting that AI-powered fraud was the fastest-growing attack type of 2023, with a tenfold increase in the number of deepfakes detected worldwide compared to 2022. As the world continues to grapple with the challenges and opportunities presented by AI, these regulations and guidelines will play a crucial role in shaping the future of this technology.

  1. The European Union's AI Act, which entered into force in 2024, mandates strict compliance for high-risk AI systems, such as those potentially used for fraud or deepfakes, requiring transparency, human oversight, and risk mitigation, with particular impact on finance and other technology-dependent business sectors.
  2. In response to the growing threat of AI-generated fraud and deepfakes, the Biden-Harris Administration issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, regulating the use of AI by federal agencies in the United States and influencing the AI landscape across the business and technology sectors.
