The U.S. government is pushing for the widespread adoption of artificial intelligence, but regulation has not kept pace, according to a report by the U.S. Government Accountability Office (GAO). This lack of guidance could pose a threat to the nation's security.
Currently, more than two dozen U.S. government agencies are already using AI or machine learning, with over 500 further applications planned. While ever more advanced AI models are being developed in the tech industry, political leaders want to regulate the technology's use in sensitive scenarios. The benefits of AI, such as helping find cures for diseases and boosting productivity, are widely recognized, but concerns center on risks like job displacement, misinformation, and algorithmic bias. Experts warn that AI could also hand malicious actors new tools for cyberattacks or biological weapons, creating fresh threats to national security.
In its comprehensive examination, the GAO surveyed 23 U.S. agencies such as the Justice Department, Homeland Security, Social Security Administration, and Nuclear Regulatory Commission. The report states that nearly half of these AI applications were initiated in the past year, highlighting the rapid introduction of AI into the U.S. government.
Most of the current and planned AI applications in the U.S. government – roughly seven out of ten – are either research-based or geared towards enhancing internal government management. For instance, NASA uses AI to monitor volcanic activity worldwide, while the Commerce Department applies it to track wildlife, for example by counting seabirds, whales, and seals. The Homeland Security Department uses AI to identify "interesting border activities," applying machine learning techniques to camera and radar data.
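To make the wildlife-counting example concrete, the sketch below shows the general pattern such systems follow: run an object detector over survey imagery and tally detections above a confidence threshold. This is a minimal illustration only, assuming a generic COCO-pretrained torchvision model and hypothetical file names, not the Commerce Department's actual pipeline, which would rely on models fine-tuned on labeled wildlife imagery.

```python
# Illustrative sketch only: a generic detect-and-count loop, not the Commerce
# Department's actual system. Model, threshold, and file names are assumptions.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained; a real survey
model.eval()                                        # model would be fine-tuned on wildlife imagery

def count_detections(image_path: str, score_threshold: float = 0.5) -> int:
    """Count objects the detector finds above a confidence threshold in one frame."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    return int((prediction["scores"] > score_threshold).sum())

# Hypothetical usage: tally detections across a folder of aerial survey frames.
# total = sum(count_detections(path) for path in ["frame_001.jpg", "frame_002.jpg"])
```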

The GAO report also sheds light on how quietly U.S. agencies use AI. Agencies were willing to disclose information publicly about more than 70% of the 1,241 active and planned AI applications, but over 350 were deemed too sensitive to publish, according to the report. Some agencies declined to discuss their AI use in detail: the State Department, for example, listed 71 AI application scenarios but could reveal only 10 of them, citing their sensitive nature. And while some agencies report only a handful of AI applications, those few draw intense scrutiny from regulators and experts concerned about their potential negative effects.
For example, the Justice and Homeland Security departments reported 25 AI applications, a fraction of NASA's 390 or the Commerce Department's 285. A small number does not imply low sensitivity, however; each application must still comply with the applicable rules.
As early as September, the GAO warned that federal law enforcement agencies had conducted thousands of AI-supported facial recognition searches, spread across six U.S. agencies between 2019 and 2022. Some 95% of these searches were conducted without the officers involved having completed proper AI training, raising concerns about bias and potential misuse. Data privacy and security experts have repeatedly warned that overuse of AI in law enforcement could lead to misidentifications, wrongful arrests, or discriminatory practices against marginalized communities.
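To illustrate why training and oversight matter here, the short sketch below shows one basic check auditors often suggest: comparing false-match rates across demographic groups at a fixed similarity threshold. It is a hypothetical example with a made-up record format and threshold, not a procedure drawn from the GAO report.

```python
# Hypothetical audit sketch: false-match rates per demographic group for a face
# matcher. Record format, group labels, and threshold are assumptions, not
# drawn from the GAO report.
from collections import defaultdict

def false_match_rate_by_group(records, threshold=0.8):
    """records: iterable of (group, similarity_score, is_true_match) comparison pairs."""
    false_accepts = defaultdict(int)
    impostor_pairs = defaultdict(int)
    for group, score, is_true_match in records:
        if not is_true_match:              # only impostor pairs can yield false matches
            impostor_pairs[group] += 1
            if score >= threshold:
                false_accepts[group] += 1
    return {g: false_accepts[g] / n for g, n in impostor_pairs.items() if n}

# Example with made-up comparisons; a large gap between groups' rates would
# signal the kind of demographic bias experts warn about.
print(false_match_rate_by_group([
    ("group_a", 0.91, False), ("group_a", 0.42, False),
    ("group_b", 0.35, False), ("group_b", 0.30, False),
]))
```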
(The GAO's September report on facial recognition, in line with an Inspector General report from the Department of Homeland Security, found that several agencies, including Customs and Border Protection, the intelligence agencies, and Immigration and Customs Enforcement, may have violated the law by purchasing geolocation histories of U.S. citizens without completing the required data protection assessments.)
As federal agencies increasingly rely on AI and automated data analysis to address critical issues, the Office of Management and Budget (OMB) is responsible for coordinating their approaches on a range of topics, including AI acquisitions. Yet OMB has yet to issue a memo outlining how agencies should adopt AI responsibly, according to the report.
"The absence of guidelines has resulted in agencies failing to implement fundamental practices for managing AI," the report states. "Federal agencies can develop their own AI guidelines until the OMB provides the necessary ones," the report adds, highlighting an inconsistent policy that is not congruent with common practices and impacts negatively on the wellbeing and security of the American public.
A 2020 law required OMB to issue AI guidance to federal agencies, but OMB missed the September 2021 deadline and did not release its memo until November 2023, the report states. OMB has agreed with the GAO's recommendation to issue AI guidelines, noting that the November memo was a response to President Biden's October Executive Order on AI security.
The GAO's work highlights several challenges and regulatory guidelines related to the application of artificial intelligence (AI) in the U.S. government, with particular attention to fraud detection, healthcare, and broader AI governance. Here are the key points:
Challenges
- Fraud Detection:
  - Speed and Efficiency: AI can analyze data far faster than human reviewers, making predictive modeling on structured data sets a promising tool for fraud detection[2] (a minimal sketch of such a model follows this list).
  - Fraudster Adaptation: Fraudsters are themselves using AI to defraud federal programs, without being bound by the guidelines and laws that govern government AI use[2].
- Data Integration and Quality:
  - Data Standardization: AI models developed from large, non-standardized data sets (e.g., electronic health records) can face integration issues and may end up trained on biased data[3].
- Transparency and Accountability:
  - Lack of Transparency: The opacity of AI decision-making processes is a concern, with significant implications for patient care[3].
  - Error Detection: Medical professionals may lack the training to recognize when an AI decision-making process has produced an error[3].
- Bias and Error Mitigation:
  - Bias Detection: Large training data sets can introduce bias into AI models; standards and evaluations are needed to detect and mitigate biased outputs[3].
- Privacy and Cybersecurity:
  - Data Privacy Risks: AI tools require large amounts of data, raising concerns about patient data privacy; cybersecurity is a further worry, as recent attacks on the health sector have shown[3].
  - HIPAA Compliance: The Health Insurance Portability and Accountability Act (HIPAA) and its implementing regulations protect health data, but they may need to be updated to meet the challenges posed by AI systems[3].
- Interoperability:
  - Integration with Existing Systems: AI-enabled tools must integrate with existing healthcare systems, including EHR systems, which poses additional challenges[3].
- Liability:
  - Accountability: There is limited legal and ethical guidance on accountability when AI produces incorrect diagnoses or harmful recommendations, a complexity compounded by the many parties involved in developing and deploying AI systems[3].
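As referenced in the fraud-detection item above, the following is a minimal sketch of what predictive modeling on structured data can look like. The features, labels, and model choice are synthetic assumptions for illustration only; this is not code from the GAO report or any federal program.

```python
# Minimal sketch of predictive modeling on structured data for fraud flagging.
# Features, labels, and model choice are synthetic assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical structured features: claim amount, claims this month, days since enrollment.
X = rng.normal(size=(5000, 3))
# Synthetic "fraud" labels correlated with the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Report precision/recall on held-out data; in practice, flagged cases would go
# to human investigators rather than being acted on automatically.
print(classification_report(y_test, model.predict(X_test)))
```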
Regulatory Guidelines
- Executive Orders and OMB Guidelines: Federal requirements for AI governance have been established through executive orders and OMB guidance to ensure processes are in place for responsible AI innovation and risk management[1].
- GAO Oversight: The GAO has performed oversight work to assess whether Federal agencies are in alignment with Federal requirements and best practices for AI use[1].
- NIST Frameworks: The National Institute of Standards and Technology (NIST) has released AI frameworks that offer practices to help agencies ensure the responsible use of AI[1].
In summary, the GAO report highlights the need for robust governance, transparency, and accountability in AI use within the U.S. government, addressing challenges related to fraud detection, data integration, bias, privacy, cybersecurity, interoperability, and liability.