Artificial Intelligence in Federal Agencies

The Administrative Conference of the United States commissioned the report, titled “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies,” from researchers at Stanford and New York Universities. Released in February 2020, it found that 45% of the federal agencies surveyed have experimented with AI and related machine learning tools and that those agencies are already using them to improve operations in myriad ways, such as monitoring risks to public health and safety and enforcing regulations on environmental protection. “The growing sophistication of and interest in artificial intelligence (AI) and machine learning (ML) is among the most important contextual changes for federal agencies during the past few decades,” the report stated.

The Department of Justice’s Office of Justice Programs had 12 AI use cases, the most of any responding agency. The Securities and Exchange Commission and NASA followed with 10 and 9, respectively. In total, the researchers documented 157 use cases across 64 agencies after studying 142 federal departments, agencies and subagencies.

More broadly, the top three policy areas where AI was used were law enforcement, health and financial regulation. In terms of government tasks, regulatory research, analysis and monitoring accounted for about 80 use cases, enforcement for about 55, and public services and engagement for about 35. The report also found that agencies were at different stages of AI and ML adoption: 53 use cases (33%) were fully deployed, roughly 60 were in the planning phase, and about 45 were being piloted or were partially deployed. More than half (53%) of the AI and ML use cases were developed in-house, while roughly a third were built by commercial contractors and about 13% involved collaboration with non-commercial entities such as academic labs.

One agency that uses AI for enforcement is the SEC, which has a suite of algorithmic tools for identifying violators of securities laws. For example, to detect fraud in accounting and financial reporting, the agency developed the Corporate Issuer Risk Assessment, a dashboard of about 200 metrics that can surface anomalies in the financial reporting of more than 7,000 corporate issuers of securities. A complementary ML tool uses historical data to predict possible misconduct, flagging filers who might be engaging in suspicious activity. Two other tools, the Advanced Relational Trading Enforcement Metrics Investigation System (ARTEMIS) and the Abnormal Trading and Link Analysis System (ATLAS), look for suspicious trading. ARTEMIS hunts for potential serial insider-trading offenders, mining a database of more than 6 billion equities and options trading records to study patterns and relationships among traders; ATLAS focuses on potential first-time offenders.
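The report does not disclose the internals of these systems, but the anomaly-screening idea behind a tool like the Corporate Issuer Risk Assessment is easy to sketch. The Python example below is a minimal, hypothetical illustration, not the SEC’s actual method: it scores each issuer by how far its reporting metrics deviate from cross-issuer norms (z-scores) and surfaces the largest outliers for human review. All names, metrics and data here are invented for illustration.

```python
# Hypothetical sketch only: the internals of the SEC's Corporate Issuer
# Risk Assessment are not public. This shows one generic approach to the
# same problem: flag issuers whose reporting metrics deviate sharply
# from cross-issuer norms, measured in standard deviations (z-scores).
import numpy as np

def anomaly_scores(metrics: np.ndarray) -> np.ndarray:
    """Return one score per issuer (row): the largest absolute z-score
    across that issuer's reporting metrics (columns)."""
    mean = metrics.mean(axis=0)
    std = metrics.std(axis=0)
    std[std == 0.0] = 1.0               # guard against constant metrics
    z = np.abs((metrics - mean) / std)
    return z.max(axis=1)                # an issuer is as suspicious as its worst metric

# Toy data: five issuers by three invented metrics (say, accruals ratio,
# revenue growth, count of prior restatements). The last issuer reports
# implausibly high revenue growth.
filings = np.array([
    [0.10, 0.05, 0.0],
    [0.12, 0.04, 0.0],
    [0.11, 0.06, 1.0],
    [0.09, 0.05, 0.0],
    [0.10, 0.95, 0.0],
])

scores = anomaly_scores(filings)
top = np.argsort(scores)[::-1][:2]      # surface the top two for human review
for i in top:
    print(f"issuer {i}: anomaly score {scores[i]:.2f}")
```

A production system would be far richer: peer-group baselines by industry and size, hundreds of metrics rather than three, and, for the predictive component the report describes, a supervised model trained on historical enforcement outcomes instead of simple deviation scores.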

Author: Prasanna Haresh Patil

References:

https://gcn.com/articles/2020/03/03/ai-use-cases-federal-agencies.aspx

https://www2.deloitte.com/us/en/insights/industry/public-sector/ai-readiness-in-government.html

https://www.govexec.com/insights/federal-government-ready-ai/