Artificial intelligence stands to revolutionize government agencies — as long as they understand how to procure, implement and use it, according to a new report.
The Administrative Conference of the United States commissioned the report, titled “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies,” from researchers at Stanford and New York Universities. Released in February, it found that 45% of federal agencies have experimented with AI and related machine learning tools and that those agencies are already improving operations in myriad ways, such as monitoring risks to public health and safety and enforcing regulations on environmental protection.
“The growing sophistication of and interest in artificial intelligence (AI) and machine learning (ML) is among the most important contextual changes for federal agencies during the past few decades,” the report stated.
The Department of Justice’s Office of Justice Programs had 12 AI use cases — the most of the responding agencies. The Securities and Exchange Commission and NASA followed with 10 and nine, respectively. In total, the researchers documented 157 use cases across 64 agencies after studying 142 federal departments, agencies and subagencies.
More broadly, the top three policy areas where AI is used were law enforcement, health and financial regulation. In terms of government tasks, regulatory research, analysis and monitoring clocked in at about 80 use cases, while enforcement had about 55, and public services and engagement had about 35.
The report also found that agencies were at different stages of AI and ML use. Fifty-three use cases, or 33%, were fully deployed, whereas roughly 60 were in the planning phase and about 45 were in pilot or partial deployment. More than half — 53% — of the AI and ML use cases were developed in-house, while roughly a third were built by commercial contractors and about 13% involved collaboration with non-commercial entities such as an academic lab.
One agency that uses AI for enforcement is the SEC, which has a suite of algorithmic tools to identify violators of securities laws. For example, to detect fraud in accounting and financial reporting, the agency developed the Corporate Issuer Risk Assessment, which has a dashboard of about 200 metrics that can find anomalies in the financial reporting of more than 7,000 corporate issuers of securities. An ML tool identifies filers who might be engaging in suspicious activities by using historical data to predict possible misconduct.
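The report does not describe the statistics behind tools like these, but the basic idea of screening many filers on a reporting metric can be sketched as a simple z-score check. The function name, the sample data and the peer-deviation approach below are all illustrative assumptions, not the SEC's actual method:

```python
from statistics import mean, stdev

def flag_anomalies(filings, threshold=3.0):
    """Flag filers whose value for a single reporting metric deviates
    sharply from the peer average (a simple z-score screen).

    `filings` maps filer name -> metric value. This is a toy sketch;
    production systems would combine many metrics and historical data.
    """
    values = list(filings.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [name for name, value in filings.items()
            if abs(value - mu) / sigma > threshold]
```

A real screen would run hundreds of such metrics side by side (the article cites about 200) and feed the results to reviewers rather than acting on any single flag.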
Two other tools — the Advanced Relational Trading Enforcement Metrics Investigation System and the Abnormal Trading and Link Analysis System — look for suspicious trading. ARTEMIS hunts for potential serial insider trading offenders by using a database of more than 6 billion electronic equities and options trading records to study patterns and relationships among traders. ATLAS runs similar analyses to catch first-time offenders.
Lastly, the Form ADV Fraud Predictor helps predict which financial services professionals managing more than $25 million in assets may be violating federal securities laws. The tool parses the Form ADV filings that those professionals must submit to the SEC each year and ultimately flags people as high, medium or low priority for further SEC investigation.
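The high/medium/low output the article describes amounts to scoring a parsed filing and bucketing the result. The field names, weights and cutoffs in this sketch are all invented for illustration; the actual predictor and its features are not public:

```python
def adv_risk_tier(filing, weights=None):
    """Score a parsed Form ADV-style record with a hand-set linear rule,
    then bucket it as high, medium or low priority for follow-up.

    `filing` is a dict of boolean risk indicators. Every field name,
    weight and cutoff here is hypothetical.
    """
    weights = weights or {
        "prior_disclosures": 0.5,   # past disciplinary disclosures
        "custody_of_assets": 0.3,   # adviser holds client assets
        "high_fee_structure": 0.2,  # unusual fee arrangements
    }
    score = sum(w * float(bool(filing.get(field)))
                for field, w in weights.items())
    if score >= 0.7:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"
```

A learned model would fit the weights from historical enforcement outcomes instead of hand-setting them, but the triage step at the end is the same: the tier, not the raw score, is what investigators see.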
Despite the ways these tools help SEC employees, there are challenges. For instance, many of the documents powering the tools are not in machine-readable formats, the agency struggles to find accurate ground truth for algorithm training data, and it must stay up-to-date on what constitutes wrongdoing.
The report identified a common lack of sophistication across the use cases. Researchers ranked only 12% as highly sophisticated. “This is concerning because agencies will find it harder to realize gains in accuracy and efficiency with less sophisticated tools,” the report stated. “This result also underscores AI’s potential to widen, not narrow, the public-private technology gap.”
More understanding about how agencies use and acquire AI and ML tools is necessary to further adoption, the report said.
“Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective,” the report stated. “Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted. These are important issues for public debate and academic inquiry.”
Author: Prasanna Haresh Patil