Implementing AI Across the Federal Government
Blog Co-Author: Tom Suder, Founder & President, Advanced Technology Academic Research Center (ATARC)
“Over the past two to five years we’ve really seen a migration of artificial intelligence from the lab out into operations.”
It’s no secret that Americans increasingly find themselves distrustful of their government and skeptical that those in office will “do what is right” for the country. These feelings are in part due to gridlock in Washington and an increase in political polarization. As a result, what often goes unnoticed is the dedication of America’s public servants to work in a nonpartisan manner to develop and implement policies for the benefit of the country as a whole.
One area in which these efforts have borne fruit is the federal government’s adoption of emerging technologies, especially artificial intelligence (AI), in order to better serve the public. Far from happening spontaneously, this adoption is the result of work by expert staff at the Office of Management and Budget (OMB) and other agencies, across administrations, who have built a framework for applying AI to better serve the American public.
To illustrate, in 2016 White House staff produced a report on Preparing for the Future of Artificial Intelligence which sought to analyze “the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy.” The years that followed also saw the signing of Executive Order (EO) 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” which itself built off the foundation of EO 13859, “Maintaining American Leadership in Artificial Intelligence.”
The goal of both Executive Orders was to help ensure that the United States leads the world in the field of AI and leverages the promise of that technology for the betterment of the American people. Both were the product of recommendations from career staff at OMB and the White House.
In the same vein, June 2021 marked the launch of the National Artificial Intelligence Research Resource Task Force. Science Advisor to the President and Office of Science and Technology Policy Director Eric Lander heralded the creation of the Task Force by saying that “America’s economic prosperity hinges on foundational investments in our technological leadership.” During an era of hyperpartisanship, America’s civil servants have continued to work to harness the power of AI on behalf of the United States.
From the Lab to the Real World
In order to help federal agencies continue to move from theory to practice and become more operational in the application of AI, the Advanced Technology Academic Research Center (ATARC) and the IBM Center for the Business of Government convened a roundtable discussion featuring sixteen experts representing private industry, academia, think tanks, and agency leadership. The panel was co-moderated by ATARC Founder and President Tom Suder and IBM Center for the Business of Government Executive Director Dan Chenok, who previously served as Branch Chief for Information Policy and Technology at OMB.
In the course of the conversation that followed, panel participants provided examples of AI successfully being used for the public good, debated the costs and benefits associated with ‘open’ and ‘closed’ approaches to data gathering and analysis, and analyzed the factors that lead to a successful deployment of AI in the public sector.
AI for the Public Good
Examples of the successful deployment of AI-powered applications considered by panel participants were as varied as they were impactful. One of the most notable was a pilot program rolled out during the early days of the COVID-19 pandemic to help patients and healthcare professionals understand the prognosis of a coronavirus infection. This information was incredibly powerful in enabling hospital leadership to make informed decisions about where to allocate resources, such as beds and physicians, and ultimately save lives. Because the application was transparent in explaining the factors that guided its recommendations, it also served as an educational tool, allowing physicians and residents to understand the variables that led to a certain prognosis and how they could change over time.
AI is a Team Sport
Another AI-powered application that surfaced during the discussion was one that aimed to prevent harmful behavior in service members, including suicide. The panelist who shared the application with the group explained that it had a very high success rate of identifying at-risk service members. However, they noted that it was ultimately up to human actors to determine how to “turn those insights into action” and identify a valid intervention that would not cause residual problems.
Other practical applications of AI raised during the panel included leveraging data gathered by drones to narrow the projected path of hurricanes, managing the influx of cybersecurity events and reviewing logs, and applying natural language processing (NLP) to analyze public comments. One application was even used to predict where floods or soil saturation might occur and help manage levees, locks, dams, and critical waterway infrastructure.
Open versus Closed
In addition to discussing the ways in which AI-powered applications have already been used to contribute to the public good, panelists also weighed the benefits and challenges associated with ‘open’ systems, which rely on data sharing, and ‘closed’ systems, which restrict access to internal data.
In a world that has already witnessed the inadvertent exposure of sensitive military sites by fitness apps, there is understandable concern around how public data sources may be used by foreign adversaries to harm US interests. In the words of one panelist, “open data may sometimes be used in ways that you’re not expecting.” By way of example, that panelist highlighted the work they were doing in leveraging commercial data to understand changes happening at US military bases and how that could affect national security.
Taking a different tack, another panelist argued that there was also a cost to not sharing data or working collaboratively. In particular, they argued that because the resources of any single organization are inherently limited, pursuing responsible partnerships represents a viable path to realizing objectives that would otherwise be impossible.
In the panelist’s words, “are we losing things by not collaborating or putting a choke or a governor on collaboration? I think the answer is yes...I think you’re starting to see a shift in organizational culture where understanding that there are considerations and use cases where you would open up access to closed data is becoming common and we’re starting to implement those mechanisms.”
Setting up for Success
Panelists also spent time evaluating the conditions necessary for the successful rollout of an AI-powered application in the public sector by examining two successful use cases in greater depth. The first was Pittsburgh, Pennsylvania’s Scalable Urban Traffic Control (Surtrac) system.
Once known for having some of the country’s most congested roadways, the City of Pittsburgh began implementing Surtrac in 2012 in order to better control the City’s traffic and reduce commuters’ travel time. Developed by researchers at Carnegie Mellon University and later commercialized by Rapid Flow Technologies, Surtrac was deployed at approximately fifty intersections across Pittsburgh between 2012 and 2015. By analyzing clusters of traffic flow to intelligently prioritize signal timing, Surtrac was ultimately able to reduce intersection wait times by roughly forty percent, cut vehicle travel time by twenty-five percent, and lower emissions by twenty percent.
Another use case explored during the panel was that of South Korea’s COVID-19 contact tracing system. The system was initially controversial among Korean citizens because it leveraged personal information, in particular geolocation data provided by the country’s mobile carriers, thereby sparking privacy concerns.
However, because Korea had already experienced the SARS epidemic in the early 2000s, the country came into the COVID-19 pandemic with a robust legal framework regarding when and how such sensitive information could be shared with government entities. Furthermore, surveys demonstrated that the Korean public was willing to sacrifice a measure of data privacy for the sake of improved public health, providing legitimacy to government action. The South Korean government also enjoyed a close partnership with the country’s mobile carriers and credit card companies. As a result, South Korea’s contact tracing system moved from ideation to national deployment in just a month.
Taken together, Surtrac and South Korea’s contact tracing system provided strong evidence that the successful leveraging of AI in the public sector requires three elements: a strong legal framework, citizen understanding of what data is being collected and how it will be used to benefit them, and public-private partnerships.