How Artificial Intelligence Can Help Agencies Improve Performance
Guest Blogger: Alayna Kennedy, with Tatiana Sokolova, and Claude Yusti
Government leaders and stakeholders find that agencies would benefit from increased sharing of effective practices and lessons learned.
The IBM Center for The Business of Government and the Partnership for Public Service are hosting a series of roundtables with government leaders to explore pressing issues surrounding AI, share best practices for addressing solvable challenges, and work toward a roadmap for government to maximize the benefits of AI. These discussions will inform two reports – the first of which will be released on Thursday, February 28 – that describe the impact and potential performance improvements that AI can bring to government in areas such as effective workplaces, skilled workforces, and mission-focused secure programs.
The Partnership and the Center hosted the most recent roundtable on February 5, convening experts for a discussion on how data, culture, and technology influence the policy decisions that government agencies need to make. This session builds on two previous roundtables: the first, on July 17, discussed AI's potential to help government and the challenges AI presents; the second, on October 24, examined how AI might affect the workforce.
Below is a summary of the key findings from the February 5 discussion, which addressed:
- focusing on effective data management,
- fostering a culture of innovation,
- developing ethical AI policies, and
- taking action to get started.
Our final roundtable will further explore how government can best address bias and promote ethical imperatives in delivering AI solutions. All of these sessions are conducted on a non-attribution basis to promote candid dialogue among participants.
Focusing on Effective Data Management
AI technology has already helped to solve complex problems in many high-tech private sector organizations. However, many government agencies cannot effectively use this technology because of poor data management practices. Legacy systems and processes continue to hamper agencies' ability to capture and analyze the full range of public sector data, and outdated software tools are often incompatible with AI-related programs, making AI development difficult.
Furthermore, agencies face challenges in sharing data with each other, limiting their capacity to benefit from AI. Agencies must cultivate a culture of data stewardship and ensure that high-quality data is available to AI systems. Having processes in place for cleansing and deconflicting data will avoid what some participants referred to as data "fratricide" – when one data source eliminates another, deleting valuable information.
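The deconfliction idea above can be illustrated with a small sketch. This is a hypothetical example – the record schema and the `deconflict` function are assumptions for illustration, not any agency's actual process – showing how fields from overlapping sources can be merged and disagreements flagged for review, rather than letting one source silently overwrite another:

```python
def deconflict(records):
    """Merge records that share an 'id'; collect conflicting field values.

    Instead of letting one source eliminate another (data "fratricide"),
    the first non-empty value for each field is kept and any later
    disagreement is recorded for human review.
    """
    merged, conflicts = {}, []
    for rec in records:
        entry = merged.setdefault(rec["id"], {})
        for field, value in rec.items():
            if field == "id" or value in (None, ""):
                continue  # skip empty values rather than erase existing data
            if field in entry and entry[field] != value:
                conflicts.append((rec["id"], field, entry[field], value))
            else:
                entry[field] = value
    return merged, conflicts

# Two sources describe the same (hypothetical) facility; neither should win silently.
source_a = {"id": "F-1", "name": "Depot 12", "region": "East"}
source_b = {"id": "F-1", "name": "Depot 12", "region": "Northeast"}
merged, conflicts = deconflict([source_a, source_b])
# merged keeps the first value seen; conflicts records the "region" disagreement
```

The design choice here – flagging conflicts instead of overwriting – is what preserves the "valuable information" participants worried about losing.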
Fostering a Culture of Innovation
In adopting AI, agencies face both technical and cultural challenges. Of the two, cultural challenges can prove both more difficult and more important to address. While most agencies rely on a culture of experience-centric decision making, in which analysts use their domain knowledge and past expertise to arrive at a solution, emerging technology calls for data-centric decision making that reaches results through assessment of complex and changing information sources.
Similarly, government agencies must also adopt a more agile approach to problem solving, where success is defined by learning from experience in developing a viable product. As an emerging technology, AI systems may periodically involve failure, bias, or technical challenges. Agency leaders need to create the space for producers and users of AI to fail fast and learn quickly from their mistakes. Addressing the inevitable technical challenges of AI systems will allow government to deploy these systems successfully. Since many government employees believe that failure of any kind is unacceptable, leaders should emphasize that technical artifacts and outcomes represent only one measure of success – learning something new also brings value. Change management will thus be a key element to building support for AI over time.
Focusing on learning will allow agencies to build communities of talented people that embrace a culture of innovation. Leaders should focus on empowering workers on the “front lines” – those with experience and domain knowledge. By investing in interfaces that allow employees without deep data science skills to leverage AI in their analyses, agencies can make the technology more accessible and equip their workforce to leverage its full potential.
Developing Ethical AI Policies
Private sector companies continue to face challenges in using AI ethically while retaining public trust. Government use of AI involves far more than social media algorithms – AI can be used to pilot drones, audit taxes, or make decisions about security clearance status. Therefore, government agencies need to establish guardrails around the use of AI technology.
Agencies need to critically assess the ways that decision makers use AI technologies to augment human decision making, including assessing the impact of a decision before applying automation. For example, low-impact and repetitive tasks can be highly automated, but high-impact tasks like clearance decisions require greater human involvement. Algorithms used for such high-impact tasks should also be transparent and explainable – agencies should be able to explain how and why a decision was made. At the same time, agencies should build effective cybersecurity protections for AI systems, which will also promote trust.
Government can strengthen partnerships with industry and academic communities like the Institute of Electrical and Electronics Engineers (IEEE) and the Association for the Advancement of Artificial Intelligence (AAAI) to develop core principles of AI ethics. Such principles can apply widely to both technical and non-technical employees.
Taking Action to Get Started
In addition to creating ethical guidelines for AI-enabled tasks, agencies can begin implementing this innovative technology in small ways and building from early experiments. And while ethical guidelines should be built into the design of new AI systems from the start, agency leaders should avoid being paralyzed by overanalysis of problems; rather, agencies can implement new solutions through iterative processes. The first measure of success for AI systems should not be perfection, but whether the AI outperforms older methods and what can be learned from each iteration.
Agency leaders can get started on AI development by identifying champions with domain knowledge and enthusiasm for implementing AI to improve processes. Collaboration between leaders and front-line employees will help build momentum for AI adoption.
Implementing effective data management, fostering a culture of innovation, and establishing ethical policy guidelines all reflect important steps for agencies to take to utilize AI technology. Leaders should use these imperatives to take action, starting with small problems where AI can bring value. By iterating on these solutions, agencies can learn and understand how best to use AI consistently with a culture of innovation and progress.
Image courtesy of thesomeday1234 at FreeDigitalPhotos.net