Tuesday, April 9, 2024
The IBM Center and the Partnership for Public Service hosted a roundtable with government leaders and stakeholders on how the responsible use of artificial intelligence can benefit the public by improving agency service delivery.

Blog Author: Phaedra Boinodiris, Global Leader for Trustworthy AI, IBM

In a follow-up to our first post summarizing the roundtable, IBM's Global Leader for Trustworthy AI, who keynoted the session, shares additional perspectives on this important issue, especially on the importance of sound AI governance in supporting improved efforts by government.

Navigating the Global AI Governance Terrain

To date, the OECD Policy Observatory has cataloged an extensive array of national AI governance initiatives, underscoring the global momentum toward integrating AI in public sector operations. This burgeoning landscape is not without its challenges, however, as it calls for a nuanced understanding of governance that transcends mere compliance and delves into the ethical and societal implications of AI deployment.

The Imperative for Agency-Specific AI Strategy

For government agencies, the path forward is not one-size-fits-all. Crafting an AI strategy requires a deep dive into agency-specific needs, with a proactive stance on evaluating priorities, processes, and the broader impact of AI applications. Central to this is the development of AI literacy across the agency, fostering collaborations with academia and industry, and establishing centers of excellence that serve as hubs of innovation and ethical AI practice.

Addressing Common Challenges and Themes

Government agencies are uniquely positioned to balance AI governance with societal needs, national security concerns, and economic prosperity. Unlike private entities that often focus primarily on efficiency and shareholder value, governmental bodies must navigate the additional layer of societal responsibility, ensuring that the AI models they procure or produce reflect the needs of the diverse communities they serve. This effort necessarily includes embracing transparency, ensuring fairness and privacy, and fostering accountability—a multifaceted approach that demands rigorous, ongoing attention to ethics and AI governance.

Applied Training Is Necessary to Operationalize AI Governance

Operational excellence in AI governance hinges on a participatory, inclusive culture that emphasizes responsibility and collaboration. Designating accountable leaders, investing in applied governance training, incorporating ethics and governance throughout the entirety of the AI lifecycle and evaluating AI impact beyond algorithmic assessments are foundational steps. Moreover, fostering a culture of AI literacy and ethical consideration is paramount, necessitating a shift towards continuous education and a reevaluation of assumptions as needed.

Use Hackathons as a Vehicle

A promising approach to applied training is to leverage hackathons as platforms for hands-on governance training and innovation. More and more agencies are using hackathons as a way for employees to begin piloting generative AI models that improve the employee experience or citizen service delivery. Here’s how government agencies can operationalize AI governance and foster a culture of ethical AI development through well-structured hackathons:

Pre-Hackathon Preparation: Setting the Ethical Framework

The journey begins months before the actual hackathon. A designated governance leader, ideally someone with deep knowledge of AI ethics and governance, should host a keynote introducing hackathon participants to the critical aspects of AI ethics. This step ensures that participants start with a strong understanding of the ethical considerations fundamental to their projects.

Judging Criteria: Emphasizing Governance

To align the hackathon outcomes with governance objectives, the agency responsible for establishing AI policies should act as a judge. The evaluation criteria must be clearly defined to include governance artifacts such as documentation outputs (factsheets, audit reports), layers-of-effect analysis (identifying intended and unintended impacts), and a thorough assessment of the model's operational requirements. This framework ensures that projects are scrutinized not just for innovation but also for their adherence to ethical standards.
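As an illustration, the governance-weighted evaluation described above could be captured in a simple scoring rubric. This is a hypothetical sketch: the category names, weights, and rating scale below are assumptions for illustration, not criteria prescribed by any agency.

```python
# Hypothetical hackathon scoring rubric that weights governance artifacts
# alongside innovation; categories and weights are illustrative only.
RUBRIC = {
    "innovation": 0.25,
    "documentation": 0.25,          # factsheets, audit reports
    "layers_of_effect": 0.25,       # intended and unintended impacts identified
    "operational_readiness": 0.25,  # model's operational requirements assessed
}

def score_project(ratings: dict) -> float:
    """Combine per-category judge ratings (0-10) into a weighted total."""
    if set(ratings) != set(RUBRIC):
        raise ValueError("ratings must cover every rubric category")
    return sum(RUBRIC[category] * ratings[category] for category in RUBRIC)

total = score_project({
    "innovation": 8,
    "documentation": 6,
    "layers_of_effect": 7,
    "operational_readiness": 5,
})
print(round(total, 2))  # 6.5
```

Making the weights explicit in this way forces the governance categories to count toward the final score, so a project cannot win on innovation alone.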

Applied Training and Workshops: Bridging Theory and Practice

In the weeks leading up to the hackathon, participants undergo applied training through workshops tailored to their specific AI use cases. These sessions are vital for developing the necessary governance artifacts and understanding how to assess ethics and model risk comprehensively. By inviting diverse, multi-disciplinary teams to these workshops, agencies can nurture a holistic approach to AI development, embedding ethics and governance considerations from the outset.

Presentation Day: Showcasing Ethical AI Innovation

The culmination of the hackathon is a day of presentations, where teams showcase their projects, emphasizing how they’ve addressed and plan to mitigate various risks associated with their AI applications. A panel of judges with expertise in domain-specific areas, DEI, regulatory compliance, and cybersecurity rigorously evaluates each project. This not only fosters a competitive spirit but also ensures that projects are assessed from multiple perspectives, highlighting the importance of ethical considerations in AI development.

Beyond Hackathons: Cultivating Continuous Learning and AI Literacy

While hackathons are a powerful tool for sparking innovation and imparting governance knowledge, they are just the beginning. For government agencies to truly excel in AI development, there must be an ongoing commitment to building a culture of AI literacy and ethical consideration. Continuous education, discarding outdated assumptions, and adapting to new ethical guidelines are essential steps in ensuring that AI technologies serve the public good effectively and responsibly. 

Offering applied training to AI practitioners and those procuring AI offers government agencies a unique opportunity to marry innovation with ethical governance. By fostering an environment that emphasizes practical training, accountability, and multidisciplinary collaboration, agencies can pave the way for responsible AI applications that align with societal values and governance standards, ensuring that public service delivery is enhanced in an ethical, transparent, and equitable manner. 

This kind of training can give people an understanding of the functional and non-functional requirements needed in AI models to operationalize the human values we want reflected in AI.

This kind of training would also help people create more thoughtful, critical metadata, which agencies need to capture as part of their inventory process in order to assess the risks of models before those models are in the wild.
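To make the inventory idea concrete, the metadata an agency captures per model might look like the record below. This is a minimal sketch under stated assumptions: the field names and the release check are illustrative, not a mandated schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of per-model inventory metadata an agency might
# capture for risk assessment; field names are assumptions, not a standard.
@dataclass
class ModelInventoryRecord:
    name: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    risk_tier: str = "unassessed"   # e.g. "low", "medium", "high"
    fairness_evaluated: bool = False
    privacy_reviewed: bool = False

    def ready_for_release(self) -> bool:
        """A model should be risk-assessed before it is 'in the wild'."""
        return (self.risk_tier != "unassessed"
                and self.fairness_evaluated
                and self.privacy_reviewed)

record = ModelInventoryRecord(
    name="benefits-eligibility-assistant",
    intended_use="Draft responses to citizen benefit inquiries",
)
print(record.ready_for_release())  # False until risk fields are filled in
```

A structured record like this turns the inventory from a list of model names into something a risk reviewer can actually query before deployment.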

A Call for Collaborative Innovation

The journey toward responsible AI in government is not a solitary endeavor. It requires a collective effort that spans government agencies, non-governmental bodies, academia, and the private sector. By creating multidisciplinary centers of excellence (such as those being stood up in California, New York, and Texas) and promoting a culture of shared responsibility, government agencies can lead by example in the ethical development and deployment of AI technologies.

Conclusion

As government agencies embark on the journey of AI integration, the focus must remain steadfast on ethical governance, human-centered design, and collaborative innovation. We must connect the way that trustworthy AI development is taught to software engineers more closely to the ways that trustworthy AI is experienced, expected, and anticipated by those who use it in everyday life. By embracing these principles and practices, governmental bodies can harness the transformative potential of AI to enhance public service delivery, ensuring that technological advancements align with societal values and ethical standards.

 
