Building Public Trust in AI: How Government Agencies Can Enhance Service Delivery with Transparency and Ethics
On the topic of artificial intelligence (AI) and human resources (HR), Zoe McBride, Director of Human Resources for Microsoft Australia and New Zealand, highlights that this is an exciting time for HR leaders as AI can transform HR functions by making them more efficient and supportive of people. She believes HR is uniquely positioned to help organizations and their employees leverage AI for more meaningful work. An example is Microsoft’s AskHR Virtual Assistant, which has saved the company 21,000 hours by handling simple queries, allowing HR staff to focus on more strategic tasks. According to Microsoft’s Work Trend Index, employees using AI feel it makes their workload more manageable (92%), enhances creativity (92%), helps them focus on key tasks (93%), boosts motivation (91%), and increases job satisfaction (91%). Zoe also notes that AI enables employees to use their talents and creativity more effectively, even as processes become automated.
AI can provide immense value to government agencies beyond the HR department by enhancing efficiency, decision-making, and service delivery across various functions.
Opportunities for AI in Government Agencies
AI offers government agencies opportunities to increase operational efficiency, enhance citizen engagement, and deliver better public services—all while empowering employees to focus on higher-value, impactful tasks.
Streamlining Administrative Tasks
Just as Microsoft’s AskHR Virtual Assistant reduced time spent on routine queries, AI can automate repetitive tasks in government agencies. Virtual assistants can handle citizen inquiries, process applications, and manage document submissions, freeing up employees to focus on complex and strategic initiatives.
Improving Public Services
AI tools can help government agencies deliver services to citizens faster and more efficiently. For instance, AI can triage public inquiries, automate benefit eligibility determinations, and assist in scheduling public services. This reduces the workload on frontline staff and improves the citizen experience by providing quick, accurate responses.
Making Data-Driven Decisions
AI-powered data analysis tools can process large amounts of data from multiple sources, providing government leaders with insights that guide policy decisions, resource allocation, and program improvements. This helps agencies respond more effectively to community needs, reduce inefficiencies, and make better-informed decisions.
Enhancing Public Safety and Security
AI systems can assist in predicting and preventing potential security threats by analyzing patterns and trends. In fields such as law enforcement, emergency response, and public health, AI can improve response times, predict outcomes, and enhance coordination.
Boosting Employee Productivity
Just as AI helps HR professionals focus on more strategic work, it can also assist employees in various departments by automating low-value tasks, analyzing data, and providing tools that support decision-making. This enhances productivity, fosters creativity, and encourages innovation.
Personalizing Citizen Services
AI can analyze data to personalize services for citizens based on their needs and preferences, improving the delivery of healthcare, social services, and education. It can help agencies identify underserved populations, target interventions, and improve service outcomes.
How Government Agencies Can Build Public Trust in AI
AI is becoming increasingly integrated into public services, transforming how government agencies operate. From improving service delivery to automating complex processes, AI holds immense potential to enhance efficiency. However, AI’s success in the public sector hinges on a crucial factor—trust.
Building trust in AI requires more than just implementing the technology; it demands what GovTech writer Ben Miller calls TEA: transparency, explainability, and auditability. These three pillars ensure the public can rely on the decisions AI makes. Read on to discover how government agencies can leverage these components to build trust in AI and foster confidence in its use within public services.
Transparency—Making AI Visible
Transparency is the foundation of trust. When AI is used in public services, individuals have the right to know when and how it’s being applied. Transparency builds trust by showing people that AI is being implemented responsibly and ethically.
Government agencies can:
Clearly communicate AI usage: Whether AI is used in decision-making processes, eligibility determinations, or fraud detection, agencies must inform the public when AI is involved.
Provide “model cards” or “fact sheets”: These tools outline important details about AI models, such as where they pull data from, what algorithms are used, and their performance metrics. Offering these resources ensures the public understands the basics of AI operations.
An example of this would be a virtual assistant used to answer questions about SNAP benefits. A model card could explain the types of data the assistant uses, its success rate in providing accurate information, and how its accuracy is regularly audited.
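To make this concrete, here is a minimal sketch of what such a model card might look like, expressed in Python. Every field name, metric, and value below is a hypothetical illustration, not a published standard or an actual agency's card.

```python
# Illustrative sketch only: the fields and figures are hypothetical, not a
# real agency's model card or an established model-card standard.
from dataclasses import dataclass


@dataclass
class ModelCard:
    model_name: str
    purpose: str
    data_sources: list[str]
    algorithm: str
    performance_metrics: dict[str, float]
    last_audit_date: str


snap_assistant_card = ModelCard(
    model_name="SNAP Benefits Virtual Assistant",
    purpose="Answer common questions about SNAP eligibility and enrollment",
    data_sources=[
        "Published SNAP policy manuals",
        "Agency FAQ pages",
    ],
    algorithm="Retrieval-augmented language model",  # assumed architecture
    performance_metrics={
        "answer_accuracy": 0.94,   # hypothetical figure from an internal evaluation
        "deflection_rate": 0.60,   # hypothetical share of queries resolved without staff
    },
    last_audit_date="2025-01-15",  # hypothetical date of the most recent accuracy audit
)

# Publishing this card (for example, as JSON on the agency's website) lets the
# public see what data the assistant draws on and how its accuracy is measured.
print(snap_assistant_card)
```

The point is not the specific format: any plainly written, regularly updated summary of data sources, algorithms, and performance serves the same transparency goal.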
Explainability—Clarifying AI Decisions
Even with transparency, the public must understand how AI arrives at its conclusions. Explainability allows both the people using AI and those affected by it to see the logic behind its outputs.
Government agencies can:
Provide clear narratives and statistics: Instead of simply showing results, agencies should explain the factors that influenced an AI’s decision, particularly in cases where decisions impact individuals’ lives.
Use tools to illustrate algorithms: Visual aids, reports, and narrative explanations can help clarify how AI models work. This is especially important in sensitive areas like healthcare or social services, where individuals need to trust that AI is weighing factors fairly and accurately.
For instance, if a virtual assistant answers questions about SNAP, explainability could involve showing workers why they received a particular response and detailing where the information was pulled from.
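As a rough illustration, the sketch below shows one way an assistant's replies could carry their own explanation, pairing each answer with the sources and plain-language factors behind it. The response format, function name, and content are assumptions for demonstration only, not the design of any particular system.

```python
# Illustrative sketch only: a hypothetical response format that pairs every
# answer with the sources and factors behind it, so users can see why they
# received a given response.
from dataclasses import dataclass


@dataclass
class ExplainedAnswer:
    question: str
    answer: str
    sources: list[str]   # documents the answer was drawn from
    factors: list[str]   # plain-language reasons behind the answer


def answer_snap_question(question: str) -> ExplainedAnswer:
    # In a real system this would call the virtual assistant; here the content
    # is hard-coded purely to show the shape of an explainable reply.
    return ExplainedAnswer(
        question=question,
        answer="Households generally must meet gross and net income limits to qualify.",
        sources=["SNAP eligibility policy manual, income-limits section (hypothetical citation)"],
        factors=[
            "The question asked about income requirements.",
            "The cited manual section defines the current income limits.",
        ],
    )


reply = answer_snap_question("What are the income limits for SNAP?")
print(reply.answer)
for source in reply.sources:
    print("Source:", source)
```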
Auditability—Ensuring AI Accountability
Auditability ensures that AI systems are designed to be monitored for success, failures, biases, and accuracy. Without this, the public could lose faith in AI tools, particularly if issues such as bias or inaccuracies arise.
Government agencies can:
Allow audits of AI performance: Opening AI systems to third-party audits that track indicators such as fairness, bias, and accuracy creates an additional layer of accountability.
Ensure traceability: AI systems should offer a way to trace an output back to the data and processes that led to it. This is especially crucial in highly regulated fields like healthcare, where AI decisions need to be fully documented and traceable.
In social service applications, such as AI systems that provide Medicaid eligibility recommendations for caseworkers to review, auditability could mean regularly testing the system for bias and ensuring that outputs can be traced back to the specific inputs used in decision-making.
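As a rough sketch of what that traceability could look like in practice, the example below logs each hypothetical eligibility recommendation alongside the exact inputs it was based on, so a reviewer or third-party auditor can trace any output back to its source data. The record fields, case details, and model version are all invented for illustration.

```python
# Illustrative sketch only: a hypothetical audit trail that ties each AI
# recommendation to the inputs and model version that produced it.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []


def log_recommendation(case_id: str, inputs: dict, recommendation: str, model_version: str) -> dict:
    """Append a traceable record linking an output to the exact inputs used."""
    record = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        # A hash of the inputs lets auditors verify the record was not altered later.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "recommendation": recommendation,
    }
    AUDIT_LOG.append(record)
    return record


# Hypothetical usage: an eligibility recommendation routed to a caseworker for review.
log_recommendation(
    case_id="CASE-1042",
    inputs={"household_size": 3, "monthly_income": 2150, "state": "NY"},
    recommendation="likely eligible (refer to caseworker for final determination)",
    model_version="v0.3-demo",
)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

A log like this also gives auditors the raw material for bias testing, since outcomes can be grouped and compared across the recorded input characteristics.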
Trust Through TEA and Increased Credibility
The solution is for government agencies to align their efforts in AI with a broader strategy of trust-building. TEA empowers the public to scrutinize, understand, and assess the effectiveness of AI systems, but it must be part of a larger effort to restore faith in public institutions.
To build credibility, government agencies can:
Be proactive in public engagement: Engage with communities and stakeholders to explain AI initiatives and solicit feedback. This builds a relationship based on openness and accountability.
Showcase successful use cases: Highlight real-world examples where AI has improved public services without sacrificing fairness or accuracy.
Improve overall governance: Beyond AI, government agencies must focus on maintaining or improving their overall operations, addressing concerns like inefficiency and miscommunication.
Beyond TEA: Trust in Government
While transparency, explainability, and auditability are vital to building trust in AI, they are not a cure-all. As Guy Pearce, a digital transformation expert, points out, government agencies face a broader trust issue. According to the 2024 Edelman Trust Barometer, public trust in government is relatively low compared to trust in business and non-governmental organizations. In the U.S., this gap is particularly pronounced.
The TEA components are already reflected in policy: President Biden's 2023 executive order on AI pushed the federal government to review and report on its algorithms, and New York City's AI Action Plan urged municipal leaders to prioritize transparency and explainability when evaluating and acquiring AI tools.
If the government itself is not viewed as trustworthy, efforts to enhance AI transparency, explainability, and auditability may be met with skepticism. For TEA to be effective, the government must first establish credibility. Without trust in the institution, people may view even the most transparent AI initiatives with suspicion.
Conclusion
AI can revolutionize public services, but only if the public trusts its implementation. For government agencies to build this trust, they must embrace transparency, explainability, and auditability while also addressing the broader challenges of public perception. By combining AI’s potential with a genuine commitment to ethical, transparent governance, agencies can enhance service delivery and rebuild the public’s confidence in both technology and government.