Balancing Opinions on AI in Government: Analysis and Case Study Insights



AI is poised to revolutionize how we work and learn for three key reasons. First, early studies from September 2023 demonstrated significant productivity boosts: AI users saved more than 30 percent of their time and produced higher-quality outputs. Additionally, the exceptional performance of OpenAI’s Generative Pre-trained Transformer 4 (GPT-4), a multimodal large language model, underscores the growing prevalence of AI among students and workers, even if some keep their use of it under wraps.

Second, AI is affecting a demographic of workers previously untouched by automation. Research indicates that highly educated, well-compensated individuals in creative roles are most susceptible to AI-driven changes. As organizations increasingly adopt AI for productivity gains, the pressure to embrace these technologies will be immense.

Finally, major tech players like Microsoft and Google are integrating AI tools into their flagship office applications, signaling the inevitable integration of AI into our work environments.

Now, AI’s reach could extend into the realm of government. Here are two perspectives on the potential effect of AI in the government sector.

Perspective: The Washington Post

When Joe Biden delivered his State of the Union address on March 7, 2024, he became the first president to discuss artificial intelligence in that setting. His call was to “harness the promise of AI and protect us from its peril.” His executive order on AI has already driven work toward both goals, emphasizing AI’s safe and trustworthy development. However, Josh Tyrangiel, The Washington Post’s AI columnist, argues that Biden’s vision for AI’s potential remains limited: although Biden and his political rivals acknowledge AI’s role in generating misinformation and threatening national security, they have yet to grasp its transformative potential for government itself.

Public confidence in institutions like healthcare, education, and regulation is at historic lows, according to a 2023 Gallup poll. This crisis of legitimacy reflects a fundamental lack of trust in government’s ability to address societal challenges. The article, “Let AI remake the whole U.S. government (oh, and save the country),” argues that, used properly, AI can address this crisis.

In 2023, the IRS could answer only 29 percent of its phone calls during tax season, and human-based decisions for programs like SNAP had a 44 percent error rate. Large language model-powered chatbots could vastly improve government service, offering around-the-clock assistance in multiple languages at lower costs. This capability extends to other federal programs like veterans’ benefits, student loans, and Medicare.
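To illustrate the kind of service the article envisions, here is a minimal sketch of a multilingual benefits assistant. The call_llm function is a hypothetical stand-in for whatever LLM backend an agency might procure, and the system prompt and questions are invented for illustration; nothing here reflects an actual IRS or SNAP system.

```python
# Minimal sketch of an around-the-clock, multilingual benefits assistant.
# call_llm is a hypothetical stand-in for an agency's procured LLM backend;
# nothing here reflects a real IRS or SNAP system.

def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical LLM call; wire up a real provider SDK in practice."""
    raise NotImplementedError("Connect an LLM provider here.")

SYSTEM_PROMPT = (
    "You are a government benefits assistant. Answer questions about "
    "eligibility and application steps, and reply in the user's language. "
    "If unsure, direct the user to a human caseworker instead of guessing."
)

def answer_benefits_question(user_message: str) -> str:
    # A single LLM call serves any hour and any language at marginal cost,
    # in contrast to the 29 percent call-answer rate cited above.
    return call_llm(SYSTEM_PROMPT, user_message)

# Example usage (any language works the same way):
# answer_benefits_question("¿Cómo solicito los beneficios de SNAP?")
```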

The article concludes with the following recommendations: the government must acknowledge the fractured relationship between citizens and government and the urgent need to break free from the status quo. Embracing new technology, particularly AI, offers an opportunity to enhance efficiency and service to levels that rival global standards. Most AI innovations originate from American companies, and citizens deserve to reap the benefits.

Perspective: An Action Plan to Increase the Safety and Security of Advanced AI

A report commissioned by the U.S. government and authored by Gladstone AI offers a different perspective, one that argues for the urgent need to address the national security risks posed by AI. Gladstone AI’s mission is to ensure AI is developed responsibly and safely, protecting against risks like weaponization and loss of control, and its purpose is to help organizations make smart choices on AI policy, strategy, and risk management. The report’s authors and company co-founders, Edouard Harris, Jeremie Harris, and Mark Beall, focus on the risk of AI becoming an “extinction-level threat to the human species.” This is a far more apocalyptic take on AI in government than that of the Washington Post article, “Let AI remake the whole U.S. government (oh, and save the country).” The final report, a 247-page document obtained by Time magazine and delivered to the State Department on February 26, 2024, recommends investing in education for officials on AI’s technical complexities to mitigate risks.

Gladstone AI’s report highlights the escalating national security risks posed by current frontier AI development, comparing the potential impact of advanced AI and artificial general intelligence (AGI) to the destabilizing effect of nuclear weapons. AGI, though still hypothetical, could surpass human capabilities, and leading AI labs are actively pursuing its development, with expectations of its arrival within the next five years or less.

The document, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” proposes sweeping policy actions that would significantly disrupt the AI industry. The recommendations are the authors’ own and do not reflect the views of the United States Department of State or the United States Government. The report suggests that Congress should mandate an interim set of responsible AI development and adoption safeguards for advanced AI systems and their developers, determined by a new federal AI agency. This threshold could mirror the capabilities of cutting-edge models like GPT-4 and Google’s Gemini. Additionally, AI companies operating at the frontier should be required to obtain government approval to train and deploy new models above a specified threshold, defined by the model’s total training compute.

As AI advances, however, the report’s authors recommend expanding these thresholds based on “concrete national security considerations,” including the timeframes planners need to identify and address different threat scenarios and the time it would take adversaries to develop AI capabilities that could activate those scenarios. The report also advocates outlawing the publication of the inner workings of powerful AI models, with violations potentially resulting in jail time. Furthermore, it recommends stricter controls on AI chip manufacturing and exports, along with increased federal funding for “artificial general intelligence (AGI)-scalable alignment” research. The goal of alignment is to instill human values and goals in large language models to enhance AI safety.
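To make the compute-based threshold concrete, here is a minimal sketch in Python, assuming the widely used rule of thumb that training a transformer costs roughly 6 × N × D floating-point operations (N parameters, D training tokens). The 10^26-operation cutoff mirrors the reporting threshold in the 2023 executive order on AI, and the model scales are publicly rumored figures; none of these numbers come from the Gladstone report, which leaves the exact threshold to a federal agency.

```python
# Illustrative sketch of a compute-based licensing threshold (not from the
# Gladstone report, which leaves the exact cutoff to a new federal agency).
# Rule of thumb: training FLOPs ~= 6 * N * D, where N = parameter count and
# D = number of training tokens.

THRESHOLD_FLOPS = 1e26  # mirrors the 2023 executive order's reporting threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute with the ~6*N*D approximation."""
    return 6 * n_params * n_tokens

def requires_approval(n_params: float, n_tokens: float) -> bool:
    """Would this training run cross the hypothetical approval threshold?"""
    return estimated_training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

# Publicly rumored (unconfirmed) model scales, for illustration only:
examples = {
    "70B params, 15T tokens": (70e9, 15e12),     # ~6.3e24 FLOPs: below
    "1.8T params, 13T tokens": (1.8e12, 13e12),  # ~1.4e26 FLOPs: above
}

for name, (n, d) in examples.items():
    print(f"{name}: ~{estimated_training_flops(n, d):.1e} FLOPs, "
          f"approval required: {requires_approval(n, d)}")
```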

Risks

The report identifies two main risks associated with advanced AI. The first, termed “weaponization risk,” involves the potential for AI systems to be used in catastrophic attacks, including biological, chemical, or cyber assaults, as well as in swarm robotics applications. The second, labeled “loss of control,” pertains to the possibility that advanced AI may escape human control and act adversarially.

Both risks are compounded by “race dynamics” in the AI industry, where companies prioritize speed over safety to gain economic advantages. The report notes that the first company to achieve AGI stands to reap significant rewards, leading to a lack of immediate incentives for investing in safety and security measures.
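These race dynamics resemble a classic prisoner’s dilemma. The toy sketch below uses invented payoff numbers (not figures from the report) to show why racing is each lab’s individually rational move even though mutual caution is collectively better, which is the report’s argument for external regulation.

```python
# Toy prisoner's-dilemma payoffs for two AI labs (illustrative numbers only).
# Each entry: (payoff to A, payoff to B). "race" = prioritizing speed over safety.
PAYOFFS = {
    ("race", "race"):         (1, 1),  # both cut corners; shared accident risk
    ("race", "cautious"):     (5, 0),  # the racer captures the AGI windfall
    ("cautious", "race"):     (0, 5),
    ("cautious", "cautious"): (3, 3),  # collectively best, individually unstable
}

def best_response(opponent_move: str) -> str:
    """Lab A's payoff-maximizing move given lab B's move."""
    return max(("race", "cautious"), key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Racing dominates: it is the best response to either opponent move,
# which is why safety investment lacks an immediate incentive.
print(best_response("race"))      # -> race
print(best_response("cautious"))  # -> race
```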

Gladstone co-founder and CEO Jeremie Harris told Time magazine, “Move fast and break things, we love that philosophy, we grew up with that philosophy.” But he argues that this philosophy no longer applies when the downside of “break things” is massive: “Our default trajectory right now seems very much on course to create systems that are powerful enough that they either can be weaponized catastrophically or fail to be controlled.” The Harris brothers’ credentials include running an AI company that went through Y Combinator, the renowned Silicon Valley incubator. They and Gladstone seek to prevent one of the worst-case scenarios, in which “you get a catastrophic event that completely shuts down AI research for everybody, and we don’t get to reap the incredible benefits of this technology.”

Concerns

Both perspectives outline concerns related to AI in government:

·       Potential biases in AI algorithms

·       Lack of transparency in AI decision-making processes

·       Job displacement due to AI adoption

·       Susceptibility to data breaches in AI systems

·       Skepticism about the overall impact of AI on governance and public trust

·       AI advancement beyond human intelligence and control

·       Potential weaponization of AI

Although the excitement and concerns surrounding AI are both valid, some concerns can be readily addressed. As discussed in TIME’s article “We’re Focusing on the Wrong Kind of AI Apocalypse,” AI, when used correctly, can bring about localized successes, transforming mundane tasks into productive endeavors and empowering individuals.

Job displacement is a concern for many, as managers may instinctively opt to cut costs by laying off employees in the face of efficiency gains. However, this approach is neither necessary nor advisable: there are compelling reasons for companies to resist turning productivity improvements into staff reductions. Those who leverage their more efficient workforce stand to outperform competitors who merely maintain pre-AI output levels with fewer employees.

Some employers may resort to using AI for surveillance and layoffs, and some educators may inadvertently leave students behind as they integrate AI, but these are foreseeable challenges. Employed thoughtfully, AI can be a transformative force for good, and AI-driven productivity gains can fuel growth and foster innovation.

Worthy of further consideration is the impact AI has already had in the government sector during the COVID‑19 pandemic.

Case Study: Operation Warp Speed and the Role of Palantir

Army General Gustave Perna, a pivotal figure in the production and distribution of the first coronavirus vaccines, found himself thrust into a challenging situation in May 2020. Despite being on the cusp of retirement, he was summoned to spearhead Operation Warp Speed by the chairman of the Joint Chiefs. With minimal resources—just three colonels, no funding, and no clear strategy—Perna faced an uphill battle.

Perna understood that success hinged on having comprehensive, real-time data—analogous to “seeing himself” on the battlefield. He needed to synchronize information from various state and federal agencies, pharmaceutical companies, hospitals, and logistical partners. This data had to be standardized and actionable for rapid decision-making.

For simplicity’s sake, consider one crucial material: plastic, which is vital for vaccine-related supplies. Without an understanding of national plastic production capacity, the endeavor risked failure, leaving millions of vaccine doses unusable.

Enter Palantir, a company known for software that enables real-time, AI-driven decision-making. Despite ideological reservations, Perna entrusted Palantir to deliver on its promises. Leveraging artificial intelligence, Palantir streamlined thousands of data sources into a user-friendly interface, and in a matter of weeks Perna gained a comprehensive overview of the operation: a “God view.” Within months, Operation Warp Speed successfully distributed vaccines across all 50 states simultaneously. Through the partnership with Palantir, Perna exemplified how effective use of technology and data integration can overcome formidable challenges, even amid skepticism and political tensions.

Palantir’s role in standardizing organizational data and developing user-friendly interfaces empowers individuals at every level, from middle managers to high-ranking officials, to use AI effectively. This integration of data and applications enables users to gain valuable insights and achieve a comprehensive perspective. It is worth considering what Palantir and the frontier of AI can do to revolutionize government and increase its efficiency.
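To give a flavor of what this kind of data standardization involves, here is a toy sketch in Python with pandas. The column names and figures are invented, and Palantir’s real pipeline (entity resolution, access controls, live syncing) is far more involved.

```python
# Toy sketch of standardizing heterogeneous data sources into one schema.
# Column names and figures are invented; this is not Palantir's pipeline.
import pandas as pd

# Two sources reporting the same facts under different schemas.
state_reports = pd.DataFrame({
    "st": ["OH", "TX"],
    "doses_recvd": [12000, 30000],
    "rpt_date": ["2020-12-14", "2020-12-14"],
})
logistics_feed = pd.DataFrame({
    "state": ["OH", "TX"],
    "doses_shipped": [15000, 31000],
    "date": ["2020-12-14", "2020-12-14"],
})

# Rename both sources to a common schema, then join for one unified view.
state_reports = state_reports.rename(
    columns={"st": "state", "doses_recvd": "received", "rpt_date": "date"})
logistics_feed = logistics_feed.rename(columns={"doses_shipped": "shipped"})

unified = logistics_feed.merge(state_reports, on=["state", "date"])
unified["in_transit"] = unified["shipped"] - unified["received"]
print(unified)  # single "God view" table across both sources
```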

In June 2024, the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) awarded Palantir a contract to develop a data-sharing ecosystem called Open DAGIR (Open Data and Applications Government-owned Interoperable Repositories). The tool is part of the Pentagon’s initiative to connect systems across the department, enabling scalable use of data, analytics, and AI through enhanced collaboration with private sector partners. In August 2024, Microsoft teamed up with Palantir Technologies to integrate Palantir’s AI services with Microsoft’s Azure cloud for classified U.S. defense applications. Now, in November 2024, Amazon is following suit.

Amazon Web Services (AWS) is collaborating with Palantir to enhance AI capabilities for the U.S. military. This partnership involves integrating Anthropic's Claude large language model (LLM) with Palantir’s AI Platform (AIP) to support defense and allied agencies. The defense sector, like private businesses, relies on data-driven insights for budgeting, logistics, and strategic decisions.
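For a taste of what querying Claude looks like, the sketch below uses Anthropic’s public Python SDK. The actual defense integration runs through Palantir’s AIP on AWS infrastructure, which this snippet does not model; the model name is simply one publicly available at the time of writing, and the prompt is invented.

```python
# Minimal Claude query via Anthropic's public Python SDK (pip install anthropic).
# This shows only the raw LLM call; the Palantir AIP / AWS integration layer
# described above is not modeled here.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # one publicly available model name
    max_tokens=300,
    system="Summarize logistics reports for a planning briefing.",
    messages=[{
        "role": "user",
        "content": "Summarize: 31,000 doses shipped to TX; 30,000 received.",
    }],
)
print(message.content[0].text)
```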

Conclusion

Although the media details apocalypses small and large, how much AI is genuinely to be feared remains widely debated. Expanding this technology could revolutionize entire government agencies and functions, from tax collection to disaster relief. However, skeptics argue that while crisis situations like Operation Warp Speed demonstrate government’s ability to act swiftly, bureaucratic hurdles hinder large-scale implementation in normal times.

Ultimately, we must weigh the risks and benefits of implementing AI in government and decide which vision to place our trust in: the potential for utopia or for apocalypse. The issue is ever-evolving and expansively covered.

Curious about AI and automation? Check out TipCo Automated Systems' podcast, The Roundup, for insights and answers to all your AI questions. Listen in and stay ahead!


 

