March 2024 Public Sector AI Roundup
A welcome note
Welcome to the first edition of AI in the public sector. We bring you the latest news about artificial intelligence in government, in the United States as well as in other parts of the world.
We are public sector employees who are curious about developments in artificial intelligence, but also concerned about their implications for ethics and security. We hope to share resources that educate other public sector employees and help them see past the hype, so that we can evaluate these tools, call out vendors, and make sure that we engage with AI, if at all, in meaningful and useful ways. Most of all, we aim to help you develop AI literacy and awareness so we can prevent harm and continually center the people we serve.
We're in the process of building out publicsectorai.tech, which, when completed, will have links to resources and reading lists. In the meantime, here's our newsletter. Thank you for reading.
Latest AI news in the public sector
Where latest means 'latest' to us, as we catch up with the news.
There's no science to the emojis; they are simply a response to whether we are feeling pensive or optimistic about each story. AI developments that lean towards surveillance and weapons, or that do not consider bias, data quality, fairness, and ethics, will tend to earn three sad emojis.
- π π π Revealed: a California city is training AI to spot homeless encampments: San Jose city employees are "driving a single camera-equipped vehicle through sections of district 10 'every couple weeks' [..] If the City were to productionize this technology, we envision the cameras to be on our fleet motor pool vehicles that regularly drive throughout city limits." (Using AI to spot homeless encampments without fixing the root problem, which is lack of affordable housing, raises concerns about "what is this for" and also "surveillance!")
- π π π Treasury Announces Enhanced Fraud Detection Process Using AI Recovers $375M in Fiscal Year 2023. Related, the Treasury is also concerned about an increase in AI-abetted fraud. "The U.S. Treasury Department has issued a report on the growing risks to banks, other financial organizations, and their business and consumer clients from criminals using artificial intelligence (AI) applications in fraud. The good news within that otherwise dire warning, however, is increased data sharing will greatly reduce that threat, which may also generate new business activities for responsive entrepreneurs." (link to PDF here).
- π π π Starting this spring, United States Digital Response will welcome a cohort of Google.org Fellows for six months
- π π π Western and Chinese scientists met in Beijing last week to identify 'red lines' on AI, including on the making of bioweapons and the launching of cyberattacks. Read the Consensus Statement here. On Autonomous Replication or Improvement: "No AI system should be able to copy or improve itself without explicit human approval and assistance. This includes both exact copies of itself as well as creating new AI systems of similar or greater abilities."
- π π π VP Harris says US agencies must show their AI tools aren't harming people's safety or rights
- π π π Related, the White House issued new rules on how government can use AI. Here's what they do, along with guidance for agencies through the Office of Management and Budget (OMB)
- π π π NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law
- π π π A pair of stories to read together: New York Will Test Gun-Detecting Technology in Subway System, Mayor Says, and Shareholders Sue AI Weapon-Detecting Company, Allege It 'Does Not Reliably Detect Knives or Guns'
- We Tested AI Censorship: Here's What Chatbots Won't Tell You: "according to Micah Hill-Smith, founder of AI research firm Artificial Analysis [..] the 'censorship' that we identified comes from a late stage in training AI models called 'reinforcement learning from human feedback' or RLHF"
- π π π Especially relevant for public sector employees: what the venture class is thinking about when it comes to AI in government procurement. This is basically a roadmap for how they plan to push goods and services into our industry, so it is well worth reading
- π π π What it costs to poison the well: A group of AI researchers recently found that for as little as $60, a malicious actor could tamper with the datasets ChatGPT relies on to provide accurate answers (Quality of data, and inherent bias in data, are things that those of us in the public sector must pay extra attention to)
- π π π AI could help automate around 84% of repetitive service transactions across government: "According to researchers at the Turing, the UK central government carries out around one billion citizen-facing transactions per year spanning across almost 400 services - including passport applications and universal credit processes - and 57 departments. [..] They estimated that these services were made up of around 143 million complex but repetitive transactions, giving them a high potential for automation by AI. And they believe that 84% of these transactions could be easily automated" (Looking forward to more discussion, but human guardrails and recourse are still required. Read Virginia Eubanks's essential book "Automating Inequality" for real-life case studies and guidance from pre-AI times, and the paper 'Computer says no', which spells out the horrors of the Australian Centrelink debacle)
- π π π Nava PBC, a vendor that works with local, state, and federal governments on digital services, recently hosted an event discussing a partnership with the Gates Foundation and the Benefits Data Trust that "seeks to answer if generative and predictive AI can be used ethically to help reduce administrative burdens for benefits navigators. If successful, we hope to scale our learnings to streamline how families with young children and other vulnerable populations can access benefits and services"
- π π π The White House is ordering all federal agencies to name chief artificial intelligence officers to oversee the federal government's various approaches to AI and manage the risks that the rapidly evolving technologies might pose. (Questions remain about qualifications and training required to fill such roles)
- π π π The [National Telecommunications and Information Administration], part of the United States Department of Commerce, has issued an AI Accountability report: there are detailed recommendations across three areas, Guidance, Support, and Regulation. (This calls to mind earlier guidance in the seminal paper "Datasheets for Datasets", where Gebru and several others call for greater understanding of data provenance in machine learning, much like nutrition labels for datasets. Read the PDF here.)
- π π π The hot new government job is AI specialist
AI Moves
- The Adams administration quietly hired its first AI czar: congrats to Jiahao Chen, a friend of this newsletter, for his appointment
- Announcement of the 2024 Cohort of U.S. Science Envoys: Dr. Rumman Chowdhury, data scientist and social scientist, CEO of Humane Intelligence, Responsible AI Fellow at Harvard University's Berkman Klein Center for Internet and Society, has been appointed to serve as the US Science Envoy for Artificial Intelligence
- Carrie Bishop appointed new GenAI Partner Lead at United States Digital Response: significant, as Bishop was the first Chief Digital Services Officer at San Francisco Digital Services
Reading list
- How should government guide the use of generative AI?
- If you invest more in AI than in people, you're doing it wrong
- In a survey of mayors through Bloomberg Philanthropies, 96% of mayors expressed interest in generative AI (PDF file). Their biggest questions center on the technology's implementation, impact on city services and efficiency, and its ethical, legal, and social implications
- All in on AI: the federal government is leaning on its "unique value proposition" to entice AI and AI-adjacent IT pros to bring their skills to public service
- Some essential reading on the use of generative AI in the recent Indonesian elections by Michelle Kurilla
- Also at CFR, a discussion with Kat Duffy about the opportunities and challenges facing governance on AI
- Simon Willison, creator of the ever-useful Datasette project and one of the people I look to the most for 'useful things to try with AI', shows how he uses Claude and ChatGPT to learn and automate some mapping tasks. "Could I have done this without LLM assistance? Yes, but not nearly as quickly. And this was not a task on my critical path for the day - it was a sidequest at best" (this has also been an area where we've personally found AI tools to be useful)
- The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con
- Some concerns around using AI in the mental health space
- Startup gets $50M to build an AI that understands how you're feeling
- Hacking internal AI chatbots with ASCII art is a security team's worst nightmare
- A Good AI Program must start with good data
- How Cities use the Power of Public Procurement for Responsible AI (Related: we were involved in some earlier discussions leading to this IEEE resource on procurement. Read a transcript from a related discussion here.)
Resources
Links in this category via the ever-resourceful Zeldman. Hit reply and send us more resources that may be useful to public sector employees. Vendor marketing materials will be ignored.
- Elements of AI: a series of free online courses created by MinnaLearn and the University of Helsinki. We want to encourage as broad a group of people as possible to learn what AI is, what can (and can't) be done with AI, and how to start creating AI methods. The courses combine theory with practical exercises and can be completed at your own pace.
- What is AI Literacy? We define AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.
Are you a public sector employee? What's happening in your world with AI? Are you feeling concerned, excited, cautiously optimistic, or mostly pessimistic? Do you feel you have the skills or access to the training needed to work with AI in the near future? Who leads AI on your teams? Let us know if this was useful, and what you hope to see in future editions.