July 2024 Public Sector AI Roundup
I'll be honest: I have been feeling a little exhausted by AI.
I am tired of AI being used for weapons and surveillance, or to sell things that people do not want. More and more, I don't know where I personally stand between the AI boomers and the doomers. Just the other day, I tried to help a friend who isn't in tech sign up for business email, and every single cloud email provider had walls of 'sign up for our AI stuff!' advertising that was not only not useful, but also anxiety-inducing and distracting from the actual work of… signing up for an email service.
Still, I think it's essential that we keep track of what people are trying to do with AI in the public sector, which, I believe, should be held to a different standard. And as a reminder, when we speak of AI here, it's not just generative AI: we're also interested in other forms of AI, machine learning included.
The big news this season is that Goldman Sachs thinks that, for all the gen AI hype, there has been little to show for it. That's been nothing short of explosive, though it's nothing we didn't already know. For what it's worth, Benedict Evans has also cautioned that there doesn't appear to be product-market fit.
Ben Thompson's analysis of Apple Intelligence, arguing that prudent, on-device artificial intelligence may be the future, is also worth a read.
News
Where 'news' is 'stuff that happened' and 'reading list' is 'stuff you should read about stuff that happened'.
- U.S., U.K. Announce Partnership to Safety Test AI Models
- Huge AI funding leads to hype and ‘grifting’, warns DeepMind’s Demis Hassabis: The surge of money flooding into artificial intelligence has resulted in some crypto-like hype that is obscuring the incredible scientific progress in the field, according to Demis Hassabis, co-founder of DeepMind.
- OpenAI deems its voice cloning tool too risky for general release: Delaying the Voice Engine technology rollout minimises the potential for misinformation in an important global election year
- Washington’s Lottery halts AI app after user says it generated a topless photo
- AI is causing massive hiring discrimination based on disability (Archive link)
- Trudeau announces $2.4 billion for AI-related investments (Archive link)
- An AI-powered tool to automate the processing of public consultations in the UK: From the project page, "A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report. […] If we can apply automation in a way that is fair, effective and accountable, we could save most of that £80m." From a LinkedIn comment: "What would you use it for?"
- Oracle’s Larry Ellison thinks every government will want to build a ‘sovereign’ AI cloud in the future (Archived link): of course he does. We always find it useful to look at the things hyped by industry CEOs, not because they're always right, but because they provide useful insights into the types of marketing materials we might soon receive.
- San Jose forms a nationwide coalition: A coalition of more than 500 officials at 200 local, county and state governments across the United States is banding together to promote the responsible use of artificial intelligence, spearheaded by technology leaders from San Jose.
- Colorado Chief Data Officer Amy Bhikha noted exponential growth in data and its potential, but also that, at the same time, there are "not enough people to do anything meaningful with all that data."
Reading List
- Panel discussion at Boao Forum in Hainan hears there is belief in the industry that China is lagging behind on generative AI: China’s artificial intelligence firms need to focus on developing their own hardware and software if they want to catch up with US market leaders, industry leaders have said.
- Why we’re fighting to make sure labor unions have a voice in how AI is implemented: To be clear, the labor movement is not anti-tech. It’s pro-worker.
- Your AI Product Needs Evals
- Release of the Artificial Intelligence and Democratic Values Index report by the Center for AI and Digital Policy. The updated AI index ranks policies and practices in 80 countries, with Canada, Japan, Korea, Colombia, Slovenia, and the Netherlands ranking at the top.
- Knowing Machines: this excellent project teaches you how AI models use large training sets, and what the problems with current training sets are. To give you an idea of the scale: "If your full-time, eight-hours-a-day, five-days-a-week job were to look at each image in the dataset for just one second, it would take you 781 years." (There's a quick back-of-the-envelope check of that figure just after this list.)
- A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
- Sure, No One Knows What Happens Next. But Past Is Prologue When It Comes To AI: "I am not saying don’t innovate. I am not saying don’t dream and experiment and pilot new approaches to stay current with developments. What I am saying is be clear-eyed about the fact that LLMs and the products they power (like ChatGPT and Bard) are an emerging and experimental technology. This is not a critique, it’s an empirical reality, and one that early adopters must take to heart — especially those working in the public trust, caregiving, and high-stakes scenarios."
- The governance of artificial intelligence in Canada: Findings and opportunities from a review of 84 AI governance initiatives
- “AI” won’t solve accessibility: "In our tech-focused society, there is this ever present notion that “accessibility will be solved by some technology”. But it won’t. Making things accessible is a fundamentally human challenge that needs human solutions in human contexts"
- Whose job is it to protect Black people from AI? (Archived link)
- How LLMs work, explained without math: "In this article, I'm going to attempt to explain in simple terms and without using advanced math how generative text models work, to help you think about them as computer algorithms and not as magic."
- AI Governance Needs Sociotechnical Expertise: "Because real-world uses of AI are always embedded within larger social institutions and power dynamics, technical assessments alone are insufficient to govern AI. Technical design, social practices and cultural norms, the context a system is integrated in, and who designed and operates it all impact the performance, failure, benefits, and harms of an AI system. This means that successful AI governance requires expertise in the sociotechnical nature of AI systems."
- Readout of Justice Department’s Interagency Convening on Advancing Equity in Artificial Intelligence: Agencies discussed their efforts to safeguard civil rights through robust enforcement, policy initiatives, rulemaking and ongoing education and outreach, and noted accomplishments.
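One aside on the Knowing Machines quote above: the "781 years" figure is easy to sanity-check with rough arithmetic. Assuming a 52-week working year (my assumption; the dataset isn't named in the quote), it implies a training set of roughly 5.85 billion images, which is the scale of web-scale collections like LAION-5B:

```python
# Back-of-the-envelope check of the "781 years" figure quoted above.
# Assumption (mine, not the project's): one image per second, eight hours a day,
# five days a week, 52 working weeks a year.
seconds_per_work_year = 52 * 5 * 8 * 60 * 60  # ~7.5 million seconds of looking per year

implied_dataset_size = 781 * seconds_per_work_year
print(f"{implied_dataset_size:,} images")  # 5,848,128,000 -> roughly 5.85 billion
```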
Of Interest
- Code for America is launching AI Studio, a series of in-person and online workshops this fall. Find out more in the announcement.
- Through this newsletter, I got to know more about the UK-based folks at Newspeak House as well as Apolitical. Quick shoutout to Newspeak House for their always interesting events. Check out their fellowship program in London, too.
- Apolitical launched an online course with Google.org: Understand How AI Impacts You and Your Government
- As usual, Simon Willison has a great presentation on how he's been thinking about AI; slides and transcript from his talk at PyCon US are here. More than anyone else's, Simon's work helps me see how he uses AI tools and tricks to do things better. For me, that's far more exciting and useful than any amount of magic beans a chat-based LLM can spit up.
Send me news, information, thoughts! People I should meet or speak with to feel less depressed about the state of tech, and the world! On the personal front, I've stepped down from my role in municipal government and will be sharing more about what's next soon.