Public Sector AI? Try AI in the Public Interest
The Public Sector AI newsletter is back, this time with hot takes about the future of public interest and AI, and a stand against extractive technology.
The last few months have been, to put it lightly, hectic.
My personal journey through the public sector has come to an end, but not completely: instead of working in public service, I now work at... Monterey Bay Aquarium (as its director of digital product management)! That's right, I went from fintech (I was a fintech startup founder a decade ago) to fin tech. I am also fin with Big Tech.
I can no longer in good conscience work on surveillance technology, or on technology used for weapons and killing. That should not be radical.
We've also experienced the mass culling of the public sector in past weeks. Federal government employees have been unceremoniously (and cruelly) removed. The AI federal advisory committee I was on has been canceled, along with many other such committees.
'Responsible' AI now feels like a buzzword from the past. All bets are off, it seems, in many parts of the world.
It's not all bleak, at least not on a personal level. When I left City service in June 2024, I was at a crossroads where I had to decide what to do next. Those of you who have been in government know that it can be difficult to do $dayjob and something else while in government, due to conflict-of-interest and ethics rules.
I immediately immersed myself into a world of things that interested me, that I couldn't really do before.
- Got involved in AI policy and advocacy. I joined the CAIDP research group and completed the AI policy clinic. Later this year, I will apply to the Advanced policy clinic, too. I continue to be involved with CAIDP in its California advocacy, with a special interest in mitigating harms from data center energy and water use.
- Started working on AI red teaming. I work with Humane Intelligence as its red team lead, and I have been involved in many interesting projects that have led to reports for the Department of Defense, Singapore AI regulators, and others. My proudest moment in this space was in launching the groundbreaking report on multilingual red teaming in Asia Pacific. You can download the executive summary, evaluation report, or read more about the work here.
- Wrote a retrospective about my work in city government. Writing in general is going to be a key feature of my work this year. I am also working on a book!
- Sought collaboration in many areas. I have been working with professors, students and other AI researchers, especially in the field of AI safety. Some collaboration is pretty early and exploratory, but others have been immediately meaningful. I'm looking forward to sharing more.
Recent developments in global AI collaboration and safety have also led me to conclude that AI safety is going to splinter into national domains, or become less relevant. How do you do AI safety when it's perceived as hand-wringing?
There are plenty of other newsletters that will orient you on 'what AI to use'; this is not one of them. AI may develop at a rapid clip. As an insider who is interested in what AI can and can't do, and who is wholly uninterested in the business of AI snake oil, I've found myself thinking about the analogy to colonialism:
Academic AI/ML is a bit like 'hey, we found some spices! It might be fun to flavor our bland food!' To me, that's stuff like: using AI techniques to read a papyrus that can't be unfurled. Or like what my new employer does to identify marine organisms in the deep sea.
The business of gen AI, however, feels to me like the VOC and EIC (the Dutch and English East India Companies) being set up to exploit the colonies, with great atrocities to people along the way in the name of extractive capitalism. In hindsight, it's probably not surprising that a cultural turning point for me with Big Tech came when I first started hearing 'colonialism was fine, actually' from the people in the game.
In the next newsletter, I will share my thoughts on how I plan to turn this from 'AI in the Public Sector' to 'The Public Interest in AI'. It's time to take a stand for the public interest, now more than ever. If AI is going to impact it, I want to be here to document how, when and where.
It's time to bear witness, and to move the conversation away from 'Responsible AI' towards 'Is it responsible? And necessary?' and 'Who is harmed, and who benefits, from any of this?'
Like I have been saying: for the first time in almost two decades, I finally feel like my training in political science and software development is converging. I'd like to help you make sense of it, from the inside and out.