A few wild predictions about AI in government
Adrianna Tan writes a few predictions about how AI will impact government and government jobs.
I have never understood why so many technologists in government call themselves 'civic tech' instead of 'government employees'. Maybe it's to be more inclusive of tech folks who work in public sector software contracting.
I still prefer 'software in government' as a term. There's no shame in being a government, or government-adjacent, employee.
Contrast this with the folks from the UK or Singapore, where 'government' features more transparently in team names and job titles: 'Government Digital Services', 'GovTech'.
Words have power, and how we choose to call something also has power. At this point, in mid 2024, I feel that many observers of AI / government are dancing around the thesis, "AI is going to bring massive change to government."
Yeah? How so? Who will bring it? What is going to happen exactly? What are the benefits? What does it even... mean? What AI are you even talking about? GenAI? What about the other stuff?
Now that I am not in government anymore, I feel I can speak a little more freely. So here's a bunch of wild predictions. I'd love to hear what you think, and what you think might be accurate, or totally off base.
AI jobs in government are going to have to fit strictly within existing job categories
At least here in the United States. And from there, we can extrapolate how AI will impact careers, career paths and growth, and perhaps even how teams and projects will be funded in the future.
Focusing only on the software aspects of AI in government, my broad prediction and analysis (supported by a brief look at all jobs with 'artificial intelligence' or 'AI' in their job description or title over at usajobs.gov) is that AI jobs will coalesce into the following groups. These provide an AI-parallel to existing career tracks.
C-level roles: we will see an expansion of roles to include Chief Artificial Intelligence Officer type roles in organizations that have decided they are going to double down on AI.
This role will be like a CIO, but with additional (or separate) AI responsibilities. No one knows what the requirements really are going to be, everyone's just making it up, it's going to be defined by the first few people who become CAIOs who will then define the shape of all future CAIOs.
I expect these roles to be filled by 'AI boosters': true believers who are convinced that adding AI to anything will help with everything.
They will probably only advocate for Microsoft products in this, as in other, areas.
Data scientist roles: this one is interesting! I expect to see a larger expansion of data scientist roles in order to staff up on more AI-focused projects.
I don't expect the shape or job description for such a person to be structurally different from existing data-scientist jobs in government, but they will be staffing more projects that are funded specifically to test or use AI-specific workflows or experiments.
IT Specialist / Technology Expert (AI) roles: people who are going to have to implement AI projects, just like they already implement existing technology projects. It just happens to have AI magic and stardust now. To these folks, it's still stakeholders to manage, deadlines to meet, but they're going to also now know a thing or two about large language models or algorithms (or at least how to know when something has gone terribly wrong).
Administrator roles: I don't expect administrators in government to change substantially, except that existing administrators should probably attend some training on data ethics, algorithmic bias, and the legalities of all of this, so that they know when to call off a project and how to hire leadership for these roles. Especially important since, who has the qualifications for most of this work anyway? (Other than most of the data scientists!)
AI-informed privacy counsel roles: it makes sense that we will see more legal roles requiring additional training in AI, privacy, and ethics.
Enforcement of AI audits and AI bias rules will not be as staffed up as we need
It's just one of those things everyone agrees is important. But it will probably not be as staffed up or powerful as needed.
A recent paper on how New York City will audit algorithms did not inspire much confidence. As cities around the United States tighten their belts this fiscal cycle and the next, it's highly unlikely that we will see much enforcement beyond lip service and opt-in or voluntary reporting.
We will see a divergence between AI Corps teams and Digital Services teams
To go even further, maybe we will see more organizations like DHS's AI Corps form, just as Digital Services teams formed around the world in the past decade or so. There will be C-level executives who will be expected to have managed and run AI-related P&Ls and hiring. For everyone else, the work might take a while to change in any substantial way.
Culture struggle between enterprise IT teams and digital services teams
Governments, outside of a handful of institutions that are hiring AI specialists and interested in training their own data models, are probably going to become users of existing AI models, probably sold to us by Microsoft (let's be real, it's not like we can easily buy other stuff).
It's going to be enterprise IT teams who get some form of AI services from a large vendor, and who will then be looking for things to do with it. Just as government employees have had access to various vendor-provided low/no-code tools, that doesn't mean it's the right tool, and it doesn't mean there will be the right assortment of skills and people to use the tools to make the right thing, either.
Some digital services teams, who build software for the public, may have different opinions about using technology just to use technology. At some point, these groups will have to find alignment. Perhaps the newly formed AI Corps types of leaders will get involved with both groups.
Not the revolution we are told to expect
I have also been saying this in private meetings and presentations: fundamentally, I think jobs will remain quite similar. It's not the sea change we are hearing we should expect. My personal prediction is that digital jobs in government will continue to hire product, program, and project people (what's the difference? You're in luck, I'm writing a book about exactly that!); software people, maybe software architects; designers, researchers, content strategists, and directors and C-level people in those areas.
But instead of 'has experience shipping web applications' there might also be a requirement in the near future to 'know what AI is about' or to have had experience in using AI systems or tools or to have run AI-adjacent projects.
The best thing governments can do is to center the public, and be extremely skeptical of vendor claims
And especially marginalized communities.
I understand that there is plenty of appetite right now for AI-related experiments and tasks and tools. But no experiments are worth gambling any amount of public trust; and in these increasingly difficult times for democracy, this is more important than ever.
This is the framework I have been using:
- does this (thing) solve an existing problem?
- what is (this problem)?
- what is (thing)?
- what are the privacy risks?
- who (government agency, vendor, or specific communities) will benefit from (this)?
- can it be done without AI?
An even easier rule of thumb: in one sentence, why are we using AI? If someone says 'it's for productivity', ask them: how, exactly? Productivity gains have not been proven, and it has even been shown that use of AI has led to greater burnout. In government, where it is not easy to hire staff or to change their job requirements, staff retention is more important than ever.
In April this year, I gave a presentation to some govtech teams in the Singapore government: centering marginalized communities in a world of AI. In it, I made the wild claim that a lot of the benefits we are hoping to see from AI can probably be brought about more cheaply and more effectively by writing better content.
If you have questions, comments about any of this, I'd love to hear from you. I am also available for consultations with individuals and companies. Not all of my claims are wild. Just a few of them.