Upskilling Your Workforce for the Age of AI

While there is a great deal of apprehension about AI and its impact on the workforce, public servants who embrace innovation and continuous learning can use the technology to deliver more efficient and effective services that better meet the needs of the residents they serve. Read a transcript of this conversation among Rick Maher, Seth Harris, and Beth Simone Noveck below.

On March 28, 2024, Seth Harris, Senior Fellow at the Burnes Center for Social Change and a former top labor advisor to President Biden, and Beth Simone Noveck, Director of the Burnes Center and Director of InnovateUS, talked with Rick Maher, President and CEO of Adaptive Human Capital, LLC, on behalf of NASWA’s Workforce Information Technology Support Center about upskilling your workforce for the age of AI.

The session, co-hosted by the National Association of State Workforce Agencies (NASWA) and InnovateUS, provided participants with practical advice and examples of how public servants can get started with using AI to improve the services they deliver to residents. Beth and Seth also addressed common concerns surrounding AI, including fears of job displacement and the challenges of integrating these technologies within existing systems. While there is a great deal of apprehension about AI and its impact on the workforce, public servants who embrace innovation and continuous learning can use the technology to deliver more efficient and effective services that better meet the needs of the residents they serve.

The following is a transcript of their conversation. The transcript has been edited for clarity.

 

Rick Maher: Why is now an important and critical time for us to focus on digital innovation and rethinking how governments deliver services?

Seth Harris: Well, I'll tell you something you all know, and that is there's a tremendous amount of skepticism among the public about the ability of government and other large institutions to solve problems. And I think government can, but like other large institutions, there needs to be a process of continuous improvement in government, and I think elsewhere. And that's especially true for institutions like the workforce system, particularly the UI [unemployment insurance] system, which is so closely plugged into environments and institutions where there's change. There's demographic change, there's economic change, there's change in job mix, there's change in skills mix. And so in order to keep up, we have to be continuously improving.

So I'm not an advocate for any particular tool or any particular strategy. In fact, I always used to say to my staff, at the Labor Department, don't fall in love with strategies – fall in love with outcomes and output.  I think these tools can help us to produce better and greater outcomes for the people that we serve, but you should only use them if they are going to accomplish that result. And if you keep your organization focused on continuous improvement, you're going to produce a better result for everybody involved. 

Beth Simone Noveck: I will just double down on what Seth is saying about the impact of technology and digital tools on how we deliver services in government. With the advent not only of AI, but now, in the recent past, of generative AI, that distance is only going to increase and that rate is only going to accelerate, as more and more institutions and organizations in the private sector, as well as the public sector, adopt these tools. They are helping people write in plain English, more easily write first drafts of documents, and more simply analyze data, which is helping them get better at the kind of performance improvement you're talking about, and to create and offer 24/7 services with chatbots, where people can ask a question and get an answer. So I think the charge is getting even more urgent for us to avoid falling behind and, in fact, to chart the route ahead, because the needs, as we know, are so dire. The needs that people are facing in their lives, that we're facing as a society, are so extraordinary, so important, that we have to do more and we have to do better. The good news is that with these tools, I think we have the opportunity to really close that gap, to do even more with less, and that's what I hope we're going to get into.

This feels like a transformational moment. The AI technology that’s emerging now just seems so big that it's kind of hard to ignore. Why have you been so focused on this particular emerging technology in your work on innovation in government – why AI? And how do public servants move beyond “window shopping” in their understanding of this technology, to really use AI as part of their work?

Noveck: I think the first thing we have to do is to get beyond the hype. And really, beyond the Doomerism. So there are a lot of headlines now about the robot apocalypse, or about how AI is going to destroy humanity. That's really obscuring, I think, a much more sober and simple conversation about what these tools really are. These are just data processing tools. They are really, really good at helping us to create or to analyze large quantities of data.

So in that sense, in many ways, despite the name and despite the hype, these tools aren't new. We've been talking for a long time about data and what it can do to help us improve performance. That, I hope, Seth is going to get into more for the sector that’s specific to him. But I think we should start by focusing on the fact that these are tools that help us make sense of data. Generative AI is simply AI whose software has been trained by ingesting a large number of words from the internet. So just like a child learns how to speak from listening to its caregivers and parents talk, and then repeats those patterns, that's what generative AI is doing. So it's really good at manipulating large quantities of words. And words are just another form of data. If we get past the “window shopping” and the headlines and hype, we can simply say to ourselves, “What problems are we trying to solve? Where can data help us to do that better and differently?”

And especially when we start to realize that data includes words, there are some very simple things we can do. And I'll just give you one quick example. In the state of New Jersey, as you know, where I'm from and where I've been doing work for the last several years, one of the things that we've done in our unemployment context is we've taken all the templates of letters that we send out to claimants, and we've used generative AI to help us rewrite those letters in plain English, always overseen and edited by a human. AI on tap, not on top. It's helping our very, very excellent public sector workers get faster and better at writing those plain English letters, virtually overnight decreasing by 50% the time it takes to read a letter and improving how quickly people respond and act on what we're asking them to do. Very simple thing. It's just about data.

I love the insight that AI is not here to replace people, but to augment them. And I think that's a very important principle for us to lay out for this audience today. Seth, going back to you, can you tell us more about how you see AI and its role in workforce development?

Harris: I want to echo what Beth just said: when you see words like “transformation,” it’s kind of scary. It implies the whole world is going to be different, and to me, that's not the right approach to take. I think Beth just gave a really case-specific example of how AI can help you perform your job better and your customers be served better. And I'll give you three others very quickly.

Beth already mentioned high-quality chatbots. You can interact with your customers at a much higher rate, with better information, and with less time spent pointing people to the right places, using AI and machine learning. There are a lot of examples out there. New Jersey has a great one, business.nj.gov, which I think is just a really, really good example of a chatbot.

I'll give you another example. All of us in the UI system wrestle with the problem of improper payments, and you all know how fraught that discussion is, how it's mischaracterized as checks going to dead people and to people in prison. That's not at all what improper payments are, except in a very small percentage of cases. It's actually a very complicated problem, and there are multiple causes of improper payments. Well, AI can analyze the mountain of data that you already have, from the NDNH [National Directory of New Hires], from tax records, from your own administrative records and your interactions, to flag when you are at risk of an improper payment, and also to assess your improper payment record and how to solve it. And maybe improper payments can then go down fairly dramatically.
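The kind of cross-matching described here can be illustrated with a deliberately simplified, rule-based sketch. The field names and data shapes below are invented for illustration; a real system would layer statistical or machine-learning models on top, but the core idea, joining active claims against a new-hires feed to flag risk, looks like this:

```python
# Simplified, rule-based sketch of improper-payment risk flagging:
# cross-match active UI claims against a new-hires feed.
# Field names ("ssn", "claim_id") are invented for illustration.

def flag_at_risk_claims(active_claims: list[dict], new_hires: list[dict]) -> list[str]:
    """Return IDs of claims whose claimant also appears in new-hire records."""
    hired = {h["ssn"] for h in new_hires}
    return [c["claim_id"] for c in active_claims if c["ssn"] in hired]

claims = [
    {"claim_id": "C-1", "ssn": "111"},
    {"claim_id": "C-2", "ssn": "222"},
]
hires = [{"ssn": "222", "employer": "Acme"}]
# flag_at_risk_claims(claims, hires) flags only claim C-2
```

A flag like this is a prompt for human follow-up, not a determination: as the discussion notes, most improper payments have complicated causes, so a matched record triggers review rather than automatic action.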

And then a third example: AI, I think, is going to help all of us, those of you who are involved in the RESEA [Reemployment Services and Eligibility Assessment] process in the UI system, and those of you in the workforce system who advise people about job training opportunities. We're going to be able to much better personalize training plans and job placement plans by using the data available to us about the searches people have done and the skill sets people have, and how those interact with certain training interventions. Not merely “Do you want to be a welder? Click here,” but also, “How do you learn? What do you learn well? What kind of institution do you learn well in? What kind of job are you best situated for?” You have mountains of data available in your systems, and there are other data out there in the world that AI can help you gather, that will allow you to give much higher quality advice to UI recipients in the RESEA process and to job seekers in the job search process in your One-Stops, or in whatever system you run in your state. And you're going to have better outcomes. And let me just say, in the UI system, that could end up saving you a bunch of money in benefits payments, as people go out and get good quality jobs.

Photo from Pixabay

I want to dig a little bit deeper into the value of generative AI in the workforce system. You have given us some good tips about the practical applications of this technology already. But it occurs to me that people are just really stressed about this, maybe because they feel like technology can replace them rather than augment their work. 

Noveck: I think it adds to that stress. You mentioned something at the outset, where you suggested that people consult whatever their state’s policy or rules are. A lot of states don't have policies or guidance or rules yet. So I think that adds to the concern. People are operating in a vacuum: in some cases, states have issued proactive policies, either banning the technology or saying you can go use it. In many cases, it's really silent, or there's a blanket sort of general pronouncement, or a task force that's created, but without specific agency-by-agency guidance. So people are wondering: “What is it that I can do?” Or even when there is guidance, there's the question of “Which tool can I go out and use? Do we have a license for it? What can I type into it? What can I not?”

So that uncertainty, that fear, I think is very widespread and very normal. And I would say the best way to overcome it is to try these tools for yourself at some point. We're not advocates for any one particular tool, but the great part about these generative AI tools is that they are driven by plain-English instructions. You don't have to be a programmer. In fact, there's no advantage, I think, to being a programmer. So whether it's OpenAI’s ChatGPT, or Anthropic’s Claude, or Google's Gemini, there are a lot of options out there, which adds to the confusion. But they all have free options that you should go out and try. They will also work on your phone. And if you don't know whether you can try them at work, then try them at home, or better yet, have your kid try them for you, and learn from how they use them, as they probably did with the internet and social media and other things. It's the best place to start. And it's only by trying them that I think you come to realize what these really are: sort of like a better word processor. And they're pretty nifty for that. There are great things they can do, and some things they're not too good at, but trying them helps to dispel a lot of the fear.

There’s a question in the chat that says that procurement can be difficult, and that folks responsible for approval of contracts are unfamiliar with AI. And I suspect that a lot of the fear is based on people being unfamiliar. As Beth points out, step one is to start to get familiar with these things, play with them yourself, personally, and get knowledgeable about them and demystify them.

Harris: So let me just say, there is no tool in the world that's going to unscrew the procurement process. Sorry. 

But let me tell you something: I'm going to start with something you definitely should not do. And that is: Don't tell your workforce that AI is going to be horribly disruptive and all their jobs are going to be obsolete. And I'm not making a joke. There are employers, including some in the public sector, who are saying a version of that. And they should not be saying that, because there's always stress and discomfort around new technologies being introduced into the workplace.

I introduced – and you're going to laugh at this because it's going to sound like the Stone Age – I introduced cloud-based email in the Labor Department. We were among the first in the federal government to have a cloud-based email system. And even though everybody in the department hated their existing email system, there was real resistance to moving to something new, because they didn't know what they were going to experience. They had heard things, they were worried about things: “Everything is going to be lost; I'm never going to be able to communicate.” And everything ended up working out fine. There was a transition process, and everything ended up working out fine.

Now, AI, as you introduce it with respect to particular functions, which is what I'm advocating for, is going to disrupt your system, as any kind of change, even a change in a business process, will do. But I want to emphasize a point: you can use it to empower your employees. You can use it to engage your employees. You can use it to show them that it will help them improve their performance inside the organization. And that's what I think most employees are looking for: that it can be constructive and not destructive. The best way to approach this is to be open about it. If you have unions, engage in a labor-management process to talk about how to introduce the technology into the workplace. If you unfortunately don't have unions (and I know that's not up to you all; that's somebody else's responsibility), be transparent. Engage folks. Welcome feedback. Let them suggest ideas about how AI can be used. You know, the idea that you're going to be the first to introduce your employees to AI is laughable. Many of them are already using it. Microsoft just rolled it out. A lot of companies are rolling it out. Even I have used AI, and I'm a Baby Boomer. I have used it to take pictures and to create content and to do other things. So if you engage, if you're transparent, and if you're positive about it, I think your employees will be maybe a little trepidatious but open to it. And if they see it's going to help them do their jobs, I think they're going to welcome it.

Beth, you've been using your voice nationally on this subject as part of a national dialogue on generative AI tools, and more broadly how to improve how governments deliver service. You recently testified to Congress on the subject of AI, and I note that Congress, just after that, passed bipartisan legislation to create an AI Task Force. What did you tell Congress about AI and what do we need to know about that? 

Noveck: So to be clear, there's correlation but not causation there, when it comes to the AI Task Force. But one of the interesting things about that task force is that it really does two things. 

So a lot of the conversation in Congress was about the external issues: the impact of AI more broadly on the workforce and the economy, and geopolitical issues with regard to competition between our tech industry and China's. Those were one range of what I'll call the external issues. But there was also a focus in Congress, and that hearing was squarely one of those opportunities, on what I would call internal innovation: How can AI help us improve how government works? And in the five minutes they gave me, among the things I doubled down on, one is the opportunity to use AI to create that more conversational government. To use chatbots to be able to deliver services 24/7. And to do that not just externally facing, the kinds of things we're all used to from businesses, where you type in a question and get a response, and increasingly governments are doing that, but to also use them internally. So in New Jersey, a lot of our call centers are using chatbots to help employees get answers to their questions, so that humans can then give a faster answer.

And I work with students here at the Burnes Center in our AI for Impact Program, where they are working with state agencies to build precisely such chatbots. For example, we’ve got a team working with the Massachusetts Department of Transportation to ingest their documents and use a chatbot to better train their own engineers. Again, not a public-facing thing, so it's very low risk, but it helps internally with training, letting people get answers to questions faster. We also talked a lot about how to use AI to make government more accessible. AI is really helpful here; again, it’s good at manipulating large quantities of words, and it's really, really good at translation. You think Google Translate is good? The whole new generation of translation tools is really unbelievable. So for example, Meta, the company formerly known as Facebook, has built tools, all free and open source, that can translate into and out of 100 languages in text and 35 languages in speech, whether speech-to-text or text-to-speech. Just a dramatic improvement in our ability to provide multilingual services to our residents and make government more accessible.

The third thing, and this is the point that you made earlier, is that it's really, really important that we use these tools to support our public workers, not to replace them. And there's research on this now by the yard that shows that workers using AI are more productive. AI by itself? Not too smart. Not so good. Humans working with AI: that's the powerful combination. It's that human writing a faster first draft, writing something in plain English, using it to translate. That's really where the magic happens. And so we have to be thinking about how we're combining things. You mentioned Microsoft. Their Copilot tools, for example, help people write code 50% faster with the help of AI. And again, you can't get rid of the humans in this case. But it does, to your point, Rick, give them better work-life balance and helps us push out those digital services even faster.

So in the interest of time, let me just say two more quick things I won't go into in any detail. We talked quite a bit and there were a lot of questions about using these tools to process and analyze data. And it's especially good at that. And that's really where the power is. And Seth already gave some of those examples. So I'll leave it there. 

And the last thing I'll mention is the ability now to use these tools to better communicate with and listen to citizens. So I mentioned at the outset that text is just another form of data. We're all used to getting comments from residents, on websites and elsewhere. You know, they're sometimes really voluminous. Nobody had the time to read them, yet people took the time to write to us. What we can do now, so much more easily, is actually make sense of those comments, because we can say, “Summarize those comments for me.” It takes one of those free, open models about a minute to analyze 75,000 words, the length of a solid novel. You can say, give me a summary. I can similarly do that with hundreds or even thousands of resident comments, when people are telling me, “Here's how I'd like my UI system to work better. Here's how you can improve the service. Here's how my experience was in the One Stop.” That's something we can actually make sense of, and use to change what information we give people, what services we give them, and how we respond to them.
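The comment-summarization step described above is, in practice, largely a batching problem: thousands of comments won't always fit in a single model call, so you group them under a size budget and summarize each batch. The character budget and prompt wording below are illustrative assumptions, not any particular model's actual limit.

```python
# Illustrative batching of resident comments for LLM summarization.
# The character budget stands in for a real model's context limit.

def batch_comments(comments: list[str], max_chars: int = 12000) -> list[list[str]]:
    """Group comments into batches that fit under a size budget.
    A single oversized comment still gets its own batch."""
    batches, current, size = [], [], 0
    for c in comments:
        if current and size + len(c) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(c)
        size += len(c)
    if current:
        batches.append(current)
    return batches

def build_summary_prompt(batch: list[str]) -> str:
    """One prompt per batch; batch summaries can then be summarized again."""
    return (
        "Summarize the main themes in these resident comments about our "
        "unemployment insurance services:\n\n" + "\n---\n".join(batch)
    )
```

With very large comment sets, the per-batch summaries can themselves be fed back through `build_summary_prompt` for a final roll-up, a common map-reduce pattern for long inputs.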

Photo from Pixabay

Seth, back to you. You're an expert in this workforce development system and you've seen it from every angle over the years. What advice do you have for us in the workforce system today? Not just about AI, but in the broader sense of how should leaders be approaching this time and approaching this need to innovate despite the fact that they're feeling under-resourced and overwhelmed?

Harris: If you accept my premise that these tools, like other tools, are just part of a process of continuous improvement, then I think the best advice I can give (and I’ll use awkward, fancy academic language first and then translate it into English using my AI brain) is to use an inductive process, not a deductive process. Don't declare, “We're going to transform everybody,” as though people are going to know what that means. And as leaders, you already know that. Start with something. Pick something where you think AI is actually going to help you do something better. A chatbot is a good example. But if you're nervous about using AI in a customer-facing context, as Beth said, use it for some kind of data analytics that you will use in your decision-making, and test it out that way. Or try it out on your improper payments analysis or some other thing where it's just a start, where you let your staff get a sense of what it is capable of doing.

One place to start is content creation, as Beth was saying. Another is data analysis: focusing on very particular functions, particular problems that you are trying to solve, and seeing whether an AI tool can help you do that. And let me just say, that’s where I started. I do a lot of writing. I started using ChatGPT, not for all of my content, but for some of it. It's good at some things, it's not good at others. Sometimes it gets things wrong. I have to oversee it, right? I've used it for graphics, and I've used it for other tasks. So I think start with examples, particular things, and then expand from there. And you know, maybe you'll have a transformation; maybe half the things will work and half the things won't. It's not going to work everywhere. Try it where you think it's going to work. That's a good way to get started.

Now we’re going to take some questions from the audience. One audience member says: Our state has heavily limited our predictive AI use and almost completely banned generative AI use. What advice do you have? How would you recommend we start a dialogue about lessening these restrictions in our state?

Noveck: I would point first to the fact that Ohio, New Jersey, California, and a longer list of states have all come out with policies, posted online, that you can point to. So there are a lot of states that have gone out proactively and said, we encourage people to use these tools. Now, to the extent this is coming from an agency saying, look, the people above us in the governor's office have been silent on this issue, or have issued a ban, what is it that we can do? I would say the place to start is, back to the point about picking something, to pick a low-risk use to discuss with people. That means not public facing. Don't start with a public chatbot; don't start with something that's going to involve people's personally identifiable information. Instead say, “Look, we could use this to help us draft things in plain English better. We could use it to help us with meeting transcription. We could use it to help us get a first draft of a translation into whichever are the biggest minority languages in this jurisdiction.” Again, working with a human translator, but helping us remedy what's often a shortfall; it's hard to get translation services.

I'll also point you to the fact that even Congress itself has given out 250 ChatGPT licenses to its staff. Again, as a pilot. It's not everybody, but as a pilot, they've given out licenses without much fanfare about procurement and said, go out and try it and tell us how you're using it. So pick low-risk uses, encourage a pilot, and remind them how accessible these tools are. The free ones are free, and even paid ones like Claude’s new version cost 20 bucks; no big-deal expenditure. It's a very de minimis cost to get a few people trying it for a month, and then sit around the table and have a conversation about what we can do. So, long story short: encourage a conversation around a set of low-risk uses with leadership, and report back.

One other idea, too: I think there's a role for NASWA to play in this dialogue. There are states already doing this, right? So somebody in a state with a restrictive policy can go to their bosses and say, “Hey, this other state is using this tool that you prohibited me from using to accomplish the following things. Wouldn't we like to have that too? Could we modify the policy to allow that use and then try it for a while?” The inductive approach, starting with particular functions, may be a way to break the barrier. Fighting at a high level of abstraction is not going to work.

Another question from the audience: What are the best methods to monitor, to ensure AI tools are not having disparate impacts on customers and stakeholders? We hear a lot about how AI is going to harm people. What's your best advice here?

Noveck: I think it's a great question. Obviously, this is one of the risks that we have to mitigate as we adopt these tools. We've seen this, or you'll see it when you play with them. If you type into ChatGPT's image generator tool, which is known as DALL-E, “Give me a picture of a group of scientists,” it will very likely spit out a picture of white men with beards sitting around a table. I just saw someone post exactly this picture the other day. By contrast, Google, in an effort to mitigate that problem, overcorrected. And now if you say to it, give me a picture of a group of Nazis, it will show you all African Americans. As we know, there were not too many African American German Nazis in the 1930s. Not a big demographic. You have to work hard to get it to give you a white man. They have, of course, now walked that back and are working on it. So, since these tools are very, very new, they are, as we know, trained on data, and that data reflects a lot of our human biases. So we have to be careful about these things.

Again, they're evolving. They're getting better. And the more that we demand changes from the companies that make these tools, the more that we call out these problems of bias, the better the tools are going to get. But we also have to be sure, again, that it's always AI on tap, not on top. That is to say, we are exercising control over how we use them. We're not just using something the AI has written, whether it's a picture, data, or text; instead, we're using these tools the way we would use a good research assistant, to help us but not to have the last word on how we do things. Making sure that we're not being exclusionary is going to depend on us exercising our good judgment as public professionals.

If people want to get access to more InnovateUS resources, and start to upskill themselves or others, where should they go?

Harris: I'd say go to the InnovateUS website. There's a very nice collection of available courses. There is also a place where you can contact us if you have a question about it. Sometime in the next few weeks – we actually don't have a schedule for this, but my expectation is, in the next few weeks, maybe a month – we're going to roll out a course that's specifically directed at this issue. We already are teaching people about AI for public servants. So there's already some general material that’s available. But we're talking about a course that will be a little bit more targeted to workforce folks that will give you really, I think, a very good introduction to AI and its many different uses. There are lots of great resources readily available for you there.

Noveck: If you want to hear more from Seth, you should also follow the Power at Work Blog. There’s really great content there as well about workers, unions, and building worker power. InnovateUS also has weekly workshops on AI and will soon be launching an at-your-own-pace course. We welcome outreach from leaders in agencies who are looking at how they can use these resources for staff training. We're very happy to have that conversation, all free.

InnovateUS provides no-cost, at-your-own-pace, and live learning on data, digital, and innovation skills for public servants like you. To learn more about our course offerings and workshops, visit innovate-us.org.