TechTonic Justice's Kevin De Liban discusses how AI can cause problems in government

MICHEL MARTIN, HOST:

We don't know exactly how DOGE - Elon Musk's entity focused on cutting the size of the federal government - is using artificial intelligence, but some say employing it at all within a government agency is cause for alarm. Kevin De Liban is a public interest attorney who offered our co-host, Steve Inskeep, his perspective. He began by detailing a case he won against the state of Arkansas, which had denied some benefits to Medicaid recipients.

KEVIN DE LIBAN: I had several disabled and elderly clients who were part of a Medicaid program that would provide in-home caregiving assistance that folks needed to live independently and stay out of nursing facilities. And these were folks with severe conditions - quadriplegia, advanced multiple sclerosis, cerebral palsy, things that don't get better. And suddenly, the state decided to cut their care drastically - by 50% in some cases - dropping people from, say, eight hours a day of care to four or five. And this meant incredible suffering. People were lying in their own waste. They were getting bed sores from not being turned. They were being shut in. And what we found out is that the state had implemented an algorithmic decision-making system to decide how much care people actually needed, and that choice was what was propelling the cuts.

STEVE INSKEEP, BYLINE: Would you explain an algorithmic decision-making system? That just means there's a computer that's been given some parameters, and it says this person gets this many hours of care?

DE LIBAN: Yes, exactly. And some of these systems are based on really advanced forms of statistical analysis, as was the case in Arkansas. Others of these systems are just kind of haphazardly thrown together.

INSKEEP: What did you do about this?

DE LIBAN: So we launched multiple lawsuits that were successful in federal court and state court. We undertook a massive public education campaign to help the wider public understand why this was dangerous and harmful. We helped activate the community most affected by this - disabled folks, their allies, their families. And ultimately, we were able to win both in the courts and politically, with the Arkansas state legislature determining that this was too cruel even for Arkansas.

INSKEEP: So what do you think about when you bring this past experience to the news of recent weeks, as the Department of Government Efficiency - or DOGE - has been rummaging around in federal computer databases?

DE LIBAN: Well, I understand that most of the time when AI is being used for governmental purposes, it means that the public is likely to be harmed. I just haven't seen any instance where AI was implemented for governmental functions where that wasn't the case.

INSKEEP: What do you mean?

DE LIBAN: So every time government uses AI, people get hurt. When it's used for public benefits administration, people get cut. When it's used to reduce staff or increase efficiency, what ends up happening is that services are harder to access, wait times are longer, and there's more frustration getting accurate information. So AI use in government generally hasn't worked well, particularly when there are no regulations or safeguards to ensure that this stuff isn't used for harmful ends. And in the case of DOGE, it's actively being weaponized to target President Trump's enemies, you know, diversity efforts, science, journalism, people who are transgender, civil rights.

INSKEEP: I guess we should be clear that there's not a lot of transparency here. We're not entirely sure what DOGE is doing with various databases, and one of the stated purposes is to find waste or to find fraud. Could some kind of high-tech computer analysis of a database help to find those kinds of things?

DE LIBAN: No, not actually. So first of all, there are instances where governments have tried to use AI to detect fraud, for example, with unemployment benefits. Those were disastrous. A famous case in Michigan led to 40,000 people being falsely accused of having committed unemployment fraud - 93% of all the people accused were falsely accused. And that drove them to financial desperation and, in some cases, even self-harm. So we know that when AI systems are used for something like fraud detection, they're not capable of doing it well.

On top of that, Steve, there is a shell game here, which is that fraud has a specific legal definition, right? It's that you are intentionally trying to deceive the government to get benefits or a contract that you otherwise wouldn't. What's happening with DOGE is they are just going to program anything that the president disfavors to show up as fraud.

INSKEEP: Kevin De Liban is an attorney with TechTonic Justice, which advocates on AI issues. Thanks so much.

DE LIBAN: Thank you so much for having me, Steve.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.