Want to opt out of AI? State labeling laws might help

Red STOP AI protest flyer with meeting details taped to a light pole on a city street in San Francisco, California on May 20, 2025.
Smith Collection / Gado/Getty Images
Utah and California have passed laws requiring entities to disclose when they use AI. More states are considering similar legislation. Proponents say labels make it easier for people who don't like AI to opt out of using it.

"They just want to be able to know," says Utah Department of Commerce executive director Margaret Woolley Busse, who is implementing new state laws requiring state-regulated businesses to disclose when they use AI with their customers.

"If that person wants to know if it's human or not, they can ask. And the chatbot has to say."

California passed a similar law regarding chatbots back in 2019. This year it expanded disclosure rules, requiring police departments to specify when they use AI products to help write incident reports.

"I think AI in general and police AI in specific really thrives in the shadows, and is most successful when people don't know that it's being used," says Matthew Guariglia, a senior policy analyst for the Electronic Frontier Foundation, which supported the new law. "I think labeling and transparency is really the first step."

As an example, Guariglia points to San Francisco, which now requires all city departments to report publicly how and when they use AI.

Such localized regulations are the kind of thing the Trump Administration has tried to head off. White House "AI Czar" David Sacks has referred to a "state regulatory frenzy that is damaging the startup ecosystem."

Daniel Castro, with the industry-supported think tank Information Technology & Innovation Foundation, says AI transparency can be good for markets and democracy, but it may also slow innovation.

"You can think of an electrician that wants to use AI to help communicate with his or her customers ... to answer queries about when they're available," Castro says. If companies have to disclose the use of AI, he says, "maybe that turns off the customers and they don't really want to use it anymore."

For Kara Quinn, a homeschool teacher in Bremerton, Wash., slowing down the spread of AI seems appealing.

"Part of the issue, I think, is not just the thing itself; it's how quickly our lives have changed," she says. "There may be things that I would buy into if there were a lot more time for development and implementation."

At the moment, she's changing email addresses because her longtime provider recently started summarizing the contents of her messages with AI.

"Who decided that I don't get to read what another human being wrote? Who decides that this summary is actually what I'm going to think of their email?" Quinn says. "I value my ability to think. I don't want to outsource it."

Quinn's attitude to AI caught the attention of her sister-in-law, Ann-Elise Quinn, a supply chain analyst who lives in Washington, D.C. She's been holding "salons" for friends and acquaintances who want to discuss the implications of AI, and Kara Quinn's objections to the technology inspired the theme of a recent session.

"How do we opt out if we want to?" Ann-Elise Quinn asks. "Or maybe [people] don't want to opt out, but they want to be consulted, at the very least."

Copyright 2025 NPR

Martin Kaste
Martin Kaste is a correspondent on NPR's National Desk. He covers law enforcement and privacy. He has been focused on police and use of force since before the 2014 protests in Ferguson, and that coverage led to the creation of NPR's Criminal Justice Collaborative.