
Want to refuse to use AI? It's easier when the AI has to be labeled

A red "STOP AI" protest flyer with meeting details is taped to a lamp post on a city street in San Francisco, California, May 20, 2025.

Smith Collection/Gado/Getty Images



Utah and California have passed laws requiring entities to disclose when they use AI. More states are considering similar legislation. Supporters say the labels make it easier for people who don’t like AI to refuse to use it.

"They just want to be able to know," says Margaret Woolley Busse, executive director of the Utah Department of Commerce, which is implementing a new state law requiring state-regulated businesses to disclose when they're using AI with their customers.

"If that person wants to know if it's human or not, they can ask. And the chatbot has to say so."

California adopted a similar law on chatbots in 2019. This year, it expanded its disclosure rules, requiring police departments to disclose when they use AI products to help write incident reports.

"I think AI in general and police AI in particular really thrives in the shadows and is most successful when people don't know it's being used," says Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, which supported the new law. "I think labeling and transparency is really the first step."

As an example, Guariglia cites San Francisco, which now requires all city departments to publicly report how and when they use AI.

Such localized regulations are the kind of thing the Trump administration has tried to preempt. White House "AI czar" David Sacks has spoken of a "state regulatory frenzy that harms the startup ecosystem."

Daniel Castro, of the industry-backed think tank Information Technology & Innovation Foundation, says AI transparency can benefit markets and democracy, but it can also slow innovation.

“You can think of an electrician who wants to use AI to communicate with their customers…to answer questions about their availability,” Castro says. If companies have to disclose the use of AI, he says, “maybe it will turn off customers and they won’t really want to use it anymore.”

For Kara Quinn, a homeschooler in Bremerton, Washington, slowing the spread of AI sounds appealing.

"I think part of the problem is not just the thing itself, but how quickly our lives have changed," she says. "There are perhaps things I would embrace if there were a lot more time for development and implementation."

Right now, she’s changing her email address because her longtime provider recently started summarizing the content of her messages with AI.

“Who decides that I won’t read what another human being wrote? Who decides that this summary is actually what I’m going to think of their email?” Quinn said. “I value my ability to think. I don’t want to outsource it.”

Quinn's attitude toward AI caught the attention of her sister-in-law, Ann-Elise Quinn, a supply chain analyst who lives in Washington, D.C. She hosts "salons" for friends and acquaintances who want to discuss the implications of AI, and Kara Quinn's objections to the technology inspired the theme of a recent session.

"How can we opt out if we want to?" she asks. "Or maybe [people] don't want to opt out, but they at least want to be consulted."
