FTC to Review AI Chatbot Risks With Focus on Privacy Harms

The US Federal Trade Commission plans to study the harms to children and others of AI-powered chatbots like those offered by OpenAI, Alphabet Inc.’s Google and Meta Platforms Inc., according to people familiar with the matter.


The study will focus on privacy harms and other risks to people who interact with artificial intelligence chatbots, the people said. It will seek information on how data is stored and shared by the services, as well as the dangers people can face from chatbot use, said the people, who asked not to be identified discussing the unannounced study.


The FTC didn't immediately respond to a request for comment. A White House spokesperson didn't comment specifically about the FTC study, but said the agency is proceeding with user safety in mind as the administration hosts an artificial intelligence event with industry leaders Thursday.


“President Trump pledged to cement America’s dominance in AI, cryptocurrency and other cutting-edge technologies of the future,” White House spokesperson Kush Desai said in a statement. “FTC Chairman Andrew Ferguson and the entire administration are focused on delivering on this mandate without compromising the safety and well-being of the American people.”



Chatbot developers face intensifying scrutiny over whether they’re doing enough to ensure the safety of their services and prevent users from engaging in dangerous behavior. Last week, the parents of a California high school student sued OpenAI, alleging that its ChatGPT isolated their son from family and helped him plan his suicide in April. The company has extended its sympathies to the family and is reviewing the complaint.


Regulatory Scrutiny


The FTC’s plans underscore regulators’ interest in the exploding use of artificial intelligence despite recent administration directives that the technology be allowed to grow unimpeded with a lighter regulatory touch. In July, the White House issued guidelines urging agencies including the FTC to show more restraint in probes involving AI and stand down on cases that put innovation at risk.


The White House is hosting tech industry leaders Thursday, including Meta’s Mark Zuckerberg, Apple Inc.’s Tim Cook, OpenAI’s Sam Altman and Microsoft Corp.’s Satya Nadella, for an artificial intelligence event hosted by First Lady Melania Trump.


OpenAI declined to comment and pointed to a Tuesday blog post outlining actions it's taking. Meta declined to comment. The company has recently taken steps aimed at ensuring that chatbots avoid engaging with minors on topics including self-harm and suicide. Alphabet didn't immediately respond to a request for comment.


The first lady announced last month that she was launching a presidential challenge to encourage students to use emerging AI technology to find solutions to community challenges. The effort will also encourage educators to adopt AI in the classroom, the White House has said.


The agency plans to conduct the study under its so-called 6(b) authority to compel companies to turn over information to help it better understand a particular market or technology. The FTC will seek information from the nine largest consumer chatbots, the people said. Those include OpenAI's ChatGPT and Google's Gemini, among others.


AI Startups


Other recent FTC studies include an examination of tech giants’ investments in AI startups and a study on drug pricing. The agency generally issues a report on its findings after analyzing the information from companies.


FTC Commissioner Melissa Holyoak called for such a review at an agency event in June, saying the effort should explore potential online harms to children including the use of “addictive design features” and the erosion of privacy protections.


Holyoak said at the event that the agency should look at “generative artificial intelligence chatbots that simulate human communication and effectively function as companions.” She cited reports of “alarming” interactions with young users, including “providing users instructions for committing crimes, encouraging them to commit suicide, self-harm or harm to others, and discussing and role-playing romantic or sexual relationships.”


The FTC’s Ferguson said in an interview with Bloomberg Television last month that AI companies “need to be honest about how they’re describing their products to consumers.”


The Wall Street Journal earlier reported on the planned study.
