Aflac on Friday disclosed a cybersecurity incident in which personal information of its customers may have been compromised, making it the latest insurance provider to be targeted.
The health and life insurance firm said the attack on its U.S. network, which was identified on June 12, was caused by a “sophisticated cybercrime group,” but did not specify a name.
It said it was unable to determine the total number of affected individuals until a review, still in its early stages, is completed.
The company said it was able to stop the intrusion within hours and has engaged third-party cybersecurity experts to investigate the incident.
The company said the potentially impacted files contain personal information of its customers, such as Social Security numbers and health-related details.
Aflac offers accident and pet insurance plans in the U.S. and Japan. It manages personal, medical and financial data of more than 50 million policyholders.
Health insurers have been facing increased cybersecurity risks recently, with UnitedHealth’s breach last year, which impacted 100 million people, the most notable example.
UnitedHealth’s Change unit was breached by a hacking group called ALPHV, also known as “BlackCat,” which is estimated to have stolen the data of a third of Americans in one of the worst hacks to hit the U.S. healthcare sector.
Shares of Aflac fell 1.3% in premarket trading.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show “Game of Thrones.” In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges.