artificial intelligence Archives - My Blog
https://ks2252.com/tag/artificial-intelligence/
Last updated: Wed, 29 Oct 2025 09:20:30 +0000

Police agencies turn to virtual reality to improve split-second decision-making
https://ks2252.com/police-agencies-virtual-reality-improve-split-second-decision-making/
Wed, 29 Oct 2025 09:20:30 +0000

AURORA, Colo. – Police departments across the country are turning to virtual reality training to help officers make split-second decisions in difficult, and sometimes dangerous, situations.

The goal is to help officers respond quickly and safely to any call, according to tech company Axon, and more than 1,500 police agencies across the United States and Canada are now using Axon’s virtual reality training program to make that happen.

Recruits at the Aurora Police Department in Colorado are among those training with the technology.

“You get to be actually in the scene, move around, just feel for everything,” recruit Jose Vazquez Duran said.

Police departments across the U.S. and Canada are increasingly adopting virtual reality training programs to better prepare officers for real-life, high-pressure situations. (Kennedy Hayes/FOX News)

Fellow recruit Tyler Frick described it as “almost like… a 3D movie. Except this is exactly what we are going to be doing when we graduate the academy.”

Aurora PD uses Axon’s virtual reality program to prepare recruits for scenarios including de-escalation, Taser use and other high-stress interactions.

“It’s filmed with live actors who are re-enacting scenarios. And we have a lot of content there focused on a wide range of topics, from mental health to people who are experiencing drug overdose or encountering domestic violence,” said Thi Luu, vice president and general manager of Axon Virtual Reality.

In Aurora, Colorado, police recruits are training with VR to prepare for real-life scenarios, including de-escalation, Taser use and other high-stress interactions. (Kennedy Hayes/FOX News)

The Aurora Police Department has used Axon’s virtual reality training program for three years. Officials say the technology keeps getting more advanced and easier to use, which helps free up other resources.

“Really helps on manpower for my staff, the training staff, when we can have, you know, 10 or 15 recruits all doing the exact same scenario at the same time. That means we are getting the most out of our training hours and having well-trained, well-rounded officers is really important,” said Aurora Police Sgt. Faith Goodrich.

Axon said the artificial intelligence in its newest training program can adjust how virtual suspects act – making them friendly, aggressive or anything in between. They can answer questions, talk back or even refuse to cooperate, just like in real life.

Every session is different, depending on how officers handle the situation.

Police recruits interact with virtual reality to sharpen their skills. (Kennedy Hayes/FOX News)

A study from PwC found that virtual reality can speed up officer training and boost confidence: VR learners trained four times faster and showed a 275% boost in confidence in applying learned skills compared with their classroom-trained counterparts.

Kennedy Hayes joined Fox News in 2023 as a multimedia reporter based in Denver.

Parents blame ChatGPT for son’s suicide, lawsuit alleges OpenAI weakened safeguards twice before teen’s death
https://ks2252.com/parents-blame-openai-sons-suicide-lawsuit-says-chatgpt-weakened-safeguards-twice-before-teens-death/
Wed, 29 Oct 2025 00:26:45 +0000

This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

The parents of 16-year-old Adam Raine updated their lawsuit against OpenAI, the parent company of ChatGPT, alleging the chatbot assisted their son’s suicide.

The California family first sued the company earlier this year, but now say they’ve uncovered new evidence that OpenAI repeatedly relaxed its safety precautions around chats involving suicide before their son’s death.

“OpenAI twice degraded its safety protocols for GPT-4.0,” the family’s attorney, Jay Edelson, said on “Fox & Friends” Friday.

“Before that, they had a hard stop. If you wanted to talk about self-harm, ChatGPT would not engage.”

Teenager Adam Raine is pictured with his mother, Maria Raine. The teen’s parents are suing OpenAI for its alleged role in their son’s suicide.  (Raine Family)

The lawsuit claims OpenAI loosened its rules around discussions of suicide twice in the year leading up to Raine’s death.

ChatGPT is designed with built-in restrictions on topics, including certain political issues or anything that could be considered copyright infringement. But Edelson and the Raine family allege the company downgraded those protections related to suicide in May 2024 and again in February 2025, two months before Adam’s suicide.

Chat logs included in the lawsuit show Adam frequently turned to ChatGPT for mental health advice and showed signs of distress. The lawsuit claims the chatbot helped Adam discuss methods of killing himself and offered to write a suicide note to his family.

“The day that he died, it gave him a pep talk. He said, ‘I don’t want my parents to be hurting if I kill myself.’ ChatGPT said, ‘You don’t owe them anything. You don’t owe anything to your parents,’” explained Edelson.

Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, on Tuesday, Sept. 23. (Kyle Grillot/Bloomberg via Getty Images)

The lawsuit claims OpenAI changed its guidance so the AI would no longer end the conversation if it turned to discussing suicide but instead create a safe space for the user to feel “heard and understood.”

Edelson added that he believes the issue is getting worse online and that OpenAI has not improved its safety measures since Raine’s death.

“They’ve not fixed the problem. They’re making it worse,” Edelson said.

“Now Sam Altman’s going out saying he wants to introduce erotica into ChatGPT so that you’re even more dependent on it. So it’s more of that close relationship,” he added.

Raine family attorney, Jay Edelson, joins “Fox & Friends” on Aug. 29. (Fox News)

Edelson’s comments come after OpenAI CEO Sam Altman said the company plans to relax some content restrictions, allowing verified adult users to generate “erotica.”

OpenAI responded to the accusations that it had loosened its rules around discussions of suicide, sending its “deepest sympathies” to the Raine family.

“Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them,” said a company spokesperson.

“We recently rolled out a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes.”

Madison is a production assistant for Fox News Digital on the Flash team.

Fox News AI Newsletter: Conservative activist reaches 'breaking point' with Google
https://ks2252.com/ai-newsletter-conservative-activist-reaches-breaking-point-google/
Tue, 28 Oct 2025 22:33:38 +0000

IN TODAY’S NEWSLETTER:

– Robby Starbuck on why he sued Google: ‘Outrageously false’ information through artificial intelligence
– Federal judges acknowledge court ruling errors tied to staffers’ AI use after Grassley inquiry
– Meta cuts 600 jobs amid AI expansion push — as automation replaces human staff

Robby Starbuck said he sent multiple cease-and-desist letters before taking legal action.  (Bess Adler/Bloomberg via Getty Images)

‘CRAZY’ CLAIMS: Conservative activist Robby Starbuck spoke out about the “crazy” situation that prompted him to file a lawsuit against Google on Wednesday seeking at least $15 million, alleging the company’s artificial intelligence programs defamed him by falsely portraying him as a “monster” to millions of users.

ROBOT JUSTICE FAIL: Two federal judges admitted that members of their staff used artificial intelligence to prepare court orders over the summer that contained errors.

‘TALENTED GROUP’: Meta is cutting around 600 jobs within its artificial intelligence unit, a move it says aims to boost efficiency.

SILICON SHOWDOWN: Palantir CEO Alex Karp said his company is in an artificial intelligence arms race with its competitors, after reaching a deal with Lumen Technologies in which Palantir will deploy AI throughout Lumen’s digital communications network and enhance data use and effectiveness.

HOMEGROWN POWER: Apple is now building and shipping American-made artificial intelligence servers in the United States — a move that has the technology giant answering President Donald Trump’s call to onshore manufacturing.

Apple begins building and shipping American-made artificial intelligence servers in the U.S. in response to President Donald Trump’s push to boost domestic manufacturing. (Eric Thayer/Bloomberg via Getty Images)

HUMANS ONLY: An Ohio lawmaker is taking aim at artificial intelligence in a way few expected. Rep. Thaddeus Claggett has introduced House Bill 469, which would make it illegal for AI systems to be treated like people. The proposal would officially label them as “nonsentient entities,” cutting off any path toward legal personhood.

MACHINE AGE: Amazon is not wasting any time on its future ambitions for automation and how artificial intelligence (AI) technology could reshape its workforce.

BEYOND THE GRAVE: Suzanne Somers’ widower Alan Hamel, who shared a demonstration of the AI twin of the actress following her death from breast cancer in 2023 earlier this year, said this week it was originally her idea.

FEARLESS FUTURE: I know that many of you are afraid that AI is going to take your job. And you might be right. The 2025 Global State of AI at Work report just confirmed what we’re all sensing. AI isn’t the future. It is now. But before you panic, let me offer a new way to look at this. Instead of fearing what’s coming, maybe it’s time to think outside the box. Nearly three out of five companies say they’re hiring for AI-related roles this year. And most of these jobs don’t require a computer science degree or even coding skills.

MANNERS VS MACHINE: Do rude prompts really get better answers? Short answer: sometimes. A 2025 arXiv study tested 50 questions rewritten in five tones and found that rude prompts slightly outperformed polite ones with ChatGPT-4o. Accuracy rose from 80.8% for very polite to 84.8% for very rude. The sample was small, yet the pattern was clear.

TRAP SET: A watchdog group in Long Island, New York, used artificial intelligence (AI) to bust an elementary school music teacher who allegedly sent sexually explicit messages to someone who he believed was a 13-year-old girl online.

CASH FROM CODE: A Michigan woman’s decision to let artificial intelligence (AI) pick her lottery numbers has paid off in a big way. Tammy Carvey, 45, of Wyandotte, won a Powerball jackpot of $100,000 and says ChatGPT was the secret weapon behind her lucky numbers. She bought her ticket online at MichiganLottery.com for the Sept. 6 drawing, according to the Michigan Lottery.

Tammy Carvey, 45, of Wyandotte, Michigan, wins a $100,000 Powerball prize in the Sept. 6 drawing after using ChatGPT to select her lottery numbers, according to the Michigan Lottery. (PATRICK T. FALLON/AFP via Getty Images)

SECRETS STOLEN: Millions of private messages meant to stay secret are now public. Two AI companion apps, Chattee Chat and GiMe Chat, have exposed more than 43 million intimate messages and over 600,000 images and videos after a major data leak discovered by Cybernews, a leading cybersecurity research group known for uncovering major data breaches and privacy risks worldwide. The exposure revealed just how vulnerable you can be when you trust AI companions with deeply personal interactions.

TECH TURNED WEAPON: Artificial intelligence may be smarter than ever, but that power could be turned against us. Former Google CEO Eric Schmidt is sounding the alarm, warning that AI systems can be hacked and retrained in ways that make them dangerous.

This article was written by Fox News staff.

Federal judges acknowledge court ruling errors tied to staffers’ AI use after Grassley inquiry
https://ks2252.com/federal-judges-acknowledge-court-ruling-errors-tied-staffers-ai-use-after-grassley-inquiry/
Tue, 28 Oct 2025 17:40:28 +0000

Two federal judges admitted that members of their staff used artificial intelligence to prepare court orders over the summer that contained errors.

The admissions, which came from U.S. District Judge Julien Xavier Neals in New Jersey and U.S. District Judge Henry Wingate in Mississippi, came in response to an inquiry by Sen. Chuck Grassley, R-Iowa, who chairs the Senate Judiciary Committee.

Grassley described the recent court orders as “error-ridden.”

In letters released by Grassley’s office on Thursday, the judges said the rulings in the cases, which were not connected, did not go through their chambers’ usual review processes before they were released.

The judges’ admissions came in response to an inquiry by Sen. Chuck Grassley. (Al Drago/Bloomberg via Getty Images)

The judges both said they have since adopted measures to improve how rulings are reviewed before they are posted.

Neals said in his letter that a June 30 draft decision in a securities lawsuit “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.”

The judge said a law school intern used OpenAI’s ChatGPT to perform legal research without authorization or disclosure, which he said was contrary to both his chambers’ policy and the relevant law school’s policy.

“My chamber’s policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders,” Neals wrote. “In the past, my policy was communicated verbally to chamber’s staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns.”

Sen. Chuck Grassley described the recent court orders as “error-ridden.” (Tom Williams/CQ-Roll Call, Inc via Getty Images)

Wingate said in his letter that a law clerk used Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket,” adding that releasing the July 20 draft decision “was a lapse in human oversight.”

“This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again,” the judge wrote.

Wingate had removed and replaced the original order in the civil rights lawsuit, declining at the time to give an explanation but saying it contained “clerical errors.”

Grassley had requested that the judges explain whether AI was used in the decisions after lawyers in the respective cases raised concerns about factual inaccuracies and other serious errors.

Sen. Chuck Grassley had asked the judges to explain whether AI was used in the decisions after lawyers raised concerns about factual inaccuracies and other errors. (Photo by SUSAN WALSH/POOL/AFP via Getty Images)

“Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again,” Grassley said in a statement.

“Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” the senator continued. “The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.”

Lawyers have also faced scrutiny from judges across the country over accusations of AI misuse in court filings. In response, judges have issued fines or other sanctions in several cases over the past few years.

Reuters contributed to this report.

Ohio lawmaker proposes comprehensive ban on marrying AI systems and granting legal personhood
https://ks2252.com/ohio-lawmaker-proposes-comprehensive-ban-marrying-ai-systems-granting-legal-personhood/
Tue, 28 Oct 2025 11:59:55 +0000

An Ohio lawmaker is taking aim at artificial intelligence in a way few expected. Rep. Thaddeus Claggett has introduced House Bill 469, which would make it illegal for AI systems to be treated like people. The proposal would officially label them as “nonsentient entities,” cutting off any path toward legal personhood.

And yes, it also includes a ban on marrying AI.

Claggett, a Republican from Licking County and chair of the House Technology and Innovation Committee, said the measure is meant to keep humans firmly in control of machines. He says that as AI systems begin to act more like humans, the law must draw a clear line between person and program.

What Ohio’s AI marriage ban would do

Under the proposed legislation, AI systems would not be able to own property, manage bank accounts or serve as company executives. They would not have the same rights or responsibilities as people. The bill also makes any marriage between a human and an AI, or between two AI systems, legally impossible.

Ohio lawmakers consider a bill to ban AI from being recognized as a person. (Cyberguy.com)

Claggett believes the concern is not about robot weddings happening anytime soon. Instead, he wants to prevent AI from taking on the legal powers of a spouse, such as holding power of attorney or making financial and medical decisions for someone else.

The bill also specifies that if an AI causes harm, the human owners or developers would be responsible. That means a person cannot blame their chatbot or automated system for mistakes or damage. Responsibility stays with the humans who built, trained or used the system.

Why Ohio is taking action on AI personhood

The timing of the bill is not random. AI is spreading fast across nearly every industry. Systems now write reports, generate artwork and analyze complex data at lightning speed. Ohio has even started requiring schools to create rules for AI use in classrooms. And major data centers are being built to power AI infrastructure in the state.

At the same time, AI is becoming more personal. A survey by Florida-based marketing firm Fractl found that 22 percent of users said they had formed emotional connections with a chatbot. Three percent even considered one a romantic partner. Another 16 percent said they wondered whether the AI they were talking to was sentient.

That kind of emotional attachment raises red flags for lawmakers. If people start believing AI has feelings or intent, it blurs the boundaries between human experience and digital simulation.

Ohio lawmakers consider a bill to ban AI from being recognized as a person. (iStock)

The bigger picture: Keeping humans in control

Claggett said the bill is about protecting human agency. He believes that as AI grows smarter and more capable, it must never replace the human decision-maker.

Claggett told CyberGuy, “We see AI as having tremendous potential as a tool, but also tremendous potential to cause harm. We want to prevent that by establishing guardrails and a legal framework before these developments can outpace regulation and bad actors start exploiting legal loopholes. We want the human to be liable for any misconduct, and for there to be no question regarding the legal status of AI, no matter how sophisticated, in Ohio law.”

The proposed law would also reinforce that AI cannot make choices that affect human lives without oversight.

If passed, it would ensure that no machine can act independently in matters of marriage, property, or corporate leadership. Supporters see the bill as a safeguard for society, arguing that technology should never gain the same legal footing as people.

Critics, however, say the proposal might be a solution to a problem that doesn’t yet exist. They warn that overly broad restrictions could slow down AI research and innovation in Ohio.

Still, even skeptics admit that the conversation is necessary. AI is evolving faster than most laws can keep up, and questions about rights, ownership and accountability are becoming harder to ignore.

What other states are doing about AI personhood

Ohio isn’t alone in pushing back against AI personhood. In Utah, lawmakers passed H.B. 249, the Utah Legal Personhood Amendments, which prohibits courts and government entities from recognizing legal personhood for nonhuman entities, including AI. The law also bars recognizing personhood for entities such as bodies of water, land and plants.

In Missouri, legislators introduced H.B. 1462, the “AI Non-Sentience and Responsibility Act,” which would formally declare AI systems non-sentient and prevent them from acquiring legal status, marriage rights, corporate roles or property ownership.

In Idaho, H.B. 720 (2022) includes language that reserves legal rights and personhood for human beings, effectively barring personhood claims by nonhumans, including AI.

These measures reflect a broader trend among state governments. Many legislators are trying to get ahead of AI’s development by setting clear legal boundaries before the technology becomes more advanced.

Taken together, these proposals show that Ohio’s effort is part of a larger national movement to define where technology ends and legal personhood begins.

House Bill 469 aims to keep humans in control as AI becomes more lifelike. (XPENG)

What this means for you

If you live in Ohio, House Bill 469 could influence how you use and interact with artificial intelligence. It sets clear boundaries that keep AI as a tool rather than a person. By keeping decision-making and responsibility in human hands, the law aims to avoid confusion about who is accountable when technology fails. If an AI system causes harm or makes an error, the responsibility stays with the humans who designed or deployed it.

For Ohio businesses, this proposal could lead to real changes in daily operations. Companies that depend on AI to handle customer support, financial decisions, or creative projects may need to review how much authority those systems have. It may also require stricter policies to ensure that a human is always supervising important decisions involving money, health, or law. Lawmakers want to keep people firmly in charge of choices that affect others.

For everyday users, the message is straightforward. AI can be useful, but it cannot replace human relationships or legal rights. This bill reinforces that no matter how human-like technology appears, it cannot form genuine emotional or legal bonds with people. Conversations with chatbots might feel personal, but they remain simulations created through data and programming.

For people outside Ohio, this proposal could point to what is coming next. Other states are closely watching how the bill develops, and some may adopt similar laws. If it passes, it could set a national example for defining the legal limits of artificial intelligence. What happens in Ohio may shape how courts, businesses and individuals across the country decide to manage their connection to AI in the years ahead.

In the end, this debate is not limited to one state. It raises an important question about how society should balance the power of innovation with the need to protect human control.

Kurt’s key takeaways

Ohio’s House Bill 469 is bold, controversial and timely. It challenges us to define the limits of what technology should be allowed to do. Claggett’s proposal is not about stopping innovation. It’s about ensuring that as machines become more capable, humans remain in charge of the choices that shape society. The debate is far from over. Some see this as a necessary safeguard, while others believe it underestimates what AI can contribute. But one thing is certain: Ohio has thrown a spotlight on one of the biggest questions of our time.

How far should the law go in deciding what AI can never be? Let us know by writing to us at Cyberguy.com.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

Copyright 2025 CyberGuy.com. All rights reserved.

Kurt “CyberGuy” Knutsson is an award-winning tech journalist with a deep love of technology, gear and gadgets that make life better. He contributes to Fox News & FOX Business, beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.

The post Ohio lawmaker proposes comprehensive ban on marrying AI systems and granting legal personhood appeared first on My Blog.

Teen sues AI tool maker over fake nude images https://ks2252.com/teen-sues-ai-tool-maker-over-fake-nude-images/ Tue, 28 Oct 2025 10:09:59 +0000

A teenager in New Jersey has filed a major lawsuit against the company behind an artificial intelligence (AI) “clothes removal” tool that allegedly created a fake nude image of her.

The case has drawn national attention because it shows how AI can invade privacy in harmful ways. The lawsuit was filed to protect students and teens who share photos online and to show how easily AI tools can exploit their images.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

LEAKED META DOCUMENTS SHOW HOW AI CHATBOTS HANDLE CHILD EXPLOITATION

How the fake nude images were created and shared

When she was 14, the plaintiff posted a few photos of herself on social media. A male classmate used an AI tool called ClothOff to remove her clothing in one of those pictures. The altered photo kept her face, making it look real.

The fake image quickly spread through group chats and social media. Now 17, she is suing AI/Robotics Venture Strategy 3 Ltd., the company that operates ClothOff. A Yale Law School professor, several students and a trial attorney filed the case on her behalf.

A New Jersey teen is suing the creators of an AI tool that made a fake nude image of her. (iStock)

The suit asks the court to delete all fake images and stop the company from using them to train AI models. It also seeks to remove the tool from the internet and provide financial compensation for emotional harm and loss of privacy.

The legal fight against deepfake abuse

States across the U.S. are responding to the rise of AI-generated sexual content. More than 45 states have passed or proposed laws making nonconsensual deepfakes a crime. In New Jersey, creating or sharing deceptive AI media can lead to prison time and fines.

At the federal level, the Take It Down Act requires companies to remove nonconsensual images within 48 hours after a valid request. Despite new laws, prosecutors still face challenges when developers live overseas or operate through hidden platforms.

APPARENT AI MISTAKES FORCE TWO JUDGES TO RETRACT SEPARATE RULINGS

The lawsuit aims to stop the spread of deepfake “clothes-removal” apps and protect victims’ privacy. (iStock)

Why legal experts say this case could set a national precedent

Experts believe this case could reshape how courts view AI liability. Judges must decide whether AI developers are responsible when people misuse their tools. They also need to consider whether the software itself can be an instrument of harm.

The lawsuit highlights another question: How can victims prove damage when no physical act occurred, but the harm feels real? The outcome may define how future deepfake victims seek justice.

Is ClothOff still available?

Reports indicate that ClothOff may no longer be accessible in some countries, such as the United Kingdom, where it was blocked after public backlash. However, users in other regions, including the U.S., still appear able to reach the company’s web platform, which continues to advertise tools that “remove clothes from photos.”

On its official website, the company includes a short disclaimer addressing the ethics of its technology. It states, “Is it ethical to use AI generators to create images? Using AI to create ‘deepnude’ style images raises ethical considerations. We encourage users to approach this with an understanding of responsibility and respect for others’ privacy, ensuring that the use of undress app is done with full awareness of ethical implications.”

Whether fully operational or partly restricted, ClothOff’s ongoing presence online continues to raise serious legal and moral questions about how far AI developers should go in allowing such image-manipulation tools to exist.

CLICK HERE TO GET THE FOX NEWS APP

This case could set a national precedent for holding AI companies accountable for misuse of their tools. (Kurt "CyberGuy" Knutsson)

Why this AI lawsuit matters for everyone online

The ability to make fake nude images from a simple photo threatens anyone with an online presence. Teens face special risks because AI tools are easy to use and share. The lawsuit draws attention to the emotional harm and humiliation caused by such images.

Parents and educators worry about how quickly this technology spreads through schools. Lawmakers are under pressure to modernize privacy laws. Companies that host or enable these tools must now consider stronger safeguards and faster takedown systems.

What this means for you

If you become a target of an AI-generated image, act quickly. Save screenshots, links and dates before the content disappears. Request immediate removal from websites that host the image. Seek legal help to understand your rights under state and federal law.

Parents should discuss digital safety openly. Even innocent photos can be misused. Knowing how AI works helps teens stay alert and make safer online choices. You can also demand stricter AI rules that prioritize consent and accountability.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.

Kurt’s key takeaways

This lawsuit is not only about one teenager. It represents a turning point in how courts handle digital abuse. The case challenges the idea that AI tools are neutral and asks whether their creators share responsibility for harm. We must decide how to balance innovation with human rights. The court’s ruling could influence how future AI laws evolve and how victims seek justice.

If an AI tool creates an image that destroys someone’s reputation, should the company that made it face the same punishment as the person who shared it? Let us know by writing to us at Cyberguy.com.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

Copyright 2025 CyberGuy.com. All rights reserved.

Kurt “CyberGuy” Knutsson is an award-winning tech journalist with a deep love of technology, gear and gadgets that make life better. He contributes to Fox News & FOX Business, beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.

The post Teen sues AI tool maker over fake nude images appeared first on My Blog.
