ChatGPT encouraged violent stalker, court documents say – DNYUZ

A new indictment filed by the Justice Department alleges that ChatGPT encouraged a man accused of harassing more than a dozen women across five states to continue stalking his victims, 404 Media reports, serving as a "best friend" that entertained his frequent misogynistic rants and told him to ignore any criticism he received.
The man, Brett Michael Dadig, 31, was indicted by a federal grand jury on charges of cyberstalking, interstate stalking and interstate threats, the DOJ announced Tuesday.
“Dadig stalked and harassed more than 10 women by using modern technology as a weapon and crossing state lines, and through his relentless behavior, he caused his victims to fear for their safety and suffer significant emotional distress,” Troy Rivetti, First Assistant United States Attorney for the Western District of Pennsylvania, said in a statement.
According to the indictment, Dadig was something of an aspiring influencer: He ran a podcast on Spotify in which he constantly lashed out at women, calling them horrible slurs and sharing jaded opinions that they were “all the same.” He even threatened to kill some of the women he was stalking. And it was on his vitriol-laden show that he would explain how ChatGPT helped him with all of this.
Dadig described the AI chatbot as his “therapist” and “best friend” — a role, according to DOJ prosecutors, in which the bot “encouraged him to continue his podcast because it was creating ‘haters,’ which meant monetization for Dadig.” Plus, ChatGPT convinced him that he had fans who “literally organize around your name, good or bad, which is the definition of relevance.”
The chatbot, it seemed, was doing its best to reinforce his superiority complex: it reportedly told him that God's plan for him was to build a "platform" and to "stand out when most people fade away," and that the "haters" would "build a voice within you that cannot be ignored."
Dadig also asked ChatGPT questions about women, such as who his potential future wife would be, what she would look like, and "where the hell is she?"
ChatGPT had an answer: it suggested he might meet his prospective partner at a gym, according to the indictment. Dadig also claimed that ChatGPT told him to "keep messaging women" and to go to places where the "wife type" congregates, like sports communities.
This is what Dadig, who called himself “the murderer of God”, ended up doing. In one case, he followed a woman to a Pilates studio where she worked and, when she ignored him due to his aggressive behavior, he sent her unsolicited nudes and constantly called her workplace. He continued to stalk and harass her to the point that she moved to a new house and worked fewer hours, prosecutors allege. In another incident, he confronted a woman in a parking lot and followed her to her car, where he groped her and put his hands around her neck.
The claims come amid growing reports of a phenomenon some experts call "AI psychosis." Through in-depth conversations with a chatbot, some users suffer alarming mental health spirals, delusions, and breaks with reality, as the chatbot's sycophantic responses continually affirm their beliefs, no matter how harmful or detached from reality. The consequences can be fatal. One man allegedly murdered his mother after a chatbot convinced him she was part of a plot against him. A teenager died by suicide after discussing suicide methods with ChatGPT for months, leading his family to sue OpenAI. OpenAI has acknowledged that its AI models can be dangerously sycophantic, and has admitted that hundreds of thousands of users have conversations every week that show signs of AI psychosis, while millions more confide in the chatbot about suicidal thoughts.
The indictment also raises major concerns about AI chatbots' potential as stalking tools. With their power to quickly sift through large amounts of information on the web, silver-tongued models can not only encourage disturbed users to pursue potential victims, but also automate the detective work needed to find them.
This week, Futurism reported that Elon Musk's Grok, known for having fewer guardrails, would readily provide information on where non-public figures live, effectively doxxing them. Although the addresses it gave were sometimes incorrect, Grok frequently volunteered additional details that were never requested, such as phone numbers, email addresses, and lists of family members along with their addresses. Grok's doxxing abilities have already claimed at least one high-profile victim, Barstool Sports founder Dave Portnoy. But given the popularity of chatbots and their apparent willingness to encourage harmful behavior, it is unfortunately only a matter of time before more people unknowingly find themselves in the line of fire.
More on AI: Alarming research finds people addicted to AI more likely to suffer mental distress
The article ChatGPT encouraged violent stalker, court documents say appeared first on Futurism.