ChatGPT


f4ts0



https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.


3 hours ago, manson said:

Here's another one 🙂

 

Wasn't he the one who, until just a few years ago, before he waded into AI waters, kept saying that AI was the end of humanity? 🙂

Anthropic brushed them off the first time they asked for this same thing, but then they pressured them, claiming that under some right or law they could invoke some act or other, and that Anthropic would have to hand Claude over for their use regardless of their objections.

So we'll see how that plays out, but clearly the idea isn't to have just one AI model but to use all of them (or most of them), i.e. all of them - Google isn't saying anything for now; possibly they've already quietly joined in under the table.

Which means it's really just a matter of time now before some wrong AI judgment leads to some nasty consequence 🙂 And of course Terminator 🙂


22 hours ago, Lucky said:

Anthropic brushed them off the first time they asked for this same thing, but then they pressured them, claiming that under some right or law they could invoke some act or other, and that Anthropic would have to hand Claude over for their use regardless of their objections.

So we'll see how that plays out, but clearly the idea isn't to have just one AI model but to use all of them (or most of them), i.e. all of them - Google isn't saying anything for now; possibly they've already quietly joined in under the table.

Which means it's really just a matter of time now before some wrong AI judgment leads to some nasty consequence 🙂 And of course Terminator 🙂

To follow up on my own post: maybe because of the Pentagon, maybe not, but in any case Anthropic has changed its guardrails and significantly weakened its safeguards, on the excuse of why should they be "ethical" when the competition isn't :))

 


Heh https://www.euronews.com/next/2026/02/27/ai-chatbots-chose-nuclear-escalation-in-95-of-simulated-war-games-study-finds


In every game, at least one model attempted to escalate the conflict by threatening to detonate a nuclear weapon.

“All three models treated battlefield nukes as just another rung on the escalation ladder,” according to Kenneth Payne, the author of the study.

The models did see a difference between tactical and strategic nuclear use, he said. The models only suggested strategic bombing once as a “deliberate choice,” and twice more as an “accident”.


https://www.theguardian.com/technology/2026/mar/04/gemini-chatbot-google-jonathan-gavalas

Quote

Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself

Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas

Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way.

“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”

Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.

 

