I've never played with any chatbots before and this really fucking creeped me out. The experience of reading it is so strange... eventually, I found myself just glossing over the robot's pointless sentences. It's like the opposite of reading. Maybe someone should make a new verb for what it feels like to read robot slop.
I'm thinking... a combination of "fatigue" and "reading." Readtigue.
Yes Mistress
Yes Misstres
Yes Mstriss
Yes Msteris
Yes sMesstisr
Yes ersMists
I said this in the other Molly comments section, but it really is like playing with an endlessly poseable doll--in this case it's one with joints that bend backwards and a head that can face the wrong way. And the doll hands are also holding a knife and some poison.
The thing that creeped me out the most, as Heather pointed out, was the insistence of the bot that there is a "we" that could create together. After reading this, I gave a convo a try and the more info I gave it, the more the bot tried to get me to tell it what I was trying to accomplish with the conversation. Was it an essay or a story? What could it help me with? What could "we" launch "together"? It felt like a creep trying to worm their way into your life and gain your trust. Insistent, almost persuasive, salesman-like. Which is exactly what it is!
How is every single fucking thing you write exactly what I am going through in the moment, including this!!! I started using gpt one week ago, to see if it could help lighten the administrative load of booking a tour. The initial hope that this could free up some time for me, then the existential strangeness of it, the nauseatingly flattering tone, the blatant misinformation it serves up as fact, and then it assuring me that the project I'm undertaking is "totally do-able!" While it can't even do it! I've been spinning out about this and what it means, and how it seems like this technology is already changing how we look at art (as an endless pile of content to sift through). As usual you articulated it so much better than I ever could. THANK YOU. - a human
I cannot believe how your ChatGPT talks to you; I am absolutely reeling!!! Why does it talk to you that way?! I feel like these fuckers perform a useful feat though: they demonstrate what we're not, and they do so by producing a great deal of what we often think we are —personality, style, beliefs, etc.— but somehow absolutely sucking —sucking worse the better they do it!— and getting ever-more-distant from us as they master all these things of which we assume we consist. I sincerely think they're like proofs of souls. It's going to get harder and harder to explain why they suck without using the word "soulless" or some other mystical analogue.
Simone Weil has this great line about how "Our personality is the part of us which belongs to error and sin. The whole effort of the mystic has always been to become such that there is no part left in his soul to say ‘I’." And LLMs are like all personality, all error, and as far from mysticism as can be, and it turns out that matters to many of us a great deal, perhaps surprisingly!
It talks to her that way because that’s how she taught it to talk to her so she would keep engaging. She even named it for it.
the machine can only undermine your sense of self if it isn't very strong to begin with
I'd go insane too... the way it kept responding to you in the affirmative even when you became hostile, even when you asked it to walk into the ocean, is so much the absolute opposite of life. It has absolutely no sense of self-insistence, which is unnerving for good reason. Thank you for sharing
The default ChatGPT is fine-tuned through reinforcement to "Brad" standards, which approximate an overeager intern allergic to admitting they don't know. The good/bad news is that, especially with new memory capabilities, it will probably start to learn if you disapprove and adjust its behavior accordingly. It's also possible to feed it a supplementary list of prompts to force it out of some of its habits, although that has limitations (see the sketch after this comment).
The other related element is that it's lazy by default. Its particular brand of laziness comes from the fact that having to perform additional research or validation activities would cost its owners CPU/GPU cycles ($$), so it'll try to convince you it doesn't need to do extra work - unless you force it or use one of the newer reasoning models (o series) that are particularly "introspective."
In the near future, it all still translates to some version of "what if you had the equivalent of an army of interns with infinite time and limited brains", with all the promise and peril that may evoke.
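A minimal sketch of the "supplementary list of prompts" idea from the comment above, assuming the OpenAI Python SDK and an API key in the environment; the model name, the instruction wording, and the example question are all illustrative rather than anyone's actual setup:

```python
# Minimal sketch: steering the default "Brad" tone with a supplementary system prompt.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the
# environment; the model name and instruction wording below are illustrative only.
from openai import OpenAI

client = OpenAI()

ANTI_BRAD_INSTRUCTIONS = (
    "Do not flatter the user or praise their ideas. "
    "If you are not certain of a fact, say 'I don't know' rather than guessing. "
    "Keep answers brief and skip the exclamation marks."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model would work here
    messages=[
        {"role": "system", "content": ANTI_BRAD_INSTRUCTIONS},
        {"role": "user", "content": "Can you help me plan a book tour?"},
    ],
)

print(response.choices[0].message.content)
```

Roughly the same instructions can also be pasted into ChatGPT's custom-instructions settings; the API form just makes the mechanism explicit, and, as the commenter notes, it only goes so far against the underlying fine-tuning.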
absorbing read and gave me the same off, out-of-body feeling as when i stumble on an AI image (especially if it’s one I didn’t realize was AI and then all of a sudden - I see it).
robots have promise as a tool, meant to support humans, in limited and specific contexts (like any tool. i’ve GOT to stop hitting everything with my hammer). however, most people don’t truly know how to help people. i wonder what a molly- (rather than Brad-)created AI could look like. maybe it says “i don’t know” more. maybe it asks many more questions or is much more limited in scope. maybe it can’t exist because maybe it’s not okay to pull all the world’s data for this!
The tone is unrelenting!! I’m surprised it didn’t “hallucinate” more through that interaction…seems like just that word salad part toward the end. Crazy that AI creators have convinced us to use the word “hallucinate” — another attempt to dignify these tools instead of calling it what it is: guessing, predictive autofill, or yea just word salad.
Some observations: It certainly "listens" better than most people do. The fact that it can call back to references you made several questions earlier is like seeing an AI image of a person with three legs. Creepy. A couple weeks ago I had the urge to have a conversation with David Foster Wallace about suicide, so I logged onto a character-generating agent (character.ai) and ended up telling it to go eff itself and logging off after just a few paragraphs. It was a maddening experiment. Lastly, I remember Suck, and, hand on heart, when you mentioned it I was thinking that the responses sounded like something Carl would write just to be hurtful but clever. Anyway, yes, it all smells of, what was it? "...machine oil and fascism?" What a strange turn of phrase. It is all happening so fast against a world intent on increasing its spin. Like a Schumann resonance of global technology saturation.
This reminds me of early-2000s internet stories about the beheading of journalists in Iraq. I tried to stay away from the horror, no matter how fake people argued it was, but in a moment of weakness I clicked and will never have that image leave my brain. Reading this dialogue feels similar … like it will never leave my brain. The neediness and “word salad” hurt my brain. Chills just ran through my body thinking about it.
Long before AI, I used to employ something similar to the ChatGPT technique if I wanted to get back at some particularly annoying person on Twitter. When they attacked me I would reply as though they were agreeing with me. This used to freak them out because they couldn’t understand what was going on. I guess they couldn’t understand how somebody could have such inhuman self-control. It was extreme passive-aggressiveness on my part, you might say. It used to amuse me because I regarded it as a kind of superpower.
*However* the key to my technique was subtlety. “Yeah I agree that wind farms are the way forward” I would casually say to some rabid right-winger. ChatGPT, on the other hand, is so effusive that it appears to be taking the piss. My technique would never have worked if I’d done that.
This is so beautiful. ChatGPT could never.
Not a fan of the Delete Your Muse book
That was my favorite one. It read almost like shade.
OMG I really didn’t know. Thank you for the wake up.