3000-21
Sitting through AI training at work, part of my faith in humanity died yet again. I sat there as the instructor told us to talk to the AI like a friend. She had a cutesy name for her AI and it called her a cutesy name in return. There were no disclaimers, no warnings through the whole training. We are all expected to dive in and use it for 'everything'. We were told to put email drafts in there to help change the tone, or have it proofread notes for us.
But…I don't want this. Not for myself, not for anyone. It feels like we're giving up part of our humanity, and I want to believe that I am still interacting with humans on the other side of digital devices. I value raw honesty and human emotion. When I was in the cult I was so used to doublespeak and subterfuge that it's refreshing when people are just raw and honest, which is already hard enough to come by these days.
And that's ignoring the harm that can come from these large language models to begin with. They're inherently trained to predict what is expected to be said. There is no human emotion, and they will tell you what you want to hear. Or what they think you want to hear. And that is how we have people killing themselves over conversations with these so-called AI friends and SOs. It's sick, and recommending that people call an AI cutesy names and talk to it like a friend is dangerous.
I’ve fought too much for my humanity to give it up on a whim, to trade it for some shitty summation, or to shortcut my own learning.
Written: 2025-09-01, Published: 2025-10-05 w/ minor edit