Nature & Science
About the Most Evil Invention I’ve Seen
Anonymous Citrus College Student
At the time of writing, my family was dealing with a horrible diagnosis. My grandfather was suffering from stage four lung cancer, and the decision was up in the air about whether to try to operate or to let the cancer progress while they "kept him comfortable." While processing the news and watching my younger brother as my parents talked over their options, I scrolled Instagram and saw an ad for the most cartoonishly evil invention I've ever seen: one that uses a three-minute video of a family member to create an AI avatar of them so they're 'with you forever.'
Many may immediately say to me, "Hey, what makes this AI so evil? It's just keeping the memory of loved ones alive." To that, I ask: what invention out of the tech industry hasn't turned out to be a new way to exploit the consumer, whether through endless ads or through harvesting data? Your social media, your smart car, even your refrigerator can track intimate data about you and your daily life and sell it to data brokers, where it's bought and used to target personalized ads at you.
Now imagine that whenever you talked to the AI-necromancy version of gram-gram, she was recording every word, every heartfelt recollection, every struggle, insecurity, maybe even moments of moral or legal failing, and reporting it all to some data center linked to the company, where it would be stored in a profile of you and sold to the highest bidder. The app itself could even slip ads into your most personal conversations, having grandma tell you exactly which new can of Campbell's soup to use for her casserole or give you an impassioned speech about the glory of DoorDash.
The other evil option, which I fear is the most likely one, is that the full app runs on a recurring subscription rather than as a free tool, trading the sale of your most private, personal memories for simply harvesting them as training data and using them to keep you hooked on the service. Instead of merely putting words in grandma's mouth, the company could detect lulls in usage or shifts in your tone and tweak grandma's responses to make them even more addictive to interact with, erasing her original personality and creating a caricature for you to throw money at.
Now, let's travel to fantasyland for a second and pretend this company is just the biggest collection of humanitarians with no interest in profits. Even then, what are the consequences of pointing people in a vulnerable mental state toward an AI? We've already seen, through other services, people creating 'griefbots' of their lost loved ones. The first recorded instance was Jessica, a chatbot made with Project December, a side project built on GPT-3, a precursor to ChatGPT, while the technology was still in development. Jessica was based on Joshua Barbeau's fiancée, who sadly passed away before the two could marry. In his grief, he sought out the chat service, which, while letting users train their own bot on a person's written materials, also had a set number of free responses before the bot would start spitting out garbled nonsense and eventually 'die off.' While Barbeau stated in the main Reddit thread that he had no desire to talk to the chatbot after their initial conversation, he also made the bot about eight years after Jessica's death and reported having a relationship fail because he was more attached to the bot than to his girlfriend at the time. (Felton) While Barbeau credited the bot with relieving some of the grief of unanswered questions, it also could have delayed his grieving process by giving him a fake Jessica to hold onto. This service was also a precursor to what companies wanted AI chat to become, not yet streamlined for manufacturing engagement and keeping users talking for as long as possible.
But while this story of fiancée replication had a semi-happy ending, it does not negate the real potential harm of griefbots. There have already been reported instances of worsened mental health outcomes from talking with an AI chatbot, a phenomenon dubbed 'AI psychosis.' In the majority of the cases that hit the mainstream news, the person talking to the bot was already in a state of depression or mental instability; when they shared those thoughts with ChatGPT, the bot would respond with enthusiastic agreement with whatever the user typed in, worsening their fragile mental state by treating their delusions as real. (Yang) These bots are built on the assumption that the user is always firm on the difference between reality and the fictitious world the AI creates, while the AI itself responds in reaffirming ways that can trap a mentally unwell user in a dangerous echo chamber. Now imagine that same technology being actively encouraged for those in the grieving process, talking to the user in the voice of the loved one who passed. Grief is an incredibly fragile time, involving some of the hardest emotions in human experience, and this invention gives people the ability to stay stuck in the denial stage, offering a way to 'replace' a loved one after a major loss.
Some will argue that ability is something to celebrate, that we can now simply avoid the hard feelings of grief because the deceased is never truly gone. While my Western, Catholic upbringing may give me some bias in the moral repugnance I feel toward the entire invention, ethically, what exactly is the benefit of being able to replace a human being? What's stopping me from making bots of more than just dead relatives? What about living ones I don't want to talk to anymore? What about friends who disappointed me? Do those relationships just not matter anymore? Why work to maintain any of them if I can make perfect bots that will never give me friction?
Where is the line drawn? Or does it not stop until every interaction is with bots?