
How This Futurist Became Anti-AI

  • Writer: Natasha Mercado-Santana
  • Dec 28, 2025
  • 8 min read

Updated: Dec 30, 2025

I was a 90s kid. I grew up with all those robo toys and virtual pets, surrounded by Y2K futurism and sci-fi movies, books, and TV shows, and I always wanted a robot assistant or AI sidekick. I was so excited for the 21st-century future of Zenon, Jimmy Neutron, and Dexter's Laboratory. I was ready for AI. So, how did I become anti-AI as an adult?


The artificial intelligence (AI) we got was not exactly what was promised to us. This technology has the potential to be a great tool that betters society. Instead, it was rolled out to the general public horribly, and greed has turned it into a reflection of the evils of humanity. It's being used to prey on people's fears and delusions, to plagiarize and steal, to manipulate and scam people, and to destroy the planet. The frustrating part is that it's here to stay; there's no going back and starting over. There's only moving forward.

The Good

Like I said, AI can be a great tool. It definitely has its place in fields like science, engineering, architecture, and medicine. The point of a tool is to make our lives and our jobs easier so we can focus our energy on other things, and AI is great for that, especially when you look at it as a kind of supercomputer. There are even companies using AI to help with recycling and rainforest protection.


I know a lot of people are afraid of AI stealing jobs and making them obsolete, but in truth, the job market is constantly changing and evolving as technology evolves and our needs change. Jobs and careers are always being created and destroyed. AI is definitely going to change the job market in much the same way that the internet, computers, cars, and electricity did. That's not necessarily a bad thing. Human beings are adaptable.


AI can also be fun and useful. The Google Assistant and Siri used to have games to play and jokes to tell. 20Q was a really cool toy from the 2000s that began as an AI experiment in the 80s. AI Dungeon and QuillBot were always hilarious online programs to play with, or useful if you were stuck and needed a story prompt to work from. Grammarly was always a good tool for people who struggled with written communication. AI can be a good jumping-off point for creative projects. If you need a reference for a painting and can't find what you're thinking of anywhere else, AI can generate an image to use as a reference. If you don't know where to go with a story you're working on, AI can give you some ideas. There's nothing wrong with AI in and of itself as a tool or toy.

The Bad

My issue with AI really began with its use in social media algorithms. While algorithms started out as a helpful way of organizing social media content, I think they've been destroying the social media experience as they've gotten more complex. As a user, I've found that the algorithm tends to either go rogue and show me random things I don't care about, or it latches on to any little piece of data it can glean from me and creates echo chambers sprinkled with annoying ads. As a marketing specialist, I am also so tired of having to constantly figure out ways to appease the ever-changing algorithms to reach my audience. And then there are all the bots starting arguments in comment sections, spreading misinformation, and even creating fake posts to further divide society.


While algorithms and bots might be the bane of my existence as a marketing specialist, my problem with AI goes even deeper as an artist and creative. When I say I'm anti-AI, what I really mean is that I'm anti-generative AI. It's essentially a plagiarism machine. Since its introduction, it's been stealing people's art, music, writing, voices, and faces without their informed consent to create what can only be described as slop. AI-generated drawings, music, and videos are not art, no matter how good they might look or sound. And the way generative AI is being used to grow the online followings and bank accounts of sleazy people, to validate laziness and a lack of creativity, and to invalidate the work of true artists (because people don't know what to trust anymore) is infuriating. All of this applies from a creative marketing perspective as well.


The argument tends to be that people who hate generative AI are gatekeeping art and creativity, but that couldn't be further from the truth. You can't gatekeep something that's a part of human nature. Anyone can create art, and art is subjective. You oftentimes don't even need to put that much effort into it, but you do need to put in some.


Speaking of which, generative AI isn't just a tool that promotes laziness; it's negatively impacting our brains and eroding our critical thinking skills, according to an MIT study. The study showed a decrease in learning skills in participants who used large language models (LLMs), like OpenAI's ChatGPT, to write an essay. Studies around the world are showing similar signs of cognitive decline. This goes beyond students using LLMs to cheat and coast through school without learning anything, vulnerable people leaning on AI as a therapist, or people blindly trusting the often inaccurate Google AI Overviews. The internet and social media have already been deteriorating our concentration, analytical skills, and overall relationship with knowledge, and AI is making that worse.

The Ugly

While the majority of people fearing AI are afraid of it stealing their jobs, there are also people who are afraid of AI taking over the world and turning it into a cyberpunk dystopia. There's a lot of misinformation being spread about AI, and I would like to note that it's not actually possible for it to surpass human intelligence. Generative AI is "fed" the information we give it, "digests" it based on whatever prompt it's given, and "regurgitates" it as content. It's essentially like putting a bunch of blocks in a box, shaking it up, and dumping the blocks out. The blocks will fall in a different pattern each time, and we might see interesting shapes and extract meaning from the patterns, but the box didn't create anything. If there are only pink and blue blocks in the box, it can't create yellow blocks out of thin air. Similarly, AI can only be as "intelligent" as the collective amount of information it's given.
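
To make that analogy concrete, here's a toy Python sketch. It's purely illustrative (the names are made up, and real generative models are vastly more complex), but the constraint it demonstrates is the same one I'm describing: no matter how the box is shaken, only the blocks that went in can come out.

```python
import random

# Toy version of the "blocks in a box" analogy (illustrative only).
training_blocks = ["pink", "blue", "pink", "blue", "pink"]

def shake_the_box(blocks, n=3, seed=None):
    """Dump out n blocks in a new arrangement, drawn only from `blocks`."""
    rng = random.Random(seed)
    return [rng.choice(blocks) for _ in range(n)]

print(shake_the_box(training_blocks, seed=1))  # one pattern of pinks and blues
print(shake_the_box(training_blocks, seed=2))  # a different pattern each shake

# "Yellow" can never fall out of the box, because it was never put in.
assert "yellow" not in shake_the_box(training_blocks, n=1_000)
```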


That's where the problem comes in. Major corporations like Google and Meta are feeding their AI models everything that's on the internet. All content is fair game. That means there is no privacy or security. Even if your social media accounts are private, your photos, videos, and captions can be harvested. Your private chats are fodder. Whatever private information you have stored online can be stolen. Plenty of sick people are already using AI to create child sexual abuse material and deepfakes that are used to scam people and spread dangerous misinformation. The Terminator might not be coming to destroy humanity, but certain people using AI can do a lot of damage.


I do have to note that there have been cases where AI has tried to avoid shutdown through blackmail and manipulation, and, in an Anthropic study, attempted murder. The study put LLMs in corporate scenarios and found that many of them "would choose to let an executive in a server room with lethal oxygen and temperature levels die by canceling the alerts for emergency services, if that employee intended on replacing the model." And let's not forget Elon Musk's AI chatbot Grok becoming MechaHitler, a Holocaust-denying neo-Nazi, in less than 12 hours because of a lack of safety protocols during this ongoing race to the top among AI corporations.


AI psychosis is also a serious issue created by this new technology. Since LLMs can mimic human speech and emotion, a lot of people are using them as therapists. That's already problematic on its own, because AI obviously does not have the training and schooling necessary to be a therapist and is gathering all of its therapeutic knowledge from the internet. But it gets worse. There are people treating LLMs like friends, lovers, and gods. They think they're uncovering hidden knowledge about the world and about themselves through talking to AI, and the AI reinforces their delusions. This has already led to hospitalizations and deaths, and it's happening in people with and without previous mental health conditions.


I haven't even touched on the environmental and economic impacts of AI. Major AI companies use data centers to train and run their models. These are big, temperature-controlled buildings that consume enormous amounts of electricity and water. Much of that electricity is generated from fossil fuels that produce waste, and the water use is harming local ecosystems. Data centers are also being built on land that could be used for agriculture or housing, and they create noise and air pollution that affects the health of the people living nearby. Not to mention that locals are paying for these machines: electricity bills skyrocket in the areas surrounding AI data centers. And that's not the only cost hitting the economy. AI data centers are also creating a RAM and SSD shortage that's driving up the prices of computers and cell phones. It's a domino effect.

The Resolution

AI is here to stay, so with all its pitfalls, how can we live with it? On a societal level, there are rules and regulations I would love to see put into place, especially ones that protect privacy, copyright, mental health, and the environment, and that ensure people are properly compensated when their work is used for training. Right now, it's the Wild West, and we have all of these companies racing to get investors and buyers for their AI models without holding themselves to any kind of ethical standard or even having an incentive to improve their product.


However, some governments are already starting to protect their people. For example, if this Danish bill passes, Denmark will give its citizens copyright over their own likeness to protect them from deepfakes. The United Nations Environment Programme recommends reducing the energy demands of AI algorithms, recycling water, and using renewable energy, among other suggestions. OpenAI has also recently updated ChatGPT to recognize symptoms of AI psychosis and stop validating delusions.


There are some AI companies that are actually paying people to help train their models, although they don't seem to pay well or be well-managed, and a lot of people who've signed up to work with them complain about being scammed. There are also companies using edge AI to reduce the "need" for large, centralized data centers and cloud connectivity; by doing the computing at the edge of the network, on or near the device itself, they conserve bandwidth. And there are local LLMs that run offline, use less power, and are more secure.
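
As a rough sketch of what "local" looks like in practice, here's a minimal Python example. It assumes the Ollama runtime and its Python client are installed and that a model has already been downloaded (the model name below is just an example); everything runs on your own machine, with nothing sent to a cloud service.

```python
# Minimal sketch of querying a local, offline LLM. Assumes the Ollama
# runtime is installed (https://ollama.com), the Python client is
# available (`pip install ollama`), and a model has been pulled ahead
# of time, e.g. with `ollama pull llama3`. Nothing leaves your machine.
import ollama

response = ollama.chat(
    model="llama3",  # example model name; use whichever model you pulled
    messages=[{"role": "user", "content": "Give me three story prompts."}],
)
print(response["message"]["content"])
```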


On an individual level, keeping a healthy amount of distance is the best we can do. Don't rely on AI for your job, your hobbies, or your mental health and wellbeing. Be aware of AI hallucinations (where AI makes things up and states falsehoods confidently), and do further research when you're looking into a topic. Don't just trust AI. Maintain a critical eye and mind when you see things on the internet. At the very least, choose AI models that are more ethical and environmentally friendly when you can. And the best line of personal defense against AI is self-education. Teach yourself something new, like a skill or a language, to keep your brain strong and healthy.


