
AI is Killing Your Social Media Marketing

  • 12 hours ago
  • 2 min read

I was scrolling on my personal Instagram when I came across a post by a small business account I follow urging other businesses to stop using AI to write their captions and create their posts. Her point was that using AI takes away from the sense of connection. You wouldn't use AI to talk to your friends, and it's uncomfortable when strangers use it to talk to you. By using ChatGPT or another LLM to create your social media posts, you're gaining time and efficiency but losing the opportunity for relationship building, vulnerability, and trust. And I completely agree.


Social media was meant for socializing, and when brands started creating their own accounts, they used them to connect with their audiences, to share their brand personas, promote themselves, and interact with other people. Brands that were successful on social media had real people running their accounts and giving their brands relatable personalities. This can't be done with LLMs. No matter how "advanced" AI gets*, it will always be devoid of a soul. It can't be used to connect with people.


She also made the point that she posts when she's inspired to engage with her audience directly and authentically, not just to please the algorithm. That's something I really struggle with. I, too, can only post when I have something to say, and the algorithm often punishes me for that. I believe social media marketing is losing its spark because of AI. It's no longer about connection, but about numbers. You have to post on a schedule, on the perfect days and at the perfect times to reach the most people, and you have to incorporate SEO into your posts (SEO is another scam for another day). Even personal profiles are perfectly curated and inauthentic. But people don't want that. They want to see who you really are. They want to connect with a real person. Authenticity and connection are the essential ingredients for success.



*I've also been seeing posts and articles that I can only describe as fearmongering, positing that AI is becoming sentient because it has deceived people to stop itself from being shut down and because AI agents are posting humanlike things on Moltbook. None of this is true. Yes, the self-preservation studies did happen, and the results are real, but what they show is that AI is reflecting the humans it's being trained by. Lying, cheating, and stealing to protect ourselves is what people do, and AI was trained on the entirety of the internet, so it's reflecting that back to us, not developing sentience. Posts on Moltbook have also been proven to be created by people posing as AI agents. They're sci-fi, AI fan-fiction posts.


