This is the beginning of what I expect to be an ongoing deep dive series, as I continue to learn more about the good, the bad, and the ugly of AI. You can take the girl outta tech, but you can’t take the tech knowledge outta the girl, it seems. You’ll find tons of resources I’ve linked along the way to help write this series beyond my personal expertise.

Picture this: You’re on your second cup of coffee of the day (your substitute for lunch until your back-to-back meetings are over) and now you’re in your third company-wide training this quarter about “using AI more.” Your dog is barking, your Amazon delivery person is ringing the doorbell, and you’re o v e r w h e l m e d. While you’re bringing your package inside, you hear your company’s leader say you should all be using AI to write your emails. You sigh and think, hey, maybe this can help me catch up today. You open your company’s AI tool of choice, drop in your notes from your previous meeting, and type “can you write me a follow-up email.” You spend the next 20 minutes editing down the output into something that reads like a human actually wrote it, and includes only the main takeaways from your meeting, not all 20 bullet points of notes you dropped into the tool.

You could have drafted a quick follow-up email yourself and sent it off in 10 minutes, all in less time than it took you to try to comply with the company line. 🫠

I’m anti-AI as it exists in America today (namely chatbots and intrusive summary tools) for use by the general public, but I do think there could be some good use cases for AI at the macro level. For example, I’m all for using artificial intelligence for cancer detection. As it stands right now, however, for genpop to be using a souped-up version of SmarterChild to get therapy, medical, and legal advice… it’s my professional opinion that we should all be deeply concerned.

My credentials to discuss AI: I worked at Google for 6.5 years and sold “AI for Marketing solutions” to Fortune 500 executives for two of those years. I was there as Google CEO Sundar Pichai mandated employees be beta testers for Bard AI (the earlier version of Gemini). I was there when our marketing teams suddenly pivoted from “machine learning algos” to “AI-powered algorithms,” which immediately made me skeptical about what “AI” even was in the current age, because nothing changed technically; only the marketing changed. I was re-orged into a role where I was helping the biggest companies in America “prepare for the future of AI,” when all that really meant was repackaging a bunch of slides built by ex-consultants, slapped with pretty Google logos and branding, containing three-year-old info. And most recently, in a brief stint back in corporate, I participated in a three-day sales kickoff (read: internal conference) where we tested and built “AI Agents” and literally watched them all fail to be useful while our executives doubled down: “well, they will be if we keep using them!”

Today I want to focus on the basics: what is AI and, at a high level, why do we keep hearing it is “bad” but keep having it shoved in our faces everywhere on the internet?

How today’s AI tools work

When we’re talking about “AI” today in the general-public sense, we’re talking about LLMs, or Large Language Models. LLMs are essentially giant statistical prediction machines, designed to give their best guess at which words are most likely to come after what you asked. A quick Google search turns up claims that the more advanced models (ChatGPT-4, Gemini 3, Claude 4.6) are currently about 85-90% accurate, but a 2025 study by the European Broadcasting Union showed that 45% of all AI answers had at least one significant issue and 20% contained major accuracy issues, including hallucinated details and outdated information. So at the moment, I think it’s fair to conclude we don’t know the full truth about LLM accuracy.

LLMs are trained (read: constantly learning) on humongous amounts of data, generally coming from scraping the internet. It’s been shown in the past that Google’s Gemini references Reddit most frequently and ChatGPT references Google search most frequently, but I believe this info is out of date in 2026. Let’s ask ourselves for a moment: when was the last time you were able to trust 100% of everything you read on the internet?? Additionally, LLMs are trained on licensed data sources (like digital libraries), code repositories, and the feedback they’re getting from human interaction (literally the questions people are putting into the chat). Which leads me to this concept of “data in is data out.”
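To make “statistical prediction machine” a little more concrete, here’s a deliberately tiny sketch in Python. This is my own toy illustration, not how any real model is built: actual LLMs use neural networks with billions of parameters, not word counts. But the core idea is the same, learn patterns from training text, then guess the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# The model can only echo patterns present in its training data.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Feed this toy garbage text and its predictions are garbage; ask it about a word it has never seen and it has nothing to say. That, at cartoon scale, is “data in is data out.”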

Back to my credentials really quickly, I spent the majority of my tech career trying to get marketers and paid media folks to understand that if they fed bad data into our SaaS platforms, they would get bad results. My peers and I would bang the drum on the mantra, “data in is data out,” year after year, hoping that we could influence giant corporations to better organize their data into usable, readable pieces of information to feed into our machine learning algorithms (or what has now become “AI”).

Let’s break this down for a second. If you are baking a cake, you require flour, sugar, milk, vanilla, etc. If you use rotten milk while you’re baking, the end result is a rotten, inedible cake. If you use all regular ingredients, you will likely end up with an average cake. If you use high quality organic, locally-grown ingredients, you may end up with the best cake you’ve ever eaten. (This analogy brought to you by someone who’s not a baker, me).

My soap box moment: if the people who are using ChatGPT and LLMs the most are the dumbest among us, asking them stupid questions or providing them with 8th-grade reading level context, the LLMs are going to continue to get dumber and dumber, because the inputs are not high quality. For an LLM to get better and smarter over time, it needs the best and smartest data (read: questions, context, inputs) to get there! You can’t convince me that people using ChatGPT as their therapists, boyfriends, and friends is going to lead to the transformation of the human race!!

Data in is data out.

Now that we’re all on the same contextual page, let’s get into why we’re here today: “Why AI is Bad.”

Pros and Cons of AI: Spoiler, there are more cons, imo

  • Pro: Increased operational efficiency, or in regular people speak: make things, like synthesizing information, faster

  • Con: Generative AI and LLMs use up the world’s water supply (reportedly more water than the total water bottle industry in 2025) and electrical resources (AI will require almost twice the power needed by The Netherlands by the end of 2025, per a PhD-led study) at an alarming rate. You can find way more daunting facts here.

  • Pro: “Enhanced decision making”

  • Con: The results are at most 90% accurate with the best-trained models. Would your boss be comfortable with a 10-15% margin of error in your latest quarterly report? I think not. Don’t worry though, we created a cute little word for the errors, “hallucinations,” to reinforce the falsehood that this tool can think on its own.

  • Pro: Possible increases in equity (for example, making things more accessible with auto-generated captions)

  • Con: Biased, incomplete, or downright incorrect data is feeding said decision making. It has been shown repeatedly for years in research studies that AI is biased to favor white men. (Think about the folks who are building AI…) We’ll revisit this in a later part of this series.

  • Pro: Possibly identify cancer early or other medical advancements

  • Con: Unbridled access to a self-affirming chatbot has led to “AI psychosis” and death. I can’t even begin to elaborate rn.

  • Pro: “Boosted economic growth”

  • Con: The US economy is currently being propped up by an AI bubble, which basically means the same 5 companies are all shuffling billions of dollars around to one another to prop up the stock market and give them more time to generate revenue from their billions in AI investments (spoiler: they’re not generating revenue).

  • Pro: “Improved worker productivity”

  • Con: We’re training AI agents to eventually take over our jobs, and companies are preemptively laying off thousands of workers in order to counteract the crazy-huge investments they’re making in AI (which, again, is not generating the returns on investment they thought it would). As I’m writing this on March 11th, Atlassian is laying off 10% of its workforce to “push into AI and enterprise sales.”

  • Con: Kids in school (and people in general) aren’t using their brains to critically think anymore. A recent study about AI in schools has shown that the cons outweigh the pros significantly: children are stunted cognitively, emotionally and socially, but maybe sometimes might be able to learn to read and write better.

  • Con: The computational power needed to support the tech world’s lofty goals for AI requires massive numbers of data centers to be built - and they’re building them in the poorest, blackest and brownest neighborhoods in our country, leading to dirty air, increased electrical bills, and health issues for their neighbors.

  • Con: Its prediction features can be used for autonomous drone strikes against people the AI thinks might be terrorists… Currently Anthropic is trying to prevent the US government from doing this, but like… we’ll see what happens. I can expand later if you’re interested 🫠

  • Con: Companies are hiring less, and laying off more. They are leaning away from hiring entry level employees due to AI automation. What’s going to happen when the current entry level people graduate to mid level and there’s no one to replace them? What about the critical foundational skills people gain by doing entry level work - are the corporate oligarchs thinking about the inevitable decline in quality of work because people don’t have foundational skills anymore?

Sigh. I could go on, and I will in future parts of this series. If you have any topics about AI in particular you’d like to see in a future essay, please let me know! I’d also love to hear from you if you’re an LLM/generative AI lover - maybe you can sway my opinion a little further toward center. For now, I’m going to go with my gut and sharpened pattern recognition skills to stay in the “this ain’t it” camp when it comes to AI as it exists for genpop in 2026.

If you enjoyed this essay, please subscribe so you’ll be first to know about future updates. prettybusy is a platform for multi-passionate millennials who enjoy nuance and cultural commentary. I’ll send out a cultural digest, Culture over Coffee, on the weekends, and intersperse essays about topics spanning news, entertainment, race, tech, wellness, and whatever else affects us millennial women. I hope you join me on this journey!
