
As AI produces near-identical outputs across competing brands, teams that accept those outputs without question risk losing what makes their brand distinct.
Patricia Tanabe, Head of Marketing & Operations at Arbos Digital, argues that AI cannot account for real-world business context, like a supply disruption that explains a sudden sales dip, where the data shows the trend but not the cause.
The gap between marketers who use AI as an accelerator and those who use it as a crutch is widening, and the next generation of hires raised on instant answers may be the most exposed.
The problem with AI getting very good at marketing is that it's very good at marketing for everyone. The same tools, the same models, the same prompts, and increasingly, the same outputs served up across competing brands without any understanding of what makes one different from another. Efficiency at scale is real, but standardizing your way into irrelevance is a genuine risk that comes with it.
Patricia Tanabe has seen it up close. As Head of Marketing & Operations at Arbos Digital, she works with 7-figure retail and eCommerce brands where data is rarely the missing ingredient. In her experience, it's judgment and nuance that separate human-led work from bland AI-generated campaigns.
"AI can give you all the data and even the insights, but it doesn't know what's actually happening inside that business," she says. "The nuance, the story, the why behind the numbers still comes from the marketer." The tools can flag a sales dip. They can't tell you it was caused by a cargo delay halfway around the world. The fix, in her view, is deeper knowledge of the work itself.
Brains beat prompts: "Know the entire process so you can automate and fix it if needed. People are forgetting the process, and we're going to have a cleanup. Those who know the process won't be affected by it. They'll use the tool, not depend on it," Tanabe emphasizes. Marketers who understand platform mechanics are the ones capable of turning AI outputs into meaningful strategy.
Insight vs. output: AI delivers immediate value in execution, particularly reporting. "You save three or four hours on each report by skipping data cleanup and manual work. You're just reading and building slides. Even with live dashboards, clients still want to get a PDF or presentation. For that, AI is great," Tanabe explains. But faster reports don’t guarantee better ones, especially when it comes to interpretation. "AI knows the metrics for Google Ads or programmatic media, but reach means something different for Brand A than for Brand B. AI gives the same answer, but the marketer knows what it means in the specific brand's landscape." When everything gets reduced to the same answer, meaning flattens. You can see that same effect in writing.
Writers vs. the machine: "It's sad how people view em-dashes now. I used one in an email, and someone asked why I was using AI. I pointed out a grammar mistake and typo to prove it wasn't. It kind of broke the em-dash. Writers are trying to find ways around it because otherwise, anything they write, people assume it is AI," she notes. Tools can generate and capture content, but they don’t replace the writer. "For example: Fireflies takes notes so you can talk instead of writing. But who writes the article? Not Fireflies. You'll use that information to write it because you have that writer background." The same dynamic extends beyond writing into creative strategy.
When a KitKat shipment was stolen and the brand acknowledged it publicly, the moment took off online. As Tanabe describes it, "KFC, Domino’s, and lots of other brands" jumped in, a real-time response that came from people, not AI. It relied on timing and instinct, something AI can’t replicate. That’s what separates strong decision-making from automation, and the risk, at scale, is that not everyone develops it.
No filter thinking: Next-generation marketers, raised on mobile and instant answers, are more likely to accept AI outputs at face value. "When Google started in the late nineties, our parents told us not to trust everything online. Now we're telling younger generations the same thing. We grew up in that transition from having no internet to suddenly downloading pirated songs, and developed the critical thinking to question things. Without it, people just accept what they see," she says. That skepticism needs to be built into the workplace, not just learned through experience.
The power of why: "Just like you have cybersecurity training when you join a company, there should be training for critical thinking. People need to think on their own again. Not everything online is absolute truth, and relying on these tools can be dangerous long-term," Tanabe says. Applying that in real environments doesn't always land smoothly. "People will try to shut you down. The more you ask, the more annoying you're perceived, especially when questioning processes. You become the annoying kid at work. Navigating that is important, because it's where people will stand out, break patterns, and push change." That's what separates strong practitioners from the rest.
The marketers most at risk aren't the ones ignoring AI. They're the ones trusting it completely. When every team at every brand runs the same tools and accepts the same outputs, the work starts to look the same too. Reach means something different for Brand A than it does for Brand B. A stolen KitKat shipment is a PR crisis for most brands and a cultural moment for the ones whose people were paying attention. AI has no way of knowing the difference. Tanabe concludes, "It’s not about what it does for you, but how you work with it."