Comcast Advertising Insights Lead Says CTV Builds The Brand Memory AI Answer Engines Reward

Ad World News Desk
Published May 7, 2026

Travis Flood, Executive Director of Insights at Comcast Advertising, on how generative tools are opening CTV to smaller advertisers chasing visibility in AI answer engines.

Credit: Ad World News

Answer engines are a new way for buyers to find you, but if you put all your eggs in that basket, you're at the mercy of what it kicks back.

Travis Flood

Executive Director of Insights
Comcast Advertising

The brands that surface inside AI answer engine results are largely the ones buyers already recognize, which puts a fresh premium on top-of-funnel awareness. Connected TV is where that familiarity increasingly gets built. Generative ad tools are collapsing the cost and timeline of television-ready creative, opening CTV to smaller advertisers and feeding the brand memory LLMs draw on when they choose which names to surface.

Travis Flood, Executive Director of Insights at Comcast Advertising, has spent nearly two decades studying that kind of recall. He leads a team focused on how marketing drives business results and leans on long-term consumer behavior and hard data rather than industry buzz. Flood sees the rise of AI answer engines as fundamentally a brand-memory question, with CTV positioned as the channel best suited to plant the memories that travel through both human and machine recall.

"Answer engines are a new way for buyers to find you, but if you put all your eggs in that basket, you're at the mercy of what it kicks back," Flood says. His answer to that risk is to build awareness on the channels audiences already trust, with CTV taking on a bigger share of that work than before.

CTV becomes the new on-ramp

For growth-stage brands, the opportunity goes deeper than cheaper production. Performance social rewards a harvesting instinct, where the goal is to find an in-market buyer and close the sale. CTV asks for a different posture, where today's impression has to compound into a name someone types into a search bar weeks or months later. Flood frames that long view as the move smaller brands can finally afford.

"Before, brands were buying social, maybe they targeted someone very specific, and they cashed in. Almost like you're collecting the crops, going out there and catching the sales when they fall. But brands that really want to grow beyond that, they have to think about how they can push it from the beginning," Flood says.

That earlier push is what plants the associations a buyer eventually brings to an LLM. Living-room reach builds the kind of broad familiarity that makes a brand surface unprompted, both in human memory and in machine-generated summaries. For advertisers used to being discovered by category, the goal becomes getting typed by name.

Pulling that off at scale depends on AI-assisted creative becoming standard practice, which Flood treats as a logical evolution of production methods advertisers already use. He compares the approach to traditional stock footage, where commercials have long stitched together pre-made or synthesized elements without losing the audience's trust.

The limits of optimizing for bots

In his influential 2010 book How Brands Grow, marketing science professor Byron Sharp makes the case that advertising's job is to build mental availability, the memory structures that bring a product to mind at the moment of purchase. Major brands accumulate dozens of entry points over time, including specific products, slogans, use cases, and customer types, all linking a name back to a category in countless small ways. Flood describes those entry points as nodes in a network, and he suspects LLMs may weigh those connections in ways that loosely mirror human memory, favoring brands with denser, broader cultural footprints. It is an open question, but if the parallel holds, the same web of associations that wins in a buyer's head increasingly shapes which brands an LLM puts on the list.

The math is straightforward. Brands that make a buyer's short consideration list have real odds at the sale; brands that miss it have none. That arithmetic is what keeps brand-building dollars relevant in an era of cheap performance pixels. Companies that fail to plant memory structures early get filtered out before the answer engine ever finishes generating its summary. It also explains why CTV's expanded accessibility matters beyond a single campaign metric. The channel builds the kind of broad, repeated exposure that translates into both human recall and the cultural density LLMs appear to reward.

The temptation is to chase the algorithm directly. A budding Answer Engine Optimization (AEO) industry now offers ways to format websites and content to rank inside large language model outputs. Flood is wary of treating that as a primary strategy. Platforms have a long history of cracking down on over-engineered content tactics, and he expects the same pattern once enough sites lean on the same playbook. He also sees a more immediate downside. "You have no control for the most part, other than maybe throwing a bunch of content at it. I don't think that's the right approach," Flood adds.

The deeper tension is one that marketing teams across the industry are actively working through: "Are we marketing to the answer engine, or are we marketing to people? Should we make our website very focused for answer engines, even if it makes it less pliable for customers?" Flood asks.

There is no settled answer yet. Sites optimized for machine parsing tend to be more structured, more redundant, and less suited to a human reading flow. Flood's instinct is that the more durable strategy treats answer engines as one input rather than the primary audience. Human visitors still convert and still influence which brands appear in AI-generated lists at all, since LLMs draw on the cultural recognition that gets built off-page, in channels like CTV.

Fundamentals still hold

Marketers are operating against a backdrop of changing consumer habits, from new ways people use answer engines to shifting audience demographics between traditional TV and streaming. The through-line is that the basic psychology of how people form consideration sets stays relatively steady. Answer engines, streaming, AI-generated creative, and traditional TV all have roles to play, and letting any single channel dominate the plan simply because it is easy to measure or currently in the spotlight is a familiar mistake.

Pairing newer attribution methods with established disciplines like incrementality testing and mix modeling is one way to keep that perspective. The same logic applies to how marketers put LLMs to work. The most useful applications sit on the strategy side, where brands can upload aggregated, anonymized customer data and probe the model for blind spots, under-served segments, or competitive gaps, getting the kind of read that previously required an outside engagement.

"The same way you don't put all your eggs in one basket for answer engines, you don't put all your eggs in the basket of streaming, because half the people are still watching in a traditional environment. Follow the fundamentals and try to understand it as best you can," Flood concludes.