Thank you for the explication! This is super helpful. To be fair, there is sufficient evidence that people *are* just typing what would be a normal Google search into ChatGPT. But people also used to approach my spouse when he worked at Trader Joe's and just command "bread," no context, like he was a search engine.
We searched with keywords for 20 years. Old habits die hard.
I will also add here what I add everywhere: while we have no clue about the exact words people are typing, we can certainly make a very good guess by understanding how people have searched in the past and knowing why people buy. We are not inventing new vocabulary words or purchase criteria in most verticals. We are just learning how to advertise in a new distribution system. We're going back to the Wanamaker problem, as we always will.
Semantically, I agree with you. But probabilistically, even a small change in wording causes a MASSIVE swing in the output. Hell, even in the tests I ran for the newsletter, three different runs with the SAME prompt came up with wildly different answers. I wish people understood better that something inherently probabilistic is ALWAYS going to deliver different results. The snake oil salesmen are peddling theoretically deterministic outputs ("know how your brand ranks in ChatGPT!") from probabilistic engines.
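If you want to watch the dice roll for yourself, here's a minimal sketch, assuming the OpenAI Python SDK with an API key in your environment; the model name and the prompt are placeholders, and any LLM API will show the same behavior. Fire the identical prompt several times and count how many distinct answers come back.

```python
# Minimal sketch: send the SAME prompt repeatedly and compare the answers.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
# the model name is illustrative, not a recommendation.
from collections import Counter

from openai import OpenAI

client = OpenAI()
PROMPT = "What's the best project management tool for a 10-person agency? Name one."

answers = []
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for the demonstration
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(resp.choices[0].message.content.strip())

# A deterministic engine would leave exactly one key in this Counter.
for answer, count in Counter(answers).most_common():
    print(f"{count}x  {answer[:80]!r}")
```

Even pinning the temperature down doesn't reliably make hosted models repeat themselves byte for byte, which is exactly why a single response is a sample, not a ranking.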
Oh yeah, when you think about the variations in vocabulary, the math goes wild, and that stresses marketers out because we are all very used to our "funnel" mental and financial models. It's also frustrating because even when AI tools see our content, whether or not they cite it is a roll of the dice. So thanks for proving that these tools are vaporware; I'm definitely advising against investing in this type of measurement.
I wish "AI visibility" measured how well your website can be read by machines, rather than whether they choose to cite it.
For most brands, addressing user fundamentals directly is a good first step. We get so caught up in "everything is so new!" that we forget to address the not-so-new. Can a human figure out from the information available online what your product does and who it is for? Are those words clearly linked, in text, to your brand name? Is your brand name present in your website copy beyond the top-left logo graphic? (This one trips up wayyyy too many businesses.) Do featured testimonials name your brand specifically, or are they generic "these guys are great!"? Do videos have clear narration and transcripts?
Does this information use the concrete nouns and verbs you already have in your back pocket from customer research? Can crawlers access the content that describes what the product does and how much it costs? (And can you measure the impact of adding this information to your website and improving the user experience?)
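None of that needs a special tool to audit, either. Here's a rough sketch of the last two checks, assuming requests and beautifulsoup4 are installed; the URL, brand name, and crawler user-agent tokens are placeholders to swap for your own (and the tokens change, so check each vendor's current docs).

```python
# Rough sketch: is the brand name in the page's visible text (not just the
# logo image), and does robots.txt let common AI crawlers fetch the page?
# Assumes requests + beautifulsoup4; URL, brand, and user agents are placeholders.
from urllib.robotparser import RobotFileParser

import requests
from bs4 import BeautifulSoup

SITE = "https://example.com/"
BRAND = "Quarter Horse"  # the name customers would actually type
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]  # example tokens only

# 1. Is the brand name in the copy itself?
html = requests.get(SITE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")
for tag in soup(["script", "style"]):
    tag.decompose()  # keep only human-visible text
visible_text = soup.get_text(" ", strip=True)
print(f"Brand name in visible text: {BRAND.lower() in visible_text.lower()}")

# 2. Are AI crawlers allowed to read the page at all?
robots = RobotFileParser()
robots.set_url(SITE + "robots.txt")
robots.read()
for agent in AI_CRAWLERS:
    print(f"{agent} allowed: {robots.can_fetch(agent, SITE)}")
```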
Product fundamentals have been ignored in the age of "demand generation" and "ranking." Rather than "Are they talking about me?" we have to focus on "Does everyone understand what we are saying?"
My two cents is to get closer to your entity (defining your knowns, directly addressing pain points) rather than spiraling out until we figure out how to measure the unknowns (if we figure them out at all).
That would be a GREAT use case for a VLM. Here's my site: are you even able to read it?
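Half-joking, but that test really is only a handful of lines. A sketch, assuming the OpenAI Python SDK, a vision-capable model (the model name is illustrative), and a screenshot of your homepage already saved locally: hand over the image with no other context and ask the two questions a first-time visitor would ask.

```python
# Sketch: show a vision model a homepage screenshot, with no other context,
# and ask what the company sells and who it's for.
# Assumes the OpenAI Python SDK; model name and file path are placeholders.
import base64

from openai import OpenAI

client = OpenAI()

with open("homepage.png", "rb") as f:  # a screenshot you've saved yourself
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Based only on this screenshot: what does this company sell, "
                     "and who is it for? If you can't tell, say so."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```

If a vision model can't tell what you sell from the rendered page, a text-only crawler probably has even less to work with.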
Great article--thank you for the shoutout! Regarding the wording discussion in the comments, it's interesting that the latter version of ChatGPT corrects the prompt's misspelling/compounding of "quarterhorse" to our correct company name, Quarter Horse.
In both examples, I'm glad the AI recognizes both the compound word and the proper-noun name of the racehorse we are named for, as they are often confused and semantically interchangeable. I prompted Gemini with the same question, and it also answered not with the incorrect name from the prompt but with the proper name of the company (and the horse). It's just like how search guesses what you meant to query based on the nearest word (or the most popular similarly written search).
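That "nearest word" behavior is easy to approximate with plain string similarity. A toy sketch using Python's standard-library difflib; the candidate list and misspellings are made up, and real engines lean on far richer signals than edit distance.

```python
# Toy illustration of "nearest word" correction: resolve messy user spellings
# to a canonical brand name by string similarity. The candidates are made up.
from difflib import get_close_matches

CANONICAL = ["Quarter Horse", "Quarterback Labs", "Horsepower Media"]
lookup = {name.lower(): name for name in CANONICAL}

for typed in ["quarterhorse", "quarter hourse", "qtr horse agency"]:
    hits = get_close_matches(typed.lower(), list(lookup), n=1, cutoff=0.5)
    best = lookup[hits[0]] if hits else "no confident match"
    print(f"{typed!r} -> {best}")
```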
All this to say, I won't look the gift horse in the mouth any further, and am glad to know we're recognizable in each instance!
Apparently I, the author, am equine-illiterate.