Almost Timely News: 🗞️ The Biggest Thing Missing at MAICON 2025 (2025-10-19)
1500 AI Nerds in Cleveland All Missed This Thing
The Big Plug
👉 Watch my MAICON 2025 session, From Text to Video in Seconds, a session on AI video generation!
Content Authenticity Statement
100% of this week’s newsletter was generated by me, the human. You will see bountiful AI outputs in the video, especially in the analysis. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.
Watch This Newsletter On YouTube 📺
Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »
What’s On My Mind: The Biggest Thing Missing at MAICON 2025
What was missing at MAICON?
This week, I had the privilege and pleasure of speaking at the Marketing AI Conference (MAICON) in Cleveland, Ohio, alongside my CEO, business partner, and decade-long friend, Katie Robbert. On Tuesday, we led a workshop on using AI for analytics. On Wednesday, we stepped in for an impromptu session titled “From Text to Video in Seconds”.
This session was originally slated for Joshua Xu from HeyGen, but a last-minute commitment came up. When the conference organizers asked if we could fill the gap, we jumped at the chance. I looked at the session description and was fascinated - it would be a fun challenge. Content creation, video generation, and multimedia are not what Trust Insights or I are typically known for. We’re recognized for analysis, AI strategy, and a deep technical understanding of how AI systems work. I’m happy with that reputation; these are things people are willing to invest in, and the foundation of any good business is something people want to buy.
But it’s not all we do, and it’s not all we are. When this opportunity came up, it was a fantastic challenge to take the existing session description and put our own spin on it. We delivered the session, and it was so much fun. We illustrated how creativity combined with technical knowledge can maximize the outputs from generative AI systems, bringing in Katie’s expertise in filmmaking and data governance along with my knowledge of the landscape.
Of course, that same night, Google debuted a new version of their Veo video model, Veo 3.1. It’s always amusing when you declare something the “latest and greatest,” and literally four hours later, something new appears.
However, as someone who knows the AI space well, what struck me most was what was not at MAICON 2025.
The 800-Pound Panda in the Room
There was no shortage of discussion about general AI. Speakers from Meta and Google were present. Conversations buzzed about ChatGPT, Google’s Gemini, Anthropic’s Claude, and various vendors integrating with these market leaders.
But there was a glaring omission from the conversations at MAICON, at the session level and at the attendee level. In private discussions, there was a tacit acknowledgment of this omission, citing a variety of complicated geopolitical, budgetary, reputational, and trust-related reasons.
So, what was the 800-pound panda that went undiscussed in the room and in the sessions?
China.
Chinese AI was conspicuously absent from all discussions. I don’t think I heard a single Chinese model mentioned other than the ones I brought up. Yet China is one of the world leaders in generative AI. Their labs are cranking out models at an insane pace, and these models are very, very good. Some are state-of-the-art, like DeepSeek V3.2. If you look at leaderboards like LMArena, you’ll notice intriguing model names that rarely surface in Western-focused conversations: GLM from Zhipu AI, Hunyuan from Tencent, or SeeDream from ByteDance, the parent company of TikTok.
So many high-quality, high-capability models score at the top of the leaderboards, yet at what I consider one of the best AI for business conferences, they weren’t even a whisper. It was as though everyone collectively agreed to only talk about OpenAI, Anthropic, Google, and Meta. To be clear, there was no vast conspiracy or malicious intent. It was simply a glaring absence. What’s the cause?
As with anything this big, there are multiple reasons.
Five Reasons Why
There isn’t one cause. It’s an interesting stew of multiple, overlapping reasons why people overlook the fact that half of the AI models leading the charts are Chinese.
1. Data Privacy Concerns
There is a perception, right or wrong, that using Chinese AI models is inherently more dangerous than using Western ones. There is some truth to this when it comes to data privacy — if you are using Chinese AI models hosted by Chinese companies within the People’s Republic of China. When you use those services, especially free ones, you are 100% handing over your data to a foreign entity. If you are subject to data security laws, you should not use models hosted within that country.
But that’s no different than using some Western AI, like Meta’s AI or xAI’s Grok. You have no expectation of privacy there, and they are just as likely to hand over data to the US government as a company like DeepSeek is to the Chinese government. American tech leaders are just as cozy with the American government as Chinese tech leaders are with the Chinese government. One of the cardinal rules of data privacy is that if you are not paying, you are the product.
All AI companies, everywhere on the planet, will hand over data to a government when lawfully requested. It’s in everyone’s terms of service - they have to.
So how do you protect your data?
The only way to protect highly sensitive data in a commercial setting is through zero-data-retention APIs, where the provider logs nothing. If the provider has zero data retention, then when a lawful request for data comes in, the provider can say, “Well, we don’t have any. Sorry, we didn’t log it.” That’s how most really good, reputable VPN services work. They say, “We don’t log anything, so there’s nothing we can hand over to whoever comes knocking.”
The maximum level of protection, however, is running local AI models you download onto your own computer. Remember, there is no such thing as “cloud computing” - it’s literally just someone else’s computer. If the data never leaves your control, it is as safe as the rest of your machine.
You CAN use Chinese models safely if they are running on infrastructure you trust. That point, when presented to skeptics, usually provokes the next objection.
2. The Reality of Censorship
Many folks believe Chinese models are censored to give inappropriate or incorrect answers. There is censorship in Chinese models, without question. Ask DeepSeek about Tiananmen Square, and you’ll get a refusal or the official party line.
But there’s also censorship in Western models. Google’s Gemini is famous for refusing to even touch politics. Sam Altman got lambasted this past week for saying that ChatGPT would begin to allow adult users to generate spicy content. I have no personal objection to that as long as safety protections exist for users who are not legally competent adults. The reality is for Western models, there is a ton of censorship.
Chinese or not, almost every AI model has restrictions baked in, many of them for good reason. Every country has the right to regulate AI as it’s used within its borders; there’s nothing inherently wrong with that. But recognize that every country tries to censor its AI in some way. To my knowledge, every country on the planet that has AI model makers attempts to influence those model makers in some way.
I’d argue having access to models from outside your geography is a GOOD thing; the American federal government has been pressuring AI model makers to make models that are favorable to its point of view, such as ranking the current elected officials as the best ever for any given query (Brookings Institution). Having a consortium of different models from different national origins can give you a more nuanced set of answers, in the same way that reading news sources outside your country of origin can give you a more nuanced view of events, even events within your own country.
3. The Prolific Output of Models
China is absolutely prolific in churning out AI models. There are tens of thousands of high-performing models, and it can be overwhelming to choose. They sometimes have obscure names, like Alibaba’s Qwen model, which often ships under labels like Qwen3-235B-A30B. Clearly, no one in product marketing got their hands on that one. China has made it a national priority to develop as many state-of-the-art models as possible, and there are far more of them than in the US.
So one of the reasons why people might shy away from Chinese models for AI is simply because of the overwhelming volume of them.
4. Conflicting Business Models
This is the Jerry Maguire moment. Show me the money. Silicon Valley folks don’t like to talk about Chinese models because most Chinese companies give their models away for free. You can download them, and if you have the hardware, you can run them and pay no one anything except your electric company. This stands in stark contrast to the business model of Western AI companies, which is focused on getting users to pay for increasingly tiered services — from ChatGPT Plus to Enterprise, from Gemini Pro to Ultra.
Investors have poured ridiculous money into Western AI companies and need to see a return. AI has to turn big profits, and you can’t do that when your competitors are giving away the shop.
Why is China doing this? China is playing the long game. At the direction of the government, their companies give away models to prevent global lock-in to any one country’s AI. It’s in their national interest not to be dependent on American AI companies, and it’s in their national interest to have others adopting their models.
If a developing nation has a choice between paying a lot of money to OpenAI or using Alibaba’s free models with almost as good results, it’s a no-brainer. The same is true for any company that has small budgets but wants the benefits of big AI. At major infrastructure providers, Chinese models are often available for a tiny fraction of the cost of Western models.
For example, Anthropic Claude Sonnet 4.5, an excellent model, costs $3-6 per million tokens of input and $15-22 per million tokens of output. In a single coding session, you can easily rack up 100 million tokens of usage, leading to massive bills. The very comparable Zhipu GLM 4.6? $0.60 per million tokens of input and $1.90 per million tokens of output - roughly 80-91% cheaper for comparable performance, and that’s if you’re running it on someone else’s hardware. If you’re running it on your own infrastructure, it’s the price of electricity and the computer you’re running it on.
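To make that math concrete, here’s a quick back-of-envelope comparison in Python. The per-million-token prices are the figures cited above (using the low end of Claude’s range); the 90/10 split between input and output tokens is an illustrative assumption, not a measured workload.

```python
# Back-of-envelope cost comparison for a heavy coding session.

def session_cost(input_tokens, output_tokens, price_in, price_out):
    """Cost in dollars, given per-million-token prices."""
    return (input_tokens / 1e6) * price_in + (output_tokens / 1e6) * price_out

# Assume a 100M-token session: 90M input (context re-reads), 10M output.
claude = session_cost(90e6, 10e6, price_in=3.00, price_out=15.00)
glm = session_cost(90e6, 10e6, price_in=0.60, price_out=1.90)

print(f"Claude Sonnet 4.5 (low end): ${claude:,.2f}")
print(f"Zhipu GLM 4.6:               ${glm:,.2f}")
print(f"Savings: {1 - glm / claude:.0%}")
```

Even at Claude’s cheapest tier, the same hypothetical session costs hundreds of dollars versus tens of dollars on GLM 4.6.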
But that is a huge price difference. Silicon Valley cannot make money at those prices. Investors will be very unhappy with companies offering those prices to users. And so a big part of why no one talks about Chinese AI is because no one wants to admit that state-of-the-art AI is available for literally pennies on the dollar.
5. The Uncomfortable Truth: Bias
The last elephant in the room is a toxic mix of racism and xenophobia. There is an implicit, and perhaps sometimes explicit, bias at play. When you look at the leaders in AI and the surnames on top AI papers, they are often not of Western white origin. Chinese-coded last names and affiliations with Chinese universities are very common.
By sheer population dynamics alone, China would produce four to five times as many PhDs as America, even with equal investment in education — which is not the case. America is investing far less in education overall, and far less in the math and science programs that create a pipeline of the best AI experts for the years ahead. Compound that with differing national values placed on education and academic rigor, and you get wild disparities in who generates the most brainpower for building state-of-the-art AI. The reality is that China is producing some of the best AI because it is producing the best people skilled at math and science to power that AI, and has been for years.
According to the OECD’s 2022 Programme for International Student Assessment (PISA), Chinese students averaged scores of 552 in math, 543 in science, and 510 in reading. US students averaged 465, 499, and 504 respectively, placing the USA well below the OECD average across dozens of nations and not remotely competitive with China, Taiwan, or Singapore.
These basic realities feed a current of racism and xenophobia at governmental, organizational, and individual levels. From politicians describing China as a nation of peasants (which hasn’t been the case for decades now) to Western people expressing discomfort even in pronouncing East Asian names, these attitudes mentally lock people out of evaluating Chinese AI models and technology - which in turn cuts them off from even considering that these models could be great, affordable, high-performance solutions for their AI challenges.
So What?
That’s a long explanation of why we didn’t see Chinese models at MAICON. But there’s not much you can do with that information alone, save to be aware that Chinese AI models exist and are peers of their Western counterparts at often dramatically lower costs. Let’s kick it up a notch and ask Katie’s favorite question: so what? What do we do with this information?
To do this, we need to tackle a three-step process:
Know what’s available
Know your options for running it safely
Know how to use them
Step 1: What’s Available?
With tens of thousands of Chinese AI models to choose from, it’s difficult to figure out which ones we should even be using, if we want to use them at all.
The big Chinese AI labs, unsurprisingly, mirror their tech industry in the same way that the top Western labs mirror the Western tech industry. Digital giants Alibaba, Tencent, and ByteDance all have market-leading models because they have market-leading stores of user data, just as companies like Meta and Google have world-class models because of the world-class amounts of data they’ve collected from people, sometimes without their permission.
Let’s do a very quick tour of the Chinese AI models available to us.
For text generation - very much what you do with ChatGPT - you have an embarrassment of riches. If you have the hardware, or an infrastructure provider that offers it, DeepSeek’s models are competitive with ChatGPT. DeepSeek’s models are gigantic in size; you’ll almost certainly want to run them on someone else’s servers.
If you want something you can run on a beefy laptop, Alibaba’s Qwen models are among the best. Qwen3-Next-30B-A3B-Instruct (say that three times fast) is an incredibly fast, smart model for day-to-day use. Qwen3 Coder is considered one of the best open models for local coding - it’s fast and smart.
For generating images, Tencent Hunyuan Image 3.0 and ByteDance SeeDream are chart-toppers that generate realistic, great images and graphics. Hunyuan Image even exceeds Google’s Gemini image generation in leaderboard rankings. For editing images, Google’s Nano Banana tops the charts, but SeeDream is in immediate second place.
For generating video, Alibaba’s WAN 2.2 currently leads the pack among open weights models, followed closely by Hunyuan Video. There are other Chinese models, from WAN 2.5 to Kling to Hailuo, that score higher but are not available as open weights models.
Step 2: Where to Run It
This is the most challenging part, and depends on several things - budget and speed being foremost. If you care about speed, renting a Chinese model on an infrastructure provider is probably your best choice, and you can choose from infrastructure providers in your region.
This matters because infrastructure providers in your region have to comply with your region’s data privacy laws and computing laws, et cetera. I use Groq (no relation to Elon Musk’s AI) and DeepInfra frequently; Groq has some of the fastest model serving, but has limited model support. DeepInfra is slower but has more variety in terms of models.
If you care about privacy, then running models on your own machine is your best bet. This is where things get messy - what you can run depends on what you have for hardware. I covered this at length earlier this year in my primer guide for getting started with open models, but the bottom line is this:
You have to know how much VIDEO memory your computer has available
Unless you’re using an M-series Mac, in which case you have to know how much total free memory your Mac has available
You have to know what level of accuracy you care about, as models come with different accuracy levels. The more accurate the model, the more memory it takes up
You need model hosting software such as LM Studio, AnythingLLM, llama.cpp, etc. to run the models because they’re little more than really big databases
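To put rough numbers on the memory question, here’s a minimal sketch of the rule of thumb: a model’s footprint is roughly its parameter count times bytes per weight at a given accuracy (quantization) level, plus overhead for context and buffers. The 20% overhead factor here is an assumption, not a vendor specification - your actual usage will vary with context length.

```python
# Rough memory estimate for running a local model.
# Rule of thumb: memory ~= parameters x bytes-per-weight x overhead.

BYTES_PER_WEIGHT = {
    "fp16": 2.0,  # full half-precision: most accurate, most memory
    "q8":   1.0,  # 8-bit quantization: good balance
    "q4":   0.5,  # 4-bit quantization: least memory, some accuracy loss
}

def estimated_gb(params_billions, quant="q4", overhead=1.2):
    """Approximate memory footprint in gigabytes at a quantization level."""
    return params_billions * BYTES_PER_WEIGHT[quant] * overhead

# A 30B-parameter model, like the Qwen3 30B variants mentioned above:
for quant in ("fp16", "q8", "q4"):
    print(f"30B at {quant}: ~{estimated_gb(30, quant):.0f} GB")
```

This is why quantization matters: the same 30B model that won’t fit on most laptops at full precision becomes feasible at 4-bit on a machine with enough memory.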
If you care about cost, then do the math for what makes the most sense for you. Look at big infra providers and see what their token costs are for both input and output. Look at what your current hardware can support, and if it can’t support the kinds of models you want, then look at the cost to upgrade.
Step 3: Know How To Use Them
Once you’ve identified what’s available and where you want to run your models, the final step in the process is determining how you’ll use them. What tasks are you looking to perform? What are those use cases? Open weights models tend to be smaller, but they handle most of the seven major use case categories for generative AI pretty well.
And as a reminder, those seven use case categories are extraction, classification, summarization, rewriting, synthesis, question answering, and generation.
Chances are, you’re going to want to use open weights models in one of two ways:
As a direct chat agent. This is akin to how you use ChatGPT, and that means you’ll want a chat-focused interface. LM Studio and AnythingLLM are excellent choices for this.
As an engine for other tools. This is akin to using an API provider like OpenAI’s platform, Google’s Vertex APIs, or other similar systems. LM Studio, llama.cpp, and Ollama are excellent choices for this.
Based on your use cases, you’ll choose one of these two routes. If you’re using a Mac, LM Studio is probably your best choice because it natively supports the Mac’s MLX model format, a format optimized for Macs.
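If you take the engine route, LM Studio, Ollama, and llama.cpp’s server all expose an OpenAI-compatible chat endpoint, so any tool that speaks the OpenAI API can point at your local model instead. Here’s a minimal sketch using only the Python standard library; the port shown is LM Studio’s default, and the model identifier is hypothetical - match both to what your local server actually reports.

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return f"{base_url}/chat/completions", payload

url, payload = build_chat_request(
    "http://localhost:1234/v1",  # LM Studio's default local port (assumption)
    "qwen3-coder-30b-a3b",       # hypothetical local model identifier
    "Summarize this paragraph in one sentence.",
)

# Uncomment to actually call a running local server:
# req = urllib.request.Request(url, json.dumps(payload).encode(),
#                              {"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read()))
```

The practical upshot: because the request shape is identical to OpenAI’s, swapping a cloud model for a local one is usually just a base URL and model name change in whatever tool you already use.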
In the video example for this week’s newsletter, I use Qwen3-Coder-30B-A3B as the example model along with the Cline coding tool. I also show you my favorite hack - small local models aren’t the smartest, but you can plan with a big model and implement with a small model, a principle from my new book, Almost Timeless. (Principle 27: plan big, act small)
Wrapping Up
MAICON 2025 was an amazing experience, to be clear. It’s one of my favorite events to attend, and I strongly recommend it if you want to hang out with folks deeply invested in making AI work. The omission of all things China isn’t an indictment of the conference as intentionally biased or ignorant - it’s an indictment of the Western AI field overall, and of a very Western point of view held by people in America and the West.
Putting your head in the sand and pretending that the world’s largest competitor in AI doesn’t exist and its technologies don’t exist is foolish. Making claims that the technology is inferior or non-functional is intellectually dishonest at best. As I said throughout, there are valid and legitimate concerns about using any software platform that operates out of the People’s Republic of China. But the models themselves are just as state-of-the-art as their Western counterparts, and the complete AI practitioner should be familiar with all the world’s leading technologies.
How Was This Issue?
Rate this week’s newsletter issue with a single click/tap. Your feedback over time helps me figure out what content to create for you.
Here’s The Unsubscribe
It took me a while to find a convenient way to link it up, but here’s how to get to the unsubscribe.

If you don’t see anything, here’s the text link to copy and paste:
https://almosttimely.substack.com/action/disable_email
Share With a Friend or Colleague
If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:
https://www.christopherspenn.com/newsletter
For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.
Advertisement: The Unofficial LinkedIn Algorithm Guide
If you’re wondering whether the LinkedIn ‘algorithm’ has changed, the entire system has changed.
I refreshed the Trust Insights Unofficial LinkedIn Algorithm Guide with the latest technical papers, blog posts, and data from LinkedIn Engineering.
The big news is that not only has the system changed since our last version of the paper (back in May), it’s changed MASSIVELY. It behaves very differently now because there’s all-new technology under the hood, courtesy of a custom-tuned LLM, that focuses much more heavily on relevance than recency.
In the updated guide, you’ll learn what the system is, how it works, and most important, what you should do with your profile, content, and engagement to align with the technical aspects of the system, derived from LinkedIn’s own engineering content.
👉 Here’s where to get it, free of financial cost (but with a form fill)
ICYMI: In Case You Missed It
Here’s content from the last week in case things fell through the cracks:
Claude Sonnet 4.5’s Game-Changing Feature: Context Editing for Smarter AI Conversations
How to Build a Trusted Network in the Age of AI-Generated Content
How to Win in the AI Age: Mastering the Art of Asking Brilliant Questions
The Three Things That Determine Your Success: Why Work and Luck Aren’t Enough
In-Ear Insights: How to Make Conferences Worth the Investment
On The Tubes
Here’s what debuted on my YouTube channel this week:
Skill Up With Classes
These are just a few of the classes I have available over at the Trust Insights website that you can take.
Premium
Free
New! Never Think Alone: How AI Has Changed Marketing Forever (AMA 2025)
Powering Up Your LinkedIn Profile (For Job Hunters) 2023 Edition
Building the Data-Driven, AI-Powered Customer Journey for Retail and Ecommerce, 2024 Edition
The Marketing Singularity: How Generative AI Means the End of Marketing As We Knew It
Advertisement: New AI Book!
In Almost Timeless, generative AI expert Christopher Penn provides the definitive playbook. Drawing on 18 months of in-the-trenches work and insights from thousands of real-world questions, Penn distills the noise into 48 foundational principles—durable mental models that give you a more permanent, strategic understanding of this transformative technology.
In this book, you will learn to:
Master the Machine: Finally understand why AI acts like a “brilliant but forgetful intern” and turn its quirks into your greatest strength.
Deploy the Playbook: Move from theory to practice with frameworks for driving real, measurable business value with AI.
Secure Your Human Advantage: Discover why your creativity, judgment, and ethics are more valuable than ever—and how to leverage them to win.
Stop feeling overwhelmed. Start leading with confidence. By the time you finish Almost Timeless, you won’t just know what to do; you will understand why you are doing it. And in an age of constant change, that understanding is the only real competitive advantage.
👉 Order your copy of Almost Timeless: 48 Foundation Principles of Generative AI today!
Get Back to Work
Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
Director, Marketing & Corporate Communications at Spectrum Science
Product Marketing Director, Data Science/AI/ML at Domino Data Lab
Vice President Business Development And Marketing at Blue Signal Search
Advertisement: New AI Strategy Course
Almost every AI course is the same, conceptually. They show you how to prompt, how to set things up - the cooking equivalents of how to use a blender or how to cook a dish. These are foundation skills, and while they’re good and important, you know what’s missing from all of them? How to run a restaurant successfully. That’s the big miss. We’re so focused on the how that we completely lose sight of the why and the what.
This is why our new course, the AI-Ready Strategist, is different. It’s not a collection of prompting techniques or a set of recipes; it’s about why we do things with AI. AI strategy has nothing to do with prompting or the shiny object of the day — it has everything to do with extracting value from AI and avoiding preventable disasters. This course is for everyone in a decision-making capacity because it answers the questions almost every AI hype artist ignores: Why are you even considering AI in the first place? What will you do with it? If your AI strategy is the equivalent of obsessing over blenders while your steakhouse goes out of business, this is the course to get you back on course.
How to Stay in Touch
Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:
My blog - daily videos, blog posts, and podcast episodes
My YouTube channel - daily videos, conference talks, and all things video
My company, Trust Insights - marketing analytics help
My podcast, Marketing over Coffee - weekly episodes of what’s worth noting in marketing
My second podcast, In-Ear Insights - the Trust Insights weekly podcast focused on data and analytics
On Bluesky - random personal stuff and chaos
On LinkedIn - daily videos and news
On Instagram - personal photos and travels
My free Slack discussion forum, Analytics for Marketers - open conversations about marketing and analytics
Listen to my theme song as a new single:
Advertisement: Ukraine 🇺🇦 Humanitarian Fund
The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs your ongoing support.
👉 Donate today to the Ukraine Humanitarian Relief Fund »
Events I’ll Be At
Here are the public events where I’m speaking and attending. Say hi if you’re at an event also:
MarketingProfs B2B Forum, Boston, November 2025
MASFAA, Southbridge, November 2025
Social Media Marketing World, Anaheim, April 2026
There are also private events that aren’t open to the public.
If you’re an event organizer, let me help your event shine. Visit my speaking page for more details.
Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.
Required Disclosures
Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.
Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.
My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.
Thank You
Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.
See you next week,
Christopher S. Penn



