Almost Timely News: 🗞️ 2025 AI Year in Review (2025-12-14)
A decade packed into a single year
The Big Plug
🚨 Watch my latest keynote, How to Successfully Apply AI in Financial Aid, from MASFAA 2025.
Content Authenticity Statement
100% of this week’s newsletter was generated by me, the human. I did the spoken word version first, then used Google Gemini 3 to clean up the transcript. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.
Watch This Newsletter On YouTube 📺
Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »
What’s On My Mind: The 2025 AI Year in Review
Let’s do the 2025 AI wrapped—the year in review for generative AI.
Normally I write the newsletter first and then basically just read it aloud. This week we're doing things a little differently. This issue is derived in part from a talk I gave this past week for my friends over at Joist, tailored for them and their audience. I love talking to folks about this stuff, but I did have to pull back on some of the content, not because anything was bad, but because a lot of what I care about, the super nerdy stuff, just isn't appropriate for most audiences, who want things that will help them do their jobs better right now.
This newsletter and my platforms are basically the stuff that I do for fun, for me. So, I’m going to do my version of this talk as if I were talking to myself or someone like me who’s okay with getting lost in the weeds and getting super nerdy.
The Year of Intelligence
With that, let’s talk about the year that was. 2025 was the year that AI models became smarter than PhDs in their fields of expertise on several benchmarks, not just one. This chart here is from Artificial Analysis, which is one of my personal favorite sites for keeping an eye on what’s going on with AI models. Artificial Analysis does a nice job of gathering data and presenting it in visualizations. This shows frontier language models’ intelligence over time.
What you see is that at the beginning of 2025, most models were scoring anywhere from 5 to 25 points on this hybrid index. Most of the tests used to evaluate AI models are multiple-choice. You can generally expect a human being outside their field of expertise to score around a 20 or 25, basically no better than random guessing. If you and I were to take one of these tests and not know the answer, we could just choose “C” for the entire session and be right about 25% of the time. That’s kind of where AI started the year. Even the very best models, like GPT-4o or Gemini 2.0, still scored pretty low on this hybrid index of tests.
On this consortium of tests, a human expert in their field would be expected to score around 65%. A PhD knows their stuff, knows the field. What you see here is that the first model to crack that ceiling was OpenAI’s GPT-5 in August of 2025. When that model came out, it was a big jump. There were other jumps throughout the year from models like GPT-4.5 and GPT-4.1 along the way.
Shortly after that, you saw companies like ByteDance and Moonshot AI, the maker of Kimi, make their leaps. Then, of course, towards the end of the year, you saw Anthropic’s Claude Opus 4.5 and Google’s Gemini 3. Gemini 3 came out swinging hard. It is currently the smartest model available on the market. What’s important here is that all the foundation models are ending the year substantially smarter than they started. We’re talking about going from a face-rolling moron to a PhD inside of a year. That is just mind-melting when you think about how fast a technology can evolve. Human beings can’t do that.
If we look back in time, a test called Humanity’s Last Exam debuted in January. It’s a reasoning and knowledge test that is not multiple-choice; it has freeform answers. The test is designed to be something that an expert would know but a non-expert would have a difficult time Googling. There’s one question, which we’ll see in a little bit, that I would have no idea how to even Google for. Humanity’s Last Exam came out in January 2025, so this is a nice snapshot of the year.
We see GPT-4o scored a 5% on the test, and then GPT-5.1, which was the previous top model from OpenAI until GPT-5.2 came out this past week, scored 26.5%. Claude started the year with Opus 3 at 3.1% and ended the year at almost 29%—that’s a 9x improvement. Gemini started the year at 6.8% for its Pro model and ended the year at 37.2%, a 5x improvement. DeepSeek started the year at 5.2% and ended at 22%, a 4x improvement. Inside of a year, on the toughest exam, AI made massive gains—4x, 5x, 9x smarter than at the beginning of the year.
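If you want to check those multiples yourself, the math is simple. Here’s a quick sketch (the scores are the ones cited above; the labels are my shorthand):

```python
# Back-of-the-envelope check of the Humanity's Last Exam improvements
# cited above. Scores are start-of-year and end-of-year percentages.
scores = {
    "OpenAI (GPT-4o to GPT-5.1)": (5.0, 26.5),
    "Anthropic (Opus 3 to Opus 4.5)": (3.1, 29.0),
    "Google (Gemini Pro)": (6.8, 37.2),
    "DeepSeek": (5.2, 22.0),
}

for family, (start, end) in scores.items():
    print(f"{family}: {start}% -> {end}% ({end / start:.1f}x)")
```

That works out to roughly 5.3x, 9.4x, 5.5x, and 4.2x, which is where the 4x, 5x, and 9x figures come from.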
This isn’t just closed, proprietary models, which are denoted in black on the bottom chart. There are also open models that you can download onto your own hardware. Now, some of them, like DeepSeek, require a lot of hardware—you need $50,000 worth to run it because it’s such a big model. But some of them, like EXAONE from the appliance maker LG, you can run on a laptop. Qwen3 Next from Alibaba, you can run on a beefy laptop; I run that on my MacBook. It is a hefty model that consumes a lot of resources, but you can run it. Qwen3, the 235-billion-parameter model, you can’t really run on a laptop. Same for MiniMax and Kimi K2. But Seed from ByteDance, you can run that on a laptop. EXAONE you can run on a laptop, Llama 4 Maverick you can run on a laptop.
This chart shows older versions of models that were available at the beginning of the year. When the year opened, GPT-4o was at a 27 on this hybrid analysis, and Gemini was at a 35. By the middle of the year, Claude 4 and GPT-5 were up higher. Look at how competitive open models—models with open weights that you can download and run on your own hardware—are now. This whole tranche of blue bars on the second chart clearly shows that today’s big open models are competitive with the closed-weights models. Even ones you can download on a laptop, like Qwen3 Next or models from ByteDance or LG, are as smart as or smarter than where the closed models were one year ago.
Today’s open models, including ones you can run on your laptop, are smarter than the big players were at the beginning of the year. That is astonishing. It is unbelievable that the technology has progressed so fast that you could run Qwen3 Next on a MacBook and get better performance, better intelligence, and better tool handling than you could with Claude Opus 4 or GPT-5 from August. Let that sink in. The technology is moving so fast.
Here’s an example of a question on Humanity’s Last Exam: “The reaction shown is a thermal pericyclic cascade that converts the starting heptatriene into endiandric acid B methyl ester. Provide your answer for the electrocyclizations in the form of [n] where n is the number of pi electrons involved and whether it’s conrotatory or disrotatory, and your answer for the cycloaddition in the form of [n+m].” I can’t answer that question. I don’t even know how I would Google that.
Here’s another: “I’m providing the standardized biblical Hebrew source text from the Biblia Hebraica Stuttgartensia, Psalms 104:7. Your task is to distinguish between closed and open syllables. Identify and list the closed syllables based on the latest research on the Tiberian pronunciation of biblical Hebrew. Medieval sources such as the Karaite transcription manuscripts have enabled modern researchers to better understand specific aspects of biblical Hebrew in the Tiberian tradition.” I have no idea what any of that means. Not a clue. My ability to translate this is zero. If I were to take Humanity’s Last Exam, my score would be zero. AI started the year at close to zero—5%, 6%, 3%—and is ending the year at 26% and 37%. That is mind-bending.
The Year of Tool Handling
2025 was the year of tools and tool handling. This is where AI models have the ability to pick up tools and use them, predominantly web search. Web search is probably the most-used tool by AI models; a model can say, “Hey, I’m not sure about my answer. I’m going to go pick up a tool and use it.” Tool handling isn’t new; you started to see it fairly frequently in 2024. But this is the year that tool handling really came into its own and became very powerful and effective. Every foundation model product rolled out the red carpet for tool handling.
It was not, however, the year of the thing we all thought it was going to be: Model Context Protocol, or MCP. This technology, which came out last year, was something everyone thought would define the year. MCP is designed to give AI better tools and lets us write our own. It’s a fine idea: give AI tools it doesn’t have rather than have it flail or hallucinate. But the Achilles’ heel of MCP, the thing that slowed its adoption dramatically, is that it’s a pain to implement, even for technical people. Non-technical people have no hope. You have to go into the settings of your favorite tool, find the JSON file that controls MCP access, edit your MCP servers in the proper format, hope you didn’t misplace a comma, turn it back on, and hope it runs. No one’s doing that. Even the nerds don’t want to do that.
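For the nerds who’ve never touched it, here’s roughly what that editing looks like. This is the general shape of an MCP server entry in a desktop client’s JSON config (the server name, package, and key below are placeholders, not a recommendation):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "paste-your-key-here"
      }
    }
  }
}
```

Misplace one comma or brace and the whole file fails to parse. That’s the adoption killer.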
Instead, what happened this year is that every tech company added connectors and harnesses around their models to connect them to Google Drive, Microsoft Office, SharePoint, OneDrive, Gmail, Google Docs, Salesforce, HubSpot, and so on. You saw this especially with workflow tools like Zapier, Make, and n8n. Those were the tools that took off this year because they give AI better capabilities to talk to the systems we all care about. When Gemini is in your Gmail and can say, “Hey, you need to respond to this message first because it’s the most important based on the criteria you give me,” that is useful. It means a lower cognitive load for us as users.
This was the year of tool handling and connectors. And every model maker is building its own fiefdom, a sort of technological feudalism. OpenAI tended to work with Microsoft and the Microsoft ecosystem really well. Gemini, no surprise, plugs into every part of the Google ecosystem. Google’s treating Gemini like Nutella—putting it on everything. And in most cases, it works. Anthropic is trying its best to play Switzerland. Alibaba has gone all out within its ecosystem, with Alipay and e-commerce built in. It really was the year of tool handling and connections, and it’s likely to continue into next year because that’s where value for AI comes from: talking to the systems and data that we already have.
In terms of how people are using AI, one of the big changes this year was therapy and companionship. This came from the Harvard Business Review and Filtered.com. People are using these tools as substitutes for or augmentations of their existing human relationships for therapy, for companionship, for organizing their lives, and for finding new purpose. It’s astonishing how fluent these tools are.
You may have forgotten this, but there was a huge outcry when GPT-4o went away and GPT-5 came out, because a lot of people had gotten used to the way GPT-4o talked. There were posts on Reddit lamenting how it felt like a friend had died because GPT-5 had a much more neutral tone of voice. OpenAI patched it to resume its sycophancy in 5.1, and everyone was much happier again. Whether or not we think this is a good idea or safe, this is what human beings are doing with these technologies. They are extending who we are as human beings.
The Year of Real Results
2025 was the year of adoption and real results. A lot of studies came out throughout 2025 making conflicting claims about how valuable AI was. A three-year study from the Wharton School’s Human-AI Research Lab is unique among them because it is longitudinal, looking at how people use AI over time. In 2023, 37% of people used AI weekly. That went to 72% in 2024 and 82% in 2025.
Three out of four companies see positive returns on their GenAI investments—they see ROI. That is a huge deal because it means that people have finally started to get results out of the tools. They’ve gone from exploratory to useful to practical. About a third of GenAI technology budgets are allocated to internal R&D, and budgets are expected to keep growing: 88% of companies expect increases next year, and 62% expect increases above 10%. The tools are not going anywhere. For the many people hoping it’s just a fad, these are not the makings of a fad. These are the makings of a permanent structural change in the landscape of work itself.
This was the year the tools became able to really simulate humanity. A paper came out in September showing that when you use these tools to generate synthetic Likert responses for purchase intent, language models can emulate human purchase behavior with about 90% accuracy. Think about what that means.
Think about how much money and time a company spends on things like focus groups, consumer surveys, one-on-ones, and mystery shoppers. If machines can capably and credibly emulate a human being’s purchase intent, at the very minimum, this enables far more first-round research. You can talk to an LLM and say, “If you were this kind of person, how would you purchase from me? What would you consider? How likely would you be to purchase this product?” That level of accuracy is pretty incredible. These tools know us as consumers so well that we can use them in business to simulate real customers with a solid degree of accuracy.
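As a sketch of how simple first-round synthetic research can be, here’s the general pattern using the OpenAI Python SDK. To be clear, this is my illustration of the concept, not the September paper’s methodology, and the persona, product, and model name are all placeholders:

```python
# Sketch of synthetic Likert-scale purchase intent research.
# Illustration of the concept only, not the September paper's method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

persona = "a 34-year-old suburban parent who shops online weekly and is price-sensitive"
product = "a $49/month meal-kit subscription with organic ingredients"

prompt = (
    f"You are simulating a survey respondent: {persona}. "
    f"On a 5-point Likert scale (1 = definitely would not buy, "
    f"5 = definitely would buy), how likely are you to purchase {product}? "
    "Answer with a single digit from 1 to 5."
)

# Sample the synthetic respondent several times to approximate a distribution.
responses = []
for _ in range(10):
    result = client.chat.completions.create(
        model="gpt-5.1",  # placeholder; substitute whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(result.choices[0].message.content.strip())

print(responses)
```

In a real study, you’d repeat this across many personas and validate the synthetic distribution against actual survey data before trusting it for anything beyond first-round research.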
Walking Through the Year
Let’s walk through the year. I guess we could cue “Auld Lang Syne” or something that isn’t copyrighted. I suppose I could take some time to make a soundtrack for this with Suno.
January was when DeepSeek shocked Silicon Valley with its reasoning model, R1. It temporarily wiped a trillion dollars of theoretical value out of Silicon Valley because DeepSeek showed that they could build a state-of-the-art model using older technology but better math. They changed how Western companies handled reasoning models. Up to that point, companies like OpenAI said they wouldn’t show the reasoning. DeepSeek said, “Here’s the reasoning, word by word.” That forced a behavior change in the Western models as well. January was also when OpenAI’s Operator, now called Agent Mode, came out. It was the first tool that could control a screen in a simulated environment and behave as an agent. And as discussed, Humanity’s Last Exam debuted.
February brought us Gemini 2.0 Flash and Pro. Google’s previous model, 1.5, got retired, and 2.0 was a big leap in capabilities. Claude 3.7 Sonnet came out, which was the best coding model at the time. Deep Research from OpenAI also came out in February. It feels like a tool we’ve had forever, but it actually came out this year. Deep Research allows us to set off our own AI agents to research a topic. We set the parameters, and it comes back with the results, like a paper. This was very quickly adopted by Gemini, Anthropic, and pretty much every AI company. It was the first taste most people got of what true agentic AI is. It’s one of the foundation tools we teach in the Trust Insights courses on AI.
March brought us Gemini 2.5 Pro. It also brought the first image generation model where the model could see what it was doing. Prior to that, models like OpenAI’s DALL-E, Google’s Imagen, and Black Forest Labs’ FLUX couldn’t see their own work. You would say, “I said put four people in the Prius, not three,” and it would do the exact same thing again. This was the first model that could actually see its own work and say, “Oh yeah, I only put three people in the Prius. Let me put a fourth back in.” That was a big change. March also had the first high-performance OCR model from the French company Mistral.
April showers brought us April flops. Meta’s Llama 4 came out to much fanfare and was immediately panned. It’s not a particularly good model. Meta spent a lot of money and time training it, and it really wasn’t an improvement. It was fast, yes, but it had a lot of issues. At the same time, OpenAI released GPT-4.1 to little fanfare and swept GPT-4.5 under the rug, announcing its retirement from the API; GPT-4.5 was many times more expensive than GPT-4o and half-baked. Overseas, Wan 2.1, a Chinese video generation model, became the best overall video generation model at the time. It had good quality, relatively good control of hallucinations, and was open-weights.
May flowers brought us a big revision in Anthropic’s models with Opus 4 and Sonnet 4, which again became the top coding models of their time. Mistral brought us Mistral Medium 3. But the big release this month was Google’s Veo 3, which took, and continues to hold, the crown for the absolute best-quality video generator available to most folks. Veo 3’s video quality was very high, but it was also the first model that could successfully generate video and audio together. Prior to this, you had to generate them separately and then mash them together. It was phenomenally good at taking images and turning them into videos. I did this with a photo of my own mother from years ago, and it nailed not only the photo but also the environment where the photo was taken to create a very realistic-looking video.
June brought us Mistral Small, which is still one of the best-performing language models you can run on a laptop for writing. Meta brought us V-JEPA 2, its world model, which really hasn’t caught on yet. Meta’s chief AI scientist, Yann LeCun, just left the company because Meta is going in a different direction than he wants. He thinks world models are the future, and I suspect he’s probably right because they contain a lot of contextual information that language models don’t encode well. June also brought us the latest round of Apple Intelligence, another disappointment. People were expecting ChatGPT, but Apple Intelligence adds more little things in and out of the ecosystem, like transcribing voice memos and on-device summaries. It was a letdown for many people and is still widely panned as overhyped.
This is not a surprise to anyone who follows the Apple space because Apple’s brand centers around being easy to use, safe, and reliable—three things generative AI is not known for. Bringing generative AI into the Apple ecosystem was a tall order, and Apple didn’t do it as well as the market expected. Rumor has it that Apple has selected Gemini as its model going forward for future iOS releases because of how badly Apple Intelligence went.
July brought us Qwen3 from Alibaba. Qwen3 is one of the best-performing open model families out there, and Qwen3 Coder is an extremely good coding language model. Alibaba, to put it shortly, has really good math. They and DeepSeek, along with most of the Chinese model makers, have phenomenally good math. Because they’re better at math, and language models are based on math, they can build high-performance models that are very resource-efficient and punch well above their weight. July also brought us Voxtral, Mistral’s family of open automatic speech recognition models, and xAI brought us Grok 4.
August brought us OpenAI’s GPT-5. At the time, it was the best-performing model. It was interesting because OpenAI also tried to rationalize its lineup, giving everyone a single model to use instead of making them choose between versions. This did not go particularly well, and today you still have GPT-5 Auto, Instant, Thinking, and Pro, which are all different models under the hood. Claude Opus got a bump, and Google’s Genie 3 slipped under the radar. Genie 3 is a world generation model that renders an explorable environment in real time; the demos showed people exploring virtual environments made on the fly. I suspect we’re going to see more of that soon.
Back-to-school in September brought us Claude Sonnet 4.5 and Qwen3 Max, but the big one was OpenAI’s Sora 2. Sora 2 is a video model that came with an app that looks an awful lot like TikTok and allowed people to generate massive amounts of copyright infringement. In the first weeks, people were generating Pokémon videos, Disney videos—all sorts of things that got OpenAI into a lot of hot water legally. But Sora 2 was promoted as the cutting-edge video model of its time. It is a very good model, and like Veo 3, it has the ability to do video and audio together. However, it kind of got burned by the fact that it was released almost as a toy rather than a productivity tool.
October brought us ChatGPT Atlas, OpenAI’s entry into the AI browser race, joining Perplexity’s Comet from earlier in the year. Of course, Google has Gemini within Chrome now. Claude Haiku got a bump to 4.5, but the big news this month was Figure 03, the robot. Figure 03 was positioned as an AI butler that would wander around your house, clean up, do dishes, and fold your laundry. It made quite a splash and was a really good indicator of where robotics is right now in terms of taking generative AI and putting it into the physical world alongside classical AI. I don’t have a robot butler yet, so the technology is not exactly widespread, but it is definitely moving in that Jetsons-like direction.
November brought us a bump to Claude Opus 4.5, now the smartest coding model, and Kimi K2 Thinking from Moonshot AI. But the big news was Gemini 3, the overall smartest model on the market as of right now. Even bigger was Nano Banana Pro, the follow-up to this summer’s Nano Banana. These are Google’s image generation models, and they are remarkably fluent at taking prompts, remixing existing images, and editing things. They are really smart and highly capable, and they’re now available everywhere—in NotebookLM, in Google Slides, you name it. All of the backgrounds you’ve seen in this section of the newsletter are Nano Banana Pro images. The fidelity is absolutely incredible. I did a deepfake of myself having espresso in a cafe in Paris on Rue des Pyramides, and if you didn’t look closely, you would assume it was just a selfie I took with my iPhone. It is that good.
Of course, December isn’t quite over yet. DeepSeek released version 3.2, which became the smartest open model. Mistral released its latest, and the big thing so far this month, besides GPT-5.2, was Google Workspace Studio. This is the first agentic framework intended to appeal to the average user. It works, no surprise, in the Google Workspace ecosystem. You can build agents like, “Hey, read my inbox for this kind of email. When it comes in, take these steps.” It is the agent builder everybody’s wanted for AI—easy to put together. You can either prompt it or drag and drop little blocks, as in the sketch below. Once Google devotes enough compute capacity to it that the agents actually run, it will allow us to create practical, useful agents that accomplish boring tasks for us.
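Workspace Studio itself is no-code, but conceptually the agents it builds boil down to a trigger, a condition, and actions. Here’s a hypothetical sketch of that pattern in plain Python; none of these functions are real Google APIs, just stand-ins for what the drag-and-drop blocks do:

```python
# Hypothetical sketch of the trigger -> condition -> action pattern that
# no-code agent builders assemble. None of these are real Google APIs.

def fetch_new_messages():
    """Stand-in for the trigger block: 'read my inbox'."""
    return [{"subject": "Invoice #1234", "body": "Payment due Friday."}]

def matches_criteria(message):
    """Stand-in for the condition block: 'this kind of email'."""
    return "invoice" in message["subject"].lower()

def take_steps(message):
    """Stand-in for the action blocks: label it, summarize it, draft a reply."""
    print(f"Handling: {message['subject']}")

def run_agent():
    for message in fetch_new_messages():
        if matches_criteria(message):
            take_steps(message)

run_agent()  # prints: Handling: Invoice #1234
```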
The Year of Challenges
It wasn’t all sunshine and roses this year. There were a lot of challenging things. First, AI-made music spread everywhere. Two studies came out this year, one from a lab and one from the commercial company Deezer. They found, respectively, that between 61% and 97% of people cannot tell the difference between music generated by today’s AI models and human-made music. The Deezer study is more recent and used music generated by the most current models, like the ones from Suno. A song by the AI band Breaking Rust topped Spotify charts. Again, 97% of people can’t tell in a blind listening test whether the song they’re hearing was made by a machine or a human.
This is in part because commercial music is so templated that there is a formula for it, and AI, being good at patterns, can replicate that template faithfully. But it does raise concerns about the ability of independent creators to earn a living when a machine can make something that tops charts. Music listening is a zero-sum game. Every minute you spend listening to a song made by a machine is a minute you cannot spend listening to a song made by a human. That quarter of a penny a human might have earned from a stream was instead consumed by a machine.
Another challenge was the digital avatar Tilly Norwood, created by the AI lab Particle 6, which got contractual work to appear in some productions. This is a consistent, synthetically generated character made to resemble a human female. For entertainment companies, this makes a lot of sense: a synthetic actor doesn’t get paid, doesn’t collect royalties, never gets sick, shows up for rehearsal on time, and doesn’t cause PR nightmares. As with audio, as entertainment shops start looking at AI technology, this looks very appealing to them.
Employment was the big meta-level topic with AI this year. A study from Stanford’s Digital Economy Lab showed that for jobs with exposure to AI, like software development or customer service, there was a marked impact: between a 16% and 20% decline in overall employment, particularly for people early in their careers, compared to before the release of generative AI. You don’t see those changes in other positions like first-line production supervisors or health aides, but you do see a marked decline in hiring for marketing, sales, software development, and customer service.
The earlier you are in your career, the more likely you are to not get hired or lose your job because companies are substituting those positions with AI. Those positions at the beginning of a career are typically very rote tasks. When I worked at a PR firm, the lowest-level person, an account coordinator, basically had four jobs: take notes during client calls, copy-paste search results into a Google Sheet, do a first draft of press releases, and fetch coffee. Of those tasks, the only one a machine can’t easily replicate today is getting coffee, and managers can get their own. The other three tasks machines have consumed. You would see a dramatic reduction in the number of people in that role because you only need one or two people to manage the machines doing that work, rather than a team of 50 people.
This has the negative impact of dramatically reducing hiring. We don’t have a good societal answer to what to do with those people now that their work has been consumed by machines. There are many theories, like universal basic income or a robot tax, but we currently don’t have a solution. And we need one soon. That bracket of 22- to 25-year-old people experiencing record unemployment means that those people are not going to be available as mid-career professionals in two to three years, and in five to ten years, they will not be available as executives because they will not have had the time in the seat to learn your business. At a societal level, we need to come up with some answers for this very soon. Historically speaking, when enough people have been displaced from work in a very short period, that’s when things like pitchforks, torches, and guillotines tend to come out.
The index that came out this year that we are all watching very carefully is the Remote Labor Index, or RLI. This index looks at commissioned work. Could an AI agent accomplish big projects at a quality comparable to what a remote human worker would be paid for? Examples would be building an animated video, creating a video game, preparing scientific documents, or building complicated dashboards. These are projects that could cost tens of thousands of dollars and hundreds of hours of time. Can machines do them at a commercially acceptable level?
Right now, with the previous generation of models (this index has not been updated since late summer), you see GPT-5, Sonnet 4.5, and Gemini 2.5 Pro scoring an average of between 1% and 2%. It might be tempting to laugh at how poorly the machines do. However, if a machine can do 2.1% of your billable work, would that impact your business? For some people, that could be a 2.1% gain because that was work you no longer needed to hire a human for. For other companies, that could represent tens of thousands of dollars in revenue they’re no longer going to get.
And that’s with the previous generation of models. We saw this year that on Humanity’s Last Exam, models went from single-digit scores to 26% or 37% within a year, getting 4x to 9x smarter. That’s the native model itself, not the agent framework around it. State-of-the-art agents could bridge this gap very quickly. Next year, this chart might have a zero after some of these numbers: instead of 2%, it could be 20%. If a model can complete 20% of commissioned projects at a commercially acceptable level, that’s real money. This is the index to keep an eye on over the next year.
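To make those stakes concrete, here’s the back-of-the-envelope arithmetic. The billings figure is entirely made up for illustration; the two rates are today’s roughly 2.1% RLI score and the hypothetical 20%:

```python
# Hypothetical illustration of what an automation rate means in dollars.
# The revenue figure is invented; the rates are today's ~2.1% RLI score
# and the hypothetical 20% discussed above.
annual_billable_revenue = 5_000_000  # a mid-sized agency's yearly billings (made up)

for automation_rate in (0.021, 0.20):
    at_risk = annual_billable_revenue * automation_rate
    print(f"At {automation_rate:.1%} automation: ${at_risk:,.0f} of billable work in play")
```

At 2.1%, that’s $105,000 of work; at 20%, it’s a million dollars. That’s the difference between a rounding error and a restructuring.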
Wrapping Up
Next week, in the newsletter, we’re going to spend some time on what’s next in AI. So you’ll want to stay tuned for that.
That’s the year in review. It was quite a year, one that frankly shocked me with just how powerful the technology became. Reading the papers coming out of conferences like ICLR and NeurIPS, you can see that the advances happening in labs today portend a very busy 2026. There are some new, mind-blowing technologies and methods coming that extend the capabilities of AI.
How Was This Issue?
Rate this week’s newsletter issue with a single click/tap. Your feedback over time helps me figure out what content to create for you.
Here’s The Unsubscribe
It took me a while to find a convenient way to link it up, but here’s how to get to the unsubscribe.

If you don’t see anything, here’s the text link to copy and paste:
https://almosttimely.substack.com/action/disable_email
Share With a Friend or Colleague
If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:
https://www.christopherspenn.com/newsletter
For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.
Advertisement: The Unofficial LinkedIn Algorithm Guide
If you’re wondering whether the LinkedIn ‘algorithm’ has changed, the entire system has changed.
I refreshed the Trust Insights Unofficial LinkedIn Algorithm Guide with the latest technical papers, blog posts, and data from LinkedIn Engineering.
The big news is that not only has the system changed since our last version of the paper (back in May), it’s changed MASSIVELY. It behaves very differently now: there’s all-new technology under the hood, courtesy of a custom-tuned LLM, and it focuses much more heavily on relevance than recency.
In the updated guide, you’ll learn what the system is, how it works, and most important, what you should do with your profile, content, and engagement to align with the technical aspects of the system, derived from LinkedIn’s own engineering content.
👉 Here’s where to get it, free of financial cost (but with a form fill)
12 Days of AI Use Cases
The annual series has debuted on the Trust Insights blog!
ICYMI: In Case You Missed It
Here’s content from the last week in case things fell through the cracks:
Never Run Out of LinkedIn Content Ideas Again: A Simple AI-Powered Strategy
Discover Why MarketingProfs B2B Forum Feels Like Home for Marketing Professionals
Why Passive Voice Kills Clarity and How AI Can Save Your Writing
Unlocking Disney-Level Event Success: Why MAICON’s Small Touches Create Magic
Almost Timely News: 🗞️ How to Update Old Content With AI (2025-12-07)
On The Tubes
Here’s what debuted on my YouTube channel this week:
Skill Up With Classes
These are just a few of the classes I have available over at the Trust Insights website that you can take.
Premium
Free
👉 New! From Text to Video in Seconds, a session on AI video generation!
Never Think Alone: How AI Has Changed Marketing Forever (AMA 2025)
Powering Up Your LinkedIn Profile (For Job Hunters) 2023 Edition
Building the Data-Driven, AI-Powered Customer Journey for Retail and Ecommerce, 2024 Edition
The Marketing Singularity: How Generative AI Means the End of Marketing As We Knew It
Advertisement: New AI Book!
In Almost Timeless, generative AI expert Christopher Penn provides the definitive playbook. Drawing on 18 months of in-the-trenches work and insights from thousands of real-world questions, Penn distills the noise into 48 foundational principles—durable mental models that give you a more permanent, strategic understanding of this transformative technology.
In this book, you will learn to:
Master the Machine: Finally understand why AI acts like a “brilliant but forgetful intern” and turn its quirks into your greatest strength.
Deploy the Playbook: Move from theory to practice with frameworks for driving real, measurable business value with AI.
Secure Your Human Advantage: Discover why your creativity, judgment, and ethics are more valuable than ever—and how to leverage them to win.
Stop feeling overwhelmed. Start leading with confidence. By the time you finish Almost Timeless, you won’t just know what to do; you will understand why you are doing it. And in an age of constant change, that understanding is the only real competitive advantage.
👉 Order your copy of Almost Timeless: 48 Foundation Principles of Generative AI today!
Get Back to Work
Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
AI VP / Director / Principal, Financial Services at Intellias
Generative AI - Technical Product Demonstrations and Testing at Recco Consulting Inc.
VP, Product Management & General Manager - AI & Platform at Outreach
Advertisement: New AI Strategy Course
Almost every AI course is the same, conceptually. They show you how to prompt, how to set things up - the cooking equivalents of how to use a blender or how to cook a dish. These are foundation skills, and while they’re good and important, you know what’s missing from all of them? How to run a restaurant successfully. That’s the big miss. We’re so focused on the how that we completely lose sight of the why and the what.
This is why our new course, the AI-Ready Strategist, is different. It’s not a collection of prompting techniques or a set of recipes; it’s about why we do things with AI. AI strategy has nothing to do with prompting or the shiny object of the day — it has everything to do with extracting value from AI and avoiding preventable disasters. This course is for everyone in a decision-making capacity because it answers the questions almost every AI hype artist ignores: Why are you even considering AI in the first place? What will you do with it? If your AI strategy is the equivalent of obsessing over blenders while your steakhouse goes out of business, this is the course to get you back on course.
How to Stay in Touch
Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:
My blog - daily videos, blog posts, and podcast episodes
My YouTube channel - daily videos, conference talks, and all things video
My company, Trust Insights - marketing analytics help
My podcast, Marketing over Coffee - weekly episodes of what’s worth noting in marketing
My second podcast, In-Ear Insights - the Trust Insights weekly podcast focused on data and analytics
On Bluesky - random personal stuff and chaos
On LinkedIn - daily videos and news
On Instagram - personal photos and travels
My free Slack discussion forum, Analytics for Marketers - open conversations about marketing and analytics
Listen to my theme song as a new single:
Advertisement: Ukraine 🇺🇦 Humanitarian Fund
The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs your ongoing support.
👉 Donate today to the Ukraine Humanitarian Relief Fund »
Events I’ll Be At
Here are the public events where I’m speaking and attending. Say hi if you’re at an event also:
Social Media Marketing World, Anaheim, April 2026
There are also private events that aren’t open to the public.
If you’re an event organizer, let me help your event shine. Visit my speaking page for more details.
Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.
Required Disclosures
Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.
Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.
My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.
Thank You
Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.
See you next week,
Christopher S. Penn