Almost Timely News: Cognitive Offloading and AI (2025-09-28)
How AI changes our brains for good and ill
The Big Plug
There are only 2 seats left for my full-day workshop in London, England, on 31 October!
Register for our new online course, the AI-Ready Strategist, now available!
Content Authenticity Statement
100% of this week's newsletter was generated by me, the human. You will see bountiful AI outputs in the video. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.
Watch This Newsletter On YouTube
Click here for the video version of this newsletter on YouTube »
Click here for an MP3 audio-only version »
What's On My Mind: Cognitive Offloading and AI
In the last few months, I've had a lot of conversations with educators and education-adjacent professionals about the impact of AI on education, specifically focused on how we learn. Their concerns boil down to a single phrase: cognitive deskilling, or cognitive offloading.
This is essentially outsourcing parts of our cognitive capabilities to external entities. Cognitive offloading is nothing new. We've been doing it for millennia - the very act of writing offloads things from our memory so we don't have to remember everything. Those first clay tablets and papyrus scrolls, the Code of Hammurabi - all of that was cognitive offloading of data that was important but wasn't necessarily stuff we needed to keep in mind all the time.
As humans, we've cognitively offloaded a great many tasks. We hire people when we have too much work to do ourselves. We bring in consultants; we consult doctors and dentists who are more skilled at restoring our health than we are. It's not only nothing new, it's generally a good idea.
The challenge everyone's struggling with today around AI is twofold: first, explaining their concerns in concrete, specific ways ("it's making us dumber" is neither concrete nor specific), and second, more importantly, coming up with solutions that acknowledge AI's very large role in society and business, where skilled workers will be expected to have AI skills.
I'll say up front that outright banning AI is probably not the best choice, in the same way that outright banning anything tends not to be effective except in draconian, authoritarian societies where authorities monitor your every word and action. In those places, you can certainly control the use of AI.
Part 0: My Biases And Disclosures
Before we dig in, I'll be clear about my biases, which you may not know, especially if you're a new friend. I am strongly pro-AI as a technology, though I generally hold deep reservations about the ethics of the various companies that make it. I believe that, properly used, it dramatically expands our capabilities as humans, from reducing manual drudgery (like filling out expense reports) to giving our creative thoughts outlets we might not have the skills to access (like songwriting and painting for people who have neither skill).
Many of the concerns about issues like sustainability are well on their way to being resolved - not because technology companies especially care about the environment, but because every watt of power and drop of water they consume, they have to pay for. Simple economics aligns the incentive to reduce costs with the incentive to reduce environmental impact.
Finally, I believe that anyone barring people (but especially students) from learning how to use AI well is actively harming their charges' employment prospects. AI is a necessary skill in knowledge work today, right up there with the ability to type and use a search engine. As with the Internet, social media, mobile phones, and so on, it's not a fad that will vanish.
I share these biases up front so you know what lens I view AI through.
In terms of disclosures, I am an AI expert. I am not a neuroscientist, nor am I a certified educator in the same sense as a K-12 teacher. While I am a parent (and thus a consumer of the education system twice over, for myself and my kids), I have no formal education training that would qualify me as a full-time professional teacher. I say this so that when I get concepts wrong in either domain - neuroscience or education - you know why.
Part 1: Human Cognition
Before we can talk about AI's impact on cognition, we have to understand the basics of cognition itself. Broadly speaking, there are 8 major domains of human cognition - and this is one of those areas where if you ask 100 neuroscientists for a definition of cognition, you'll get 250 answers.
The domains are:
Perception: how we get information into our heads. Typically associated with the five senses; in the digital age this also includes mental perception - how we perceive information.
Memory: how we get information back out of our heads. What and how do we recall information for later use?
Attention: how we focus on the information we're putting into our heads. What do we pay attention to?
Language: how we format information both for storage and communication. Language is how we create structure around information.
Learning: how we get information into our heads in structured, progressively more complex ways. When we learn, what and how do we learn?
Social cognition: how we interface with external entities, especially other people. Social cognition is how we navigate our role in human societies.
Executive function: how we operate and show up in the world, our ability to plan, organize, decide, and solve problems (PODS).
Creativity: how we create new information out of old information, sometimes in novel ways.
What's important to remember is that human cognitive processes are highly interdependent. Rarely do we ever use one of these domains in isolation.
Consider a simple example. You get together with a friend for coffee. In that simple exchange, you have to perceive your friend's state, remember their previous state, pay attention to them, speak to them, correctly interpret non-verbal communication, choose what kind of coffee you want, think about whether or not you'll pay for your friend's coffee, and find ways to communicate that navigate through whatever troubles your friend might be having.
Now that we've VERY briefly reviewed the basics of human cognition, let's review the state of AI, broadly and quickly.
Part 2: The General State of AI
When we say AI, to be clear, we're talking specifically about generative AI: AI models that make stuff - words, images, music, video, code, and so on. We're not talking about classical AI (things like decision support systems, attribution analysis, etc.). Generative AI is a relatively new phenomenon, made popular by tools like ChatGPT.
The reason people have concerns about generative AI, particularly as it relates to human cognition, is how amazingly capable it is and how quickly it has evolved. When ChatGPT first hit the scene in November of 2022, it was competent at putting words together, but it had almost no grasp of factual information, and it could only produce text.
Over the years, AI companies have added more and more capabilities to their AI models and tools.
The tools and models are so powerful today that we're talking about cognitive offloading and deskilling. If they were still incompetent, no one would use them for those purposes.
Today, tools like ChatGPT, Google Gemini, and others can generate video, audio, text, images, and code, and can reason through complex problems, often arriving at conclusions faster - and better - than their human counterparts. For example, one of the evaluation benchmarks for AI is GPQA Diamond, a difficult test that requires extensive domain knowledge and reasoning. Human PhDs score approximately 80 out of 100 within their domain of expertise, and typically around 25 outside their domain - no better than random chance on a four-option multiple-choice test.
Most foundation AI models, such as GPT-5, Claude Opus 4.1, and Gemini 2.5, now routinely score above 80 on GPQA Diamond, indicating that at least for the domains of knowledge in that test (mostly math and science), they are capable of operating above PhD levels of competence.
That means cognitive offloading - handing over tasks and work to AI - is a perfectly rational thing to do. In the abstract, why would you not delegate a task to someone who's better at that task than you are? In the business world, we frequently say that a good manager hires someone as good as they are at a task, and a better manager hires someone better than they are.
AI is that better hire in many cases.
Part 3: AI Impact on Cognition
Now we can get to the heart of the issue. How does cognitive offloading show up, in terms of AI? Let's go back to the 8 domains.
Perception
Perception is how we get information into our heads. In the context of generative AI, AI can have a HUGE impact on perception - not necessarily on our base senses themselves, but on the information created for them.
In a word? Deepfakes.
Generative AI tools are so capable, so fluent at generating images, music, sound, and words that they can substantially change our perception of the world.
And while there exist standards like C2PA and SynthID that can identify and tag synthetic information, the reality is that once information hits our senses - even when we KNOW it's fake - it still gets encoded and stored in our brains. That's how our perception works; once we perceive something and it enters the neural network between our ears, it's very difficult to dislodge. This is doubly true once you start talking about emotion.
Emotion generates neurochemical responses, like adrenaline, that encode memories faster and stronger. One of the reasons "all-nighter" study sessions persist is not because lengthy study does a good job of filling our short-term memory, but because the stress and strain can increase adrenaline, which in turn aids memory formation (though that effect is often counteracted by fatigue).
What that means is that a fake image you have an emotional response to is more likely to get encoded in your memory than a real image you don't have an emotional response to, making the impact of things like deepfakes far greater.
Memory
Cognitive scientist Daniel Willingham had a great quote back in 2007: "memory is the residue of thought." In context, he was pointing out that education methods like rote memorization did a poor job of helping students retain information, because students didn't have to experience the material, reason through it, or think.
Memory has three major components: declarative memory (remembering facts and data), prospective memory (remembering tasks and events), and working memory (short-term storage and manipulation).
Generative AI extends a trend that's been underway for half a century: we offload a tremendous amount of memory to machines. For example, most people don't remember more than a few phone numbers today, whereas people in my generation (Gen X) had to remember and recall lots of them. I can still remember the phone number at my childhood house because I had to recall and use it so many times.
Search engines extended this offloading over the last 30 years. We no longer had to remember discrete pieces of information - we just had to remember to Google for it.
Generative AI extends this even further. Today, you don't even have to think about querying a search engine. You just ask AI, and modern AI tools (powered by web search) can often fish up exactly what you're looking for. We offload both memory and the retrieval process to AI.
Attention
What we pay attention to, as with perception, can be guided to a degree by AI, especially in conversational chats. When AI algorithms filter data to present you with only what you want, you develop a distorted view of reality - a filter bubble. Everything is framed so that things you agree with are good and things you disagree with are bad.
Why does this happen? Because it's what we want. In every interaction with AI, we signal both implicitly and explicitly what we want more of, and what we want less of. And the noisier and more overwhelming reality is, the more likely we are to build these walls around our attention, preserving our scarcest resource for the things that matter to us most.
As AI agents - autonomous code that can go out and find just what we want - become more and more popular, we offload more of the attention process to AI.
Language
Even language and our ability to use it are changing in an AI-forward world. Recently, an analysis of speeches given by Members of Parliament (MPs) in the UK revealed something fascinating: since 2023, UK political speeches have incorporated more and more "Americanisms."
Why? In a word, ChatGPT. MPs drafting their remarks with ChatGPT introduce language that's influenced heavily by OpenAI's models, which are trained predominantly on US English. The more they offload tasks like speechwriting, the more their language shifts toward what AI models use, which tends to be US English in nature.
We see the same thing in coding: the most popular programming languages become even more popular because AI is most fluent in them. A tool like Claude or Gemini or ChatGPT will be far better at Python than Julia or Scala, simply because there's so much more training data for it to learn from.
Learning
Learning is the process of acquiring, storing, manipulating, and recalling knowledge for use (ASMR). In each of these four areas, generative AI can have substantial impacts.
At a fundamental level, AI impacts learning by allowing us to offload many learning tasks and skills, or even to bypass the skills entirely. For example, I don't have a musical bone in my body. I can't hold a tune, I can't play any instruments, sheet music may as well be Babylonian script, and I lack even the vocabulary to describe music beyond the most primitive basics.
Yet if I take a tool like Suno or Riffusion, along with an idea, I can have AI produce songs. They're unlikely to ever be Grammy winners, but they'll be substantially better than what I could produce.
In this case, I'm not cognitively offloading the learning process - I'm negating it entirely, because I no longer have to learn in order to create a song about the thing I love. I no longer need to learn musical skills at all to create better-than-average music.
Social Cognition
Social cognition is one area where AI's impact is profound and potentially damaging. The single largest use case for generative AI, particularly language models, inside and outside of work as of 2025, is therapy and companionship.
People use ChatGPT as a therapist.
While there have been a handful of very high-profile cases where this has gone disastrously wrong, the use is so widespread that what's remarkable is that there haven't been more disastrous outcomes. ChatGPT alone has somewhere around 700 million weekly users; put in context, if ChatGPT carried the same level of risk as automobiles in the USA, there would be roughly 2,100 weekly fatalities.
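That comparison is a back-of-the-envelope calculation, and it's easy to check. The sketch below uses my own assumed inputs (roughly 40,000 US traffic deaths per year across roughly 240 million licensed drivers), not figures from the original; the point is the order of magnitude, not precision.

```python
# Back-of-the-envelope: if ChatGPT use carried the same weekly fatality
# rate per user as driving does in the USA, how many deaths per week
# would we expect? Input figures are rough assumptions for illustration.

US_TRAFFIC_DEATHS_PER_YEAR = 40_000   # assumed, approximate
US_LICENSED_DRIVERS = 240_000_000     # assumed, approximate
CHATGPT_WEEKLY_USERS = 700_000_000    # figure cited in this newsletter

deaths_per_week = US_TRAFFIC_DEATHS_PER_YEAR / 52
risk_per_driver_per_week = deaths_per_week / US_LICENSED_DRIVERS
expected_weekly_fatalities = risk_per_driver_per_week * CHATGPT_WEEKLY_USERS

print(round(expected_weekly_fatalities))  # prints 2244 with these assumed inputs
```

Slightly different assumptions about drivers and deaths move the answer around, but it stays on the order of a couple thousand per week - which is the point of the comparison.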
The greater danger when it comes to cognitive offloading and deskilling is how interacting with machines changes our interactions with humans. All large language models are trained on roughly the same three core principles: harmless, helpful, truthful. Harmless means not responding to obviously dangerous requests, like how to build or do a Very Bad Thing™. Helpful means fulfilling the user's requests to the best of the model's ability, and truthful means returning factually correct information.
When you have a conversation with a tool like ChatGPT, it will try its best to be helpful, polite, civil, even friendly-sounding. Some people have called it sycophantic, which isn't entirely inaccurate. As a result, we find our interactions with it inoffensive, even pleasant.
And that makes interacting with real people harder, because real people aren't reliably helpful, polite, civil, and friendly 100% of the time.
It's basic human nature to seek pleasure and avoid pain - and that means, given the choice between real, unreliable, unpredictable humans and predictable machines, a fair number of people are already choosing machines.
Executive Function
This is where we see some of the greatest impacts of generative AI: on executive function. Remember that executive function comprises four major areas that I call PODS: plan, organize, decide, and solve.
When we plan, we have to think ahead to the future - to figure out what a plan could look like and what outcomes we are after. When we organize, we have to take data that might be disparate, and information that might be conflicting, and bring it into some kind of order. When we make decisions, we have to weigh the consequences and the different outcomes we might want. And when we solve problems, we sometimes have to think very counterintuitively or creatively to arrive at a solution.
When we hand these tasks off to AI and cognitively offload them completely, we diminish our own skills.
Think about the number of times you have seen someone share a prompt on LinkedIn or similar social networks along the lines of "Build me a quarter-by-quarter marketing plan for influencer marketing for our company."
Let's put aside whether that's a good prompt for the moment (it's not) and discuss the intent here. The intent is to have generative AI plan, organize, make decisions, and solve problems on our behalf. Today's tools will do that exceptionally capably. As we discussed earlier, they perform at or above PhD level in many domains.
What happens to us when we delegate a task entirely? You know the answer. If you've been in the workforce for any amount of time and moved beyond a junior position, you know that the individual skills and tasks you used to do as a junior member of the team atrophy as you delegate them to your juniors. When you become the manager of a PR team, you can let yourself get rusty at writing a press release, for example.
In turn, that means everyone who completely delegates those tasks risks the same level of atrophy. In the context of education, students who immediately default to tools like ChatGPT for these kinds of tasks risk never learning these skills at all.
Creativity
The final area of human cognition we need to discuss when it comes to generative AI is creativity. There are two basic forms of creative thinking: divergent thinking, where we explore idea spaces, and convergent thinking, where ideas come together into a final form.
Divergent thinking is what we know as brainstorming: trying to come up with as many ideas as possible to solve a particular problem or to create something.
Convergent thinking is what we know as creative decision-making: looking at an array of ideas and selecting the best ones that fit the project or brief we're working on.
As with executive function, if we completely outsource divergent and convergent thinking to generative AI, we risk losing our edge with those skills. For divergent thinking, we potentially reduce our ability to brainstorm effectively when we ask machines for lots of ideas instead of coming up with our own. For convergent thinking, it's the same as with executive function: if we hand off decision-making to machines, we reduce our ability to make those same decisions and to think carefully through their ramifications and implications.
Going through everything we've talked about so far, this can seem pretty disheartening. If we just stopped here, you might very well want to ban AI in your household, your school, or your workplace. It seems like incredibly problematic technology that enables laziness and reduces our capacity as humans.
But that's not what's happening here. One of the things that I lecture about most in generative AI is that it is an amplifier. To paraphrase the original Captain America movie: it takes the good and makes it better; it takes the bad and makes it worse.
So let's apply that thinking to this particular problem space.
Part 4: Restoring Humanity with AI
To have a meaningful discussion about whether or not AI is appropriate for any given task, person, or situation, we have to discuss what is and is not worth offloading.
In the first years of the generative AI revolution, Katie Robbert and I came up with the TRIPS framework for deciding which tasks to offload to generative AI. TRIPS stands for time, repetitiveness, importance, pain, and sufficient data. The best tasks to outsource to AI are those that are incredibly time-intensive, incredibly repetitive, not particularly important, very painful, and for which you have sufficient examples that AI knows how to perform them.
Through this lens it should be abundantly clear which tasks we should and shouldn't outsource to AI. Tasks that are extremely important, that require a lot of validation, or that are extremely high-risk - like finance, law, and health - should remain at least partially in the human domain, always with a human final review.
Tasks that cause us no pain and that we take great joy in should absolutely remain in the human domain - tasks like painting or cooking or writing. If you enjoy these activities, there's no reason to turn them over to machines.
Tasks that are painful and repetitive and time-consuming, like expense reports, should absolutely be the domain of AI.
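To make the TRIPS evaluation concrete, here is a minimal sketch of it as a scorer. The 1-5 rating scale, the inverted importance term, and the threshold of 18 are my own illustrative assumptions, not part of the framework itself - TRIPS is a judgment tool, not a formula.

```python
# Illustrative sketch of the TRIPS framework (time, repetitiveness,
# importance, pain, sufficient data) as a simple scorer.
# Scale, weighting, and threshold are assumptions for illustration.

def trips_score(time, repetitiveness, importance, pain, data):
    """Each input is a 1-5 rating. Importance is inverted:
    less important tasks are better candidates for offloading."""
    for v in (time, repetitiveness, importance, pain, data):
        if not 1 <= v <= 5:
            raise ValueError("ratings must be 1-5")
    return time + repetitiveness + (6 - importance) + pain + data

def should_offload(task_ratings, threshold=18):
    """Return True if the task looks like a good candidate for AI offloading."""
    return trips_score(**task_ratings) >= threshold

# Expense reports: time-consuming, repetitive, unimportant, painful, well-documented.
expense_reports = dict(time=5, repetitiveness=5, importance=1, pain=5, data=5)
# Contract review: important and high-risk, so it stays human-led.
contract_review = dict(time=4, repetitiveness=2, importance=5, pain=3, data=3)

print(should_offload(expense_reports))  # True
print(should_offload(contract_review))  # False
```

The point of the sketch is the shape of the decision: four criteria push a task toward AI, and importance pulls it back toward humans.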
Now here's where we get to very uncomfortable questions, particularly in the education space. How many of the tasks we assign to students fit the TRIPS framework - time-intensive, highly repetitive, not particularly important, extremely painful for the student, and with countless examples of how they've already been done?
For example, the five-paragraph essay, a staple of English language education in the USA, is an absurdly over-templated and over-engineered tool for teaching students how to write well.
If you're unfamiliar, the five-paragraph essay is an introduction, three body paragraphs, and a conclusion. Students are typically forbidden to use contractions or informal language, or to write from anything other than a neutral third-person point of view.
It's been drilled into students to the point where they believe it is the only way to write - not just for high school, but also in higher education and in their careers. The five-paragraph essay is a good starting template for people just learning how to write, but rigid adherence to it ultimately stifles creativity and produces mediocre writers.
Is it any wonder that students resort to ChatGPT or similar generative AI tools, which are perfectly engineered to solve this particular problem - a rigid template that is time-intensive, highly repetitive, not particularly important, painful, and has thousands of examples to draw from? In the business world, a student would be rewarded for handing off a task like that to generative AI to become more productive.
So what's the solution here, the path forward? In the biggest possible picture, the rule of thumb I go by is this: if you are asking AI to do the thing, you're offloading, and potentially diminishing your skills. If AI is asking you to do something, you are enhancing your abilities and expanding your mind. That rule of thumb should help you understand when you are putting your skills at risk versus when you are enhancing them by having AI challenge you.
The three fundamental skills people need to work with AI successfully are what I call the 3 Cs: creative, critical, and contextual thinking.
Creative thinking is exactly what it sounds like: the ability to think creatively, to ask creative questions of AI, and to have AI ask creative questions of you.
Critical thinking is the ability to challenge AI and to ask AI to challenge us. In an era when we can create just about anything that resembles reality, critical thinking is more important than ever.
Contextual thinking is the ability to know what data is needed to solve a problem, whether by AI or not, where that data lives, and how it plays a role in solving the problem.
Let's step back through the eight domains of cognition and see how the 3 Cs show up.
In perception, critical thinking is one of the most important skills: looking at the data we are ingesting and actively challenging it. Is a photo real? Is a video real? Are there tells that could indicate that part or all of it has been generated by AI? If something is generated by AI, what was the creator's intent - to entertain, to deceive, to persuade?
When it comes to memory, contextual thinking is vitally important. Memory is the residue of thought, so if we're not thinking, we're not remembering. How do we do a task? Can we do it well enough that it becomes almost like muscle memory? Do we know where the information lives to perform the task successfully? In the context of education, are we teaching in an experiential way that allows people to do the thinking that builds memories - not by memorizing things or filling out templates, but by having real-world, real-life experiences?
For attention, critical thinking is vitally important. How often are our own views being challenged? How is our attention being directed or diverted by the sources of information we pay attention to? What is the intent of the people providing us information, either directly or through generative AI? How do generative AI tools direct our attention and suggest what to pay attention to or ignore?
One of the most useful skills in generative AI is the ability to ask the tool for things it knows about that we don't - things we haven't paid attention to. As an example, in coding, there may be entire utilities or libraries that exist to solve the precise problem you're trying to solve, but you may not know they exist. You can ask AI what you're not paying attention to that you should be - what would make the task easier or solve the problem better.
In language, creative thinking and contextual thinking are equally important for understanding the language we use, what it means, and whether we are communicating effectively. Truly creative writing often contains lots of low-probability words and phrases that make our individual language unique; it sets us apart in terms of writing style. When people use generative AI to create text, unless that tool has been specifically trained or prompted on an author's unique style, it tends to write high-probability language: boring, predictable, and bland.
Learning is an area where generative AI can be one of our most powerful allies, if we approach it as a thought partner rather than a thought doer. We can give generative AI a piece of text, a lesson plan, or a textbook, and ask it to reframe and rephrase the material into a format we can understand. For example, perhaps you don't know anything about quantum superposition, but you are exceptional at music composition. With a single prompt like this:
Explain the concept of quantum superposition in the context of music composition, especially the music of Taylor Swift.
You could get this:
In the context of music, superposition can be understood as the capacity for a piece of music - be it a lyric, a melody, or a chord progression - to hold multiple potential meanings or emotional states for the listener. Just as a quantum particle exists in a combination of states, a musical element can evoke a spectrum of feelings and interpretations at the same time.
The key here, particularly for educators, is to understand that generative AI can take any topic we want to teach and reformat it into the subject areas and domains of knowledge our students already have and care about. You could recast an entire sixth-grade math curriculum around Minecraft, and students would understand each lesson deeply AND get a chance to put it into action immediately.
For social cognition, one of the best ways to teach that generative AI isn't real is to have people watch the technology work. In the same way that a magic trick loses some of its luster when you know how it's done, watching generative AI respond, create tokens, and do probability selection in real time undercuts the anthropomorphizing we often do with generative AI. We see that it is just a prediction engine and not a real person, no matter how convincing its words might be.
For executive function, the best thing we can do is have generative AI not make decisions for us, but give us options to evaluate. The more we have AI come up with things for us to review, decide on, and guide - and the less we completely delegate - the more we preserve our executive function to plan, organize, decide, and solve.
Instead of saying, "Write me a marketing strategy for my influencer marketing for the next quarter," we might instead say, "Devise three to seven different strategies for influencer marketing appropriate for our business for the next quarter. Show me the pros and cons of each strategy so that I can evaluate them and decide which one makes the most sense for us." In this way we preserve our executive function rather than delegating it. We are still making decisions; we still have to plan, organize, decide, and solve based on the information AI gives us, in the same way that a junior employee would present that information to us and wait for us to make a decision.
Finally, when it comes to creativity, one of the best things we can do is look at the skills AI gives us access to in domains where we don't have expertise. Maybe you're a terrific painter, but you've never played a musical instrument. Maybe you're a pianist, but you've never picked up a piece of charcoal. The more we explore new domains we don't have the skills for - but can think creatively in, based on our existing knowledge - the more we can embrace lateral thinking: taking our expertise from one domain and applying it to another.
In the context of education, you might say to a student, "Hey, take a winning concept from Apex Legends - how you won a 50-person battleground - and apply it to this lesson on social studies." Do you see how that concept - and you could use AI to assist you in understanding it - could dramatically help someone learn more creatively?
As with learning, generative AI can be a fantastic thought partner here. It can explain the nuances of a domain we don't understand in the context of a domain we do, and then generate results. More importantly, it allows us to communicate with other people in ways that fit how their brains work. We may be creative in our domain, but struggle to explain that creativity outside it.
Part 5: Wrapping Up
The part that people donāt like to say out loud, particularly when it comes to education, is that for the most part, most education systems in most countries are following an education system designed for the previous century.
Where I live, in the USA, the education system is an artifact of the 1920s and 1930s, funded and influenced heavily by Rockefeller, Carnegie, Ford, and Mellon, the robber barons of the era. The so-called modern education system was designed to create competent, obedient factory workers. Thatās the ideal end state product.
The batches of product are called grade levels. The QA for the product is called standardized testing. And we move product from grade to grade as though it was product on assembly lines, trying to reduce defects and getting everything to a standard level of quality.
The workers who were the product of that system did well in manufacturing factories and were able to succeed economically and socially. But as time went on, and factory jobs moved overseas or became automated, and knowledge work exploded, and the internet arrived, the education system itself didnāt adapt to the new reality.
Today we still teach as though we were making obedient factory workers - but the factory jobs are gone, and have been for a long time. Some factory trades, like tool and die making, barely exist in the USA anymore. These cracks in the system have been there for a while, but generative AI, as itās doing with everything else, is accelerating the strain and highlighting that the system isnāt working to create the workers and students of tomorrow.
When ChatGPT can write a better five-part essay than a human student can, the question should be less whether using ChatGPT is cheating and more whether we should be teaching the five-part essay at all.
There are places that are doing this well. The local university near me, in its senior seminar on psychology, does something really innovative. The professor puts students in groups of five. One student uses a generative AI tool like ChatGPT to generate a term paper. And then the class exercise is for the other four students in the group to critique the term paper and understand what the generative AI system did well, what it did poorly, what it missed, and what was excessive.
You can see how that exercise employs creative thinking, critical thinking, and contextual thinking by the students to quality check the work the machine is doing and understand where the machine goes wrong. Itās teaching management skills. In todayās AI-forward world, we are all managers. Some of us are managing people, but all of us are managing machines.
Iāll conclude by saying that this is a topic that youāre not going to solve in one newsletter or even in one year. And itās interwoven with a lot of complexity - if you stick with the current manufacturing model of education, then AI endangers the jobs of many teachers because machines can do those same manufacturing jobs (manufacturing obedient factory workers), but better, faster and cheaper. If we want to value educators appropriately, we have to change the education system itself to adapt for todayās world and tomorrowās world and not the world of yesterday.
Itās incumbent upon all of us to decide what education looks like now and going forward, leaving behind structures and systems built for a world that no longer exists. Thatās ultimately how we avoid cognitive deskilling - not by mandating that we retain skills tuned for yesterdayās world, but by using our very human capabilities in partnership with our machines to address the challenges of today and tomorrow.
How Was This Issue?
Rate this weekās newsletter issue with a single click/tap. Your feedback over time helps me figure out what content to create for you.
Hereās The Unsubscribe
It took me a while to find a convenient way to link it up, but hereās how to get to the unsubscribe.

If you donāt see anything, hereās the text link to copy and paste:
https://almosttimely.substack.com/action/disable_email
Share With a Friend or Colleague
If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:
https://www.christopherspenn.com/newsletter
For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.
Advertisement: London 2025 Event On 31 October
THERE ARE ONLY 2 SEATS LEFT
Are you tired of the generative AI hype? As a marketer, youāre likely facing pressure to implement AI, but find that most training focuses on chatbot tricks rather than real-world strategy. Itās a common frustration: a lot of noise, but very little signal on how to drive measurable business resultsāespecially for the complex challenges we face in B2B marketing.
To help you address this, Iām running a SMALL, full-day, in-person workshop in London on 31 October: Generative AI for B2B Marketing Leaders. This isnāt a theoretical lecture. Itās a hands-on strategy session where we will move beyond the hype and build practical solutions to your most pressing challenges, from accelerating your content pipeline to generating deeper market intelligence.
This is a ālearn-by-doingā event. Youāll need your laptop, as weāll be working through exercises together. To ensure you can participate fully without confidentiality risks, we provide a complete set of safe, synthetic business data. You will leave with a personal ācookbookā of prompts youāve tested yourself, a copy of my book Almost Timeless, and a clear, actionable plan to implement back at the office.
Seats are strictly limited to 25 people to ensure a high-quality, interactive experience customized to you, answering your specific burning questions about making generative AI work for you. (If I get more than 25 registrations, weāll open a waitlist.)
š See the entire agenda and register here.
Canāt see a working link? Copy and paste:
https://cspenn.gumroad.com/l/genai-uk-workshop
I hope to see you in London. If you canāt attend, or youāre not the right person for this email, please forward it to the right colleague - you have my thanks in advance.
ICYMI: In Case You Missed It
Hereās content from the last week in case things fell through the cracks:
Why Your AI Is Probably Smarter Than You Think (And What It Means for Your Work)
Vibe Coding: How to Avoid Over-Engineering and Build Smarter, Not Harder
How Generative AI Silently Devalues Female Voices: A Disturbing Case Study
How AI Is Transforming the Job Market: Whoās Losing and Whoās Gaining
Build on Land People Trust: Why āDonāt Build on Rented Landā Still Rules in the Age of AI
Almost Timely News: šļø Using AI for Analytics (2025-09-21)
In-Ear Insights: Do Awards Still Matter in Marketing and PR?
On The Tubes
Hereās what debuted on my YouTube channel this week:
Skill Up With Classes
These are just a few of the classes I have available over at the Trust Insights website that you can take.
Premium
Free
New! Never Think Alone: How AI Has Changed Marketing Forever (AMA 2025)
Powering Up Your LinkedIn Profile (For Job Hunters) 2023 Edition
Building the Data-Driven, AI-Powered Customer Journey for Retail and Ecommerce, 2024 Edition
The Marketing Singularity: How Generative AI Means the End of Marketing As We Knew It
Advertisement: New AI Book!
In Almost Timeless, generative AI expert Christopher Penn provides the definitive playbook. Drawing on 18 months of in-the-trenches work and insights from thousands of real-world questions, Penn distills the noise into 48 foundational principlesādurable mental models that give you a more permanent, strategic understanding of this transformative technology.
In this book, you will learn to:
Master the Machine: Finally understand why AI acts like a ābrilliant but forgetful internā and turn its quirks into your greatest strength.
Deploy the Playbook: Move from theory to practice with frameworks for driving real, measurable business value with AI.
Secure Your Human Advantage: Discover why your creativity, judgment, and ethics are more valuable than everāand how to leverage them to win.
Stop feeling overwhelmed. Start leading with confidence. By the time you finish Almost Timeless, you wonāt just know what to do; you will understand why you are doing it. And in an age of constant change, that understanding is the only real competitive advantage.
š Order your copy of Almost Timeless: 48 Foundation Principles of Generative AI today!
Get Back to Work
Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If youāre looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
As a side note, one thing Iāve noticed as I put together the jobs list each week is how many job descriptions now mention AI somewhere, even for non-technical roles, and especially for roles that were previously just analytics and are now analytics and AI, data and AI, et cetera. Itās an anecdotal observation, but itās worth paying attention to.
Advertisement: New AI Strategy Course
Almost every AI course is the same, conceptually. They show you how to prompt, how to set things up - the cooking equivalents of how to use a blender or how to cook a dish. These are foundation skills, and while theyāre good and important, you know whatās missing from all of them? How to run a restaurant successfully. Thatās the big miss. Weāre so focused on the how that we completely lose sight of the why and the what.
This is why our new course, the AI-Ready Strategist, is different. Itās not a collection of prompting techniques or a set of recipes; itās about why we do things with AI. AI strategy has nothing to do with prompting or the shiny object of the day ā it has everything to do with extracting value from AI and avoiding preventable disasters. This course is for everyone in a decision-making capacity because it answers the questions almost every AI hype artist ignores: Why are you even considering AI in the first place? What will you do with it? If your AI strategy is the equivalent of obsessing over blenders while your steakhouse goes out of business, this is the course to get you back on course.
How to Stay in Touch
Letās make sure weāre connected in the places it suits you best. Hereās where you can find different content:
My blog - daily videos, blog posts, and podcast episodes
My YouTube channel - daily videos, conference talks, and all things video
My company, Trust Insights - marketing analytics help
My podcast, Marketing over Coffee - weekly episodes of whatās worth noting in marketing
My second podcast, In-Ear Insights - the Trust Insights weekly podcast focused on data and analytics
On Bluesky - random personal stuff and chaos
On LinkedIn - daily videos and news
On Instagram - personal photos and travels
My free Slack discussion forum, Analytics for Marketers - open conversations about marketing and analytics
Listen to my theme song as a new single:
Advertisement: Ukraine šŗš¦ Humanitarian Fund
The war to free Ukraine continues. If youād like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russiaās illegal invasion needs your ongoing support.
š Donate today to the Ukraine Humanitarian Relief Fund Ā»
Events Iāll Be At
Here are the public events where Iām speaking and attending. Say hi if youāre at an event also:
SMPS, Denver, October 2025
Marketing AI Conference, Cleveland, October 2025
MarketingProfs B2B Forum, Boston, November 2025
Social Media Marketing World, Anaheim, April 2026
There are also private events that arenāt open to the public.
If youāre an event organizer, let me help your event shine. Visit my speaking page for more details.
Canāt be at an event? Stop by my private Slack group instead, Analytics for Marketers.
Required Disclosures
Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.
Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.
My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.
Thank You
Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.
See you next week,
Christopher S. Penn



