Almost Timely News: 🗞️ How I Keep Up With Everything in AI (2026-03-08)
I might be a little crazy
The Big Plug
👉 I’ve got a new course! GEO 101 for Marketers.
Content Authenticity Statement
100% of this week’s newsletter content was originated by me, the human, but was cleaned up by Claude Opus 4.6 from my original voice recording. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.
Watch This Newsletter On YouTube 📺
Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »
What’s On My Mind: How I Keep Up With Everything in AI
This week’s newsletter is about something I get asked constantly - usually in the hallway after a talk or in the Q&A at the end of a workshop: how do you stay on top of all of this? How do you know what’s real and what’s hype? How do you actually keep up when the field moves this fast?
The answer is that I treat it like a data problem - because that is what it is. If you have ever worked in data engineering, you know the concept of ETL: Extract, Transform, Load. You pull data from sources (extract), you clean and shape it into something meaningful (transform), and then you put it to work somewhere useful (load). That three-part process maps onto how I manage the constant flood of AI news, research, tool releases, and practitioner discussion that flows through my desk every single day.
Part 1 of this newsletter covers extraction - where I actually get my information and how I have structured those sources to give me signal without burying me in noise. Part 2 is transformation - the mental framework I use to make sense of what I find, so I can quickly judge what deserves my attention and what I can safely ignore. Part 3 is loading - what I actually do with the information, from testing new models to building production tools for clients. And because this newsletter covers a lot of technical ground, I put a glossary up front so you have definitions in hand before you need them.
Part 0: Glossary of Terms
This issue covers a lot of ground across AI research, tooling, and workflows - which means it drops a fair number of technical terms along the way. Not everyone lives and breathes this stuff daily, and even experienced practitioners occasionally hit a term they have seen a dozen times without a clean definition. Here is your cheat sheet, up front.
ETL (Extract, Transform, Load): The classic data engineering workflow: pull data from sources (extract), clean and shape it into something useful (transform), then store or deliver it somewhere you can act on it (load). This newsletter borrows that same three-step structure as a framework for staying current on AI.
Foundation model: A large AI model trained on broad data - text, code, images, or all three - that serves as the base for many different tasks. Think of it as a well-educated generalist. GPT-5.4, Gemini 3.1, Qwen3.5, GLM-5, and Claude Opus 4.6 are all foundation models.
Open-weights model: An AI model whose internal parameters have been publicly released so anyone can download and run it locally. Qwen 3.5 is an example. Contrast with proprietary models, where the weights stay locked inside the company that built them.
Local LLM: A large language model you run on your own hardware rather than through a cloud service. Your data stays on your machine and there are no per-use costs. The trade-off: local hardware limits how large a model you can run.
Context window: The maximum amount of text an AI model can hold in working memory at one time. Everything it reasons about in a single session must fit inside this window. Bigger windows mean the model can handle longer documents or conversations without losing the thread.
Harness: The software framework, interface, and tooling built around an AI model to make it useful for real work. If the model is the engine, the harness is the rest of the car - transmission, steering, safety systems. Most real-world AI value comes from well-built harnesses, not just from better models.
Agentic / AI agents: AI systems that take autonomous actions - browsing the web, writing and running code, calling APIs, reading files - and chain those steps together to complete multi-part tasks with minimal hand-holding. A super-agent coordinates multiple sub-agents or tools to tackle complex workflows.
RAG (Retrieval-Augmented Generation): A technique that gives an AI model access to an external knowledge base at response time. The model retrieves relevant documents first, then uses that content to ground its answer - accurate responses about your specific data without retraining from scratch.
Scaffolding: The supporting structure built around a task before the detailed work is filled in - file structure, function signatures, placeholder logic. Good scaffolding lets smaller, cheaper models do solid work because much of the thinking has already been done.
Proof of concept (POC): A quick prototype built to test feasibility - not polished, not production-ready, just functional enough to answer “can we actually do this?” POCs validate an approach before you commit real resources to it.
Hallucination: When an AI model confidently generates something factually wrong or simply made up - an inherent property of how these models work, not a random glitch. Tasks requiring net-new content carry higher hallucination risk than tasks where you have already provided the source material.
Preprint: An academic paper posted publicly - typically on arXiv - before peer review. Preprints let researchers share findings fast, which matters enormously in a field moving as quickly as AI. Treat them as promising leads, not settled science.
arXiv (arxiv.org): The dominant preprint server for AI and machine learning research. Most major labs post papers here first. It is where you see what is happening in research now, not six months from now.
RSS (Really Simple Syndication): A standardized feed format that lets you subscribe to many sites at once and get new content automatically - a practical way to monitor dozens of blogs, news sites, or developer announcements without visiting each one.
Talkwalker / Brand24: Social listening and media monitoring platforms that track mentions, discussions, and news across the web and social channels in near real-time. They can ingest enormous volumes of content and make it available for automated processing.
Discord announcement channels: A Discord feature that lets servers designate specific channels as announcement channels. Other servers can follow them, so official posts appear automatically in your own server - high-signal release announcements without the noise.
Subreddit / Reddit community: A topic-specific forum on Reddit. Communities like r/LocalLLaMA are often the fastest signal for how practitioners actually receive a new model or tool - reactions, use cases, and bugs included.
ICLR, ICML, NeurIPS: The three most prominent academic conferences in machine learning - International Conference on Learning Representations, International Conference on Machine Learning, and Neural Information Processing Systems respectively. Major research from Google, Anthropic, Meta, and Alibaba is timed to these events, so paper volume spikes dramatically around them.
Qwen / Qwen 3.5: A family of open-weights AI models from Alibaba, available in multiple sizes. Smaller versions handle summarization well on modest hardware; larger versions are competitive with frontier proprietary models - a useful illustration of how capable open-weights models have become.
Claude Code: Anthropic’s AI-powered coding assistant and agentic development environment. It writes, edits, and runs code; manages files; and executes multi-step tasks with real autonomy. A prime example of a purpose-built harness.
Deerflow: An agentic super-agent tool from ByteDance - an orchestration system that chains multiple AI actions together to complete complex, multi-step tasks autonomously.
API (Application Programming Interface): A defined interface that lets one piece of software talk to another. When an automated pipeline calls a cloud AI model, it goes through an API. APIs are the connective tissue that makes agentic systems - ones that chain tools and data sources together - possible.
Part 1: Extraction - Where I Get My Information
Staying current on AI is the same challenge as any data problem. You need good inputs before you can produce good outputs. Part 1 is extraction: where do I actually get my information? The answer is a lot of places, because the AI landscape moves fast enough that a single source will always leave you with blind spots.
Discord: One Server, All the Signal
My first stop is my own Discord server - and that distinction matters. I have signed up for dozens of Discord servers from all the major tech companies. If you join every AI vendor’s Discord community, what you get is hundreds or thousands of notifications an hour. That is not useful. It is just noise.
The smarter move is to use Discord’s announcement channel follow feature. Almost every major AI vendor - Anthropic, Google DeepMind, OpenAI, Mistral, and others - runs a Discord server with a dedicated announcements channel. Discord lets you follow those channels by adding them to your own server. So I set up my own server, followed all the announcement channels that matter, and now there is one centralized place where high-priority announcements land. My server’s news channel notifies me. Everything else stays muted.
This is the difference between letting information find you versus drowning in it. One server, one channel, the stuff that matters.
Reddit: Announcements Plus the Community Reaction
Reddit is my second daily stop, and it is valuable for a reason most people miss: you get not just the announcements, but the community’s reaction to the announcements.
When something happens, especially in AI, the community will have a conversation about it. And very often, it is not the company itself revealing the news in the subreddits you follow - it is other community members.
That second layer is often more useful than the announcement itself. When a new local model drops, r/LocalLLaMA lights up with people testing it in real conditions, surfacing edge cases, and debating whether the benchmarks hold up under practical use. That qualitative reaction tells you things a press release never will.
The communities I check most often include r/LocalLLaMA for the open-weights model ecosystem, r/unsloth for fine-tuning and optimization, and the vendor-specific subreddits like r/ClaudeAI and r/GoogleBard - the latter still carries the original name from before Google rebranded to Gemini, but the community is active and useful. Each community has its own character and its own signal-to-noise ratio. Taken together, they give me a cross-section of how the practitioner community is actually using and evaluating new developments.
And those evaluations are usually brutally honest, which is even more helpful: you get past the corporate boasting and into the real world.
YouTube: The Big Announcements
YouTube is not a daily source for me the way Reddit is, but it is the right channel for high-profile announcements and longer-form technical content. Subscribing to the official channels of the major AI providers and a handful of key technical personalities means YouTube notifications tell me when something significant is happening.
Live streams for product launches, recorded conference sessions, technical walkthroughs - this is where the big moments get broadcast, and YouTube’s notification system does a reasonable job of surfacing them.
Conferences in particular are where YouTube is especially useful, because many conferences publish all their sessions. PositConf publishes everything; Microsoft Ignite, Google I/O, and Apple’s WWDC each publish a ton of sessions. That is content you can download from YouTube and analyze in a tool like NotebookLM. Super valuable.
Social Media: Following the People Embedded in the Technology
Social media - across Threads, X, and other platforms - has become genuinely useful for following people who are not just adjacent to the technology but actually inside it.
Whether or not I like X - or its crazy owner - the unfortunate reality is that a ton of AI practitioners are there. So if you want to stay on top of what’s happening in AI, you have to at least lurk on X.
The entities I pay attention to most? All the big tech companies around the world - Cohere in Canada, Alibaba in China, Black Forest Labs in Germany, and so many others. In terms of individuals, Logan Kilpatrick at Google and Boris Cherny at Anthropic, where he leads Claude Code. Boris in particular, as the head of Claude Code, shares a ton of behind the scenes stuff about what they’re working on at Anthropic and how they use their own tools internally, which is about the most valuable information possible.
The principle generalizes. Find the people who are embedded in the technical side of the organizations building this technology, and follow them wherever they are most active. Their casual observations frequently contain more signal than formal press releases.
RSS + Talkwalker + Brand24: The Automated Monitoring System
This is where my setup moves from manual curation into something more systematic. For Trust Insights, I built an automated monitoring system using Talkwalker and Brand24 as the data ingestion layer. These companies have crawlers, scrapers, and API connections I simply cannot replicate - I don’t have the compute to build my own - so I use theirs, feeding into an RSS reader that pulls from both platforms hourly.
The raw volume of AI news, discussion, and commentary that flows through these systems every hour is enormous - roughly five hundred articles an hour, far too much to read manually. So instead of reading it, I process it with custom Python code I built a while back: it grabs everything from the RSS feeds, loads it into a SQLite database, and then a small, lightweight language model - OpenAI’s gpt-oss-20B - analyzes and scores each item by relevance and significance. The scored data goes back into the database, and I review the top items frequently.
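For the curious, the skeleton of a pipeline like this is not complicated. Here is a minimal sketch: the feed parsing and SQLite storage are real, but `score_item` is a stand-in keyword heuristic where the actual system calls a small local model with a scoring prompt, and the schema and scoring scale are illustrative, not the production setup.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of the monitoring pipeline: RSS items -> SQLite -> score -> review.
# score_item() is a placeholder for the call to a small local model
# (e.g. gpt-oss-20B with a relevance-scoring prompt).

def parse_rss(xml_text):
    """Yield (title, link, description) tuples from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield (
            item.findtext("title", ""),
            item.findtext("link", ""),
            item.findtext("description", ""),
        )

def score_item(title, description):
    """Placeholder relevance scorer, 0-10. The real version is an LLM call."""
    keywords = ("model", "agent", "release", "benchmark")
    text = (title + " " + description).lower()
    return min(10, sum(3 for k in keywords if k in text))

def ingest(conn, xml_text):
    """Store each feed item with its score; the link acts as a dedupe key."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS items
           (link TEXT PRIMARY KEY, title TEXT, score INTEGER)"""
    )
    for title, link, desc in parse_rss(xml_text):
        conn.execute(
            "INSERT OR IGNORE INTO items VALUES (?, ?, ?)",
            (link, title, score_item(title, desc)),
        )
    conn.commit()

def top_items(conn, n=10):
    """The review step: highest-scoring items first."""
    return conn.execute(
        "SELECT title, score FROM items ORDER BY score DESC LIMIT ?", (n,)
    ).fetchall()
```

Swap the heuristic for a model call and point `ingest` at your feed reader’s export, and you have the bones of the same system.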
This same system also feeds the weekly automated AI newsletter we produce at Trust Insights. The monitoring pipeline that keeps me informed is the same one that generates publication-ready content for our audience. One infrastructure, two uses.
The practical lesson here: staying current at scale requires automation. Reading everything manually is not a strategy. It is a bottleneck. You will drive yourself insane trying to read everything.
Slack Communities: Practitioner Networks
Several Slack communities round out my social monitoring. The Analytics for Marketers Slack, the Content Marketing Institute Slack, and the SmarterX Slack are all places where practitioners share what they are working on, what they are seeing, and what is breaking. The conversations in these communities tend to be more operational than strategic - people working through real implementation problems, not theorizing about the future.
I see things less frequently in these spaces than on Reddit or in my automated pipeline, but when something significant surfaces here, it tends to have already been stress-tested by people who have actually tried to use it - or at least vetted by someone with a working BS meter - and the discussions tend to be richer.
Hugging Face: Open Weights Models
The next place I spend a lot of time is Hugging Face, a repository for open-weights AI models. There are orders of magnitude more open-weights models than closed-weights models. Closed-weights models are ones like GPT-5.4 from OpenAI: you can use that model through a service like ChatGPT, but you cannot download it.
A model like Qwen 3.5 from Alibaba, on the other hand, is open. You can download it, and if you have hardware capable of running it, you can run the model yourself completely free beyond the cost of electricity.
Open-weights models are, to me, where AI is going. Hugging Face lets you see what new models have been posted - about 10,000 a day, give or take - and the big shops in the open-weights space, like Cohere, Mistral, Alibaba, and ByteDance, all publish their models there.
So one of my first stops when I’m doing any kind of AI news roundup is to see what has been posted recently, particularly models that are getting a lot of community attention.
LinkedIn: Good for Discussion, Not for News
LinkedIn deserves a specific callout because people often assume it is a good news source for the industry. It is not for me. LinkedIn’s algorithm surfaces what is relevant to you - which means it heavily weights content that has already gotten engagement. By the time something is trending on LinkedIn, it is old news in AI terms.
When it comes to AI specifically, I care about recency far more than relevance. The field moves fast enough that a two-day-old announcement can already be superseded. LinkedIn’s relevance-weighted feed is structurally misaligned with that need.
That said, LinkedIn is a genuinely good place for discussion and analysis. When something important has already happened, the LinkedIn conversation around it is often rich and substantive. I use it for that purpose - analysis and perspective, not breaking news.
arXiv: The Academic Research Layer
A few times a month, I check arXiv - arxiv.org - which is where academic preprints from AI researchers get published. These are papers submitted but not yet formally peer-reviewed, so you are getting results ahead of the official publication timeline. For the AI field, this matters enormously, because the gap between preprint and publication can be months or longer. And let’s face it, in AI, months between a preprint and a final paper may as well be centuries.
My filtering approach is straightforward: I look for email addresses from the major labs in the author affiliations. Papers authored by researchers at Google, Anthropic, Meta, Alibaba, DeepSeek, LinkedIn, and similar organizations are worth my attention - those are typically the labs publishing the most significant results. I am not reading every paper - there are thousands - but the lab affiliation filter cuts the noise dramatically.
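The affiliation filter itself is a few lines of code. Here is a hedged sketch: the paper records are hypothetical dicts, and in practice you would populate them from the arXiv API or scraped listings, noting that affiliations often have to be pulled from the paper itself rather than the feed metadata.

```python
# Sketch of the lab-affiliation filter. The "affiliations" field on each
# paper record is an assumption - arXiv feed metadata rarely carries it,
# so a real pipeline has to extract it from the PDF or author pages.

MAJOR_LABS = ("google", "anthropic", "meta", "alibaba", "deepseek", "linkedin")

def from_major_lab(paper):
    """True if any author affiliation mentions a major lab."""
    affiliations = " ".join(paper.get("affiliations", [])).lower()
    return any(lab in affiliations for lab in MAJOR_LABS)

def filter_papers(papers):
    """Keep only papers with at least one major-lab author."""
    return [p for p in papers if from_major_lab(p)]
```

The lab list is the part worth maintaining by hand; everything else is plumbing.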
arXiv is an occasional review for me rather than a daily one, and it is more of a longer-range early warning system than a source of immediately actionable news. When a technique starts showing up repeatedly in arXiv preprints from the major labs, you can often predict where the field is heading three to six months out.
Major Conferences: The Annual Paper Floods
Those arXiv submission spikes I mentioned have a pattern: they cluster around conferences. Several times a year, the major AI research conferences - ICLR, ICML, and NeurIPS are the big three - become the most information-dense moments in the calendar. These events coincide with the release of thousands of papers simultaneously, as researchers time their work to align with the conference schedule.
I treat conference season as a dedicated research sprint. The volume is too high to absorb casually, so I specifically carve out time to review the most significant papers and track what themes are emerging across submissions. The aggregate picture across a conference’s paper slate often tells you more about where the field is headed than any individual paper.
That said, I’ll often download and put all the papers from a conference into systems like NotebookLM to review at scale. I have some custom code to download the thousands of papers at a time.
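The bulk-download step is simple if the papers live on arXiv. Here is a minimal sketch under that assumption - the ID list and output directory are hypothetical, and my actual code is more involved:

```python
import time
import urllib.request
from pathlib import Path

# Sketch of bulk-downloading conference papers (assumed to be arXiv
# preprints) for review at scale in a tool like NotebookLM.

def pdf_url(arxiv_id):
    """Build the canonical arXiv PDF URL for a paper ID."""
    return f"https://arxiv.org/pdf/{arxiv_id}"

def download_papers(arxiv_ids, out_dir="papers"):
    Path(out_dir).mkdir(exist_ok=True)
    for arxiv_id in arxiv_ids:
        target = Path(out_dir) / f"{arxiv_id.replace('/', '_')}.pdf"
        if target.exists():  # resume cleanly on re-runs
            continue
        urllib.request.urlretrieve(pdf_url(arxiv_id), target)
        time.sleep(3)  # be polite to arXiv's servers between requests
```

For thousands of papers the sleep matters more than the download logic; hammering arXiv will get you rate-limited.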
An example of a significant paper at NeurIPS last November? The Alibaba Qwen team released a paper showing how a single mathematical change to an AI model dramatically increases its performance at no additional compute cost. That paper won a gold medal at the conference, and it makes you wonder: why was that discovery made only now? Why didn’t we know this to begin with? That is the kind of thing I look for.
Developer and Tech Blogs: Automated via RSS
Finally, the official developer blogs and technical blogs of the major AI providers all run through my RSS setup. When Google, Anthropic, OpenAI, Mistral, or any of the other significant players publish a technical post, it lands in my feed automatically. This is fully automated - I set up the RSS subscriptions once and the content comes to me.
Developer blogs sit at a different layer than press releases or social media announcements. They tend to be more technically detailed, written for practitioners, and more honest about the specifics of what has actually changed in a model or platform. For understanding the mechanics of what providers are building and why, these posts are essential reading.
Taken together, these sources give me something no single source could: breadth across platforms, depth through the academic and technical layers, community reaction in near real-time, and automation to handle the volume. The system is not elegant in a minimalist sense - it is a lot of sources. But AI development happens across all these channels simultaneously, and missing any one of them means missing something real. The channels are also uneven: sometimes one set will catch something the others miss entirely. Broad coverage captures the space as a whole.
This is the extraction phase of the ETL pipeline in action: an active, designed system for pulling information from the right places at the right frequency. Once that information is in hand, the next step is figuring out what it all means - which is what the transformation phase is about.
Part 2: Transformation - Making Sense of What I Find
Once all that raw data lands in my lap, the next question is obvious: what do I do with it? Information without a framework is just noise. And given how fast AI moves, noise at scale will bury you.
The answer is what I think of as the transformation phase: taking everything I have gathered and running it through a mental framework that lets me quickly categorize, prioritize, and understand what I am looking at. This framework has three parts, and I think about all of AI through this lens.
Every piece of AI news I encounter, I ask myself one question first: is this a new model, a new harness, or a new application? That single question cuts through enormous amounts of noise very quickly. Let me explain what each of those means.
Imagine this is like a car. Cars have engines, cars have bodies and steering wheels and seats and air conditioning, and you drive your car places. That’s mostly the reason we have cars. In AI speak, a model is the engine of the car. A harness is the rest of the car, and the application is where we go with the car.
Models - The Engines
Models are the foundation. They are the actual AI systems - the large language models, the image generators, the audio and video generators - that power everything else. When Anthropic releases a new Claude, when Google drops a new Gemini, when Alibaba puts out a new version of Qwen, that is a model update.
I pay attention to models. Models are getting harder and harder to differentiate based on performance. The top-tier models are converging. After a certain point, it is genuinely difficult to tell them apart in everyday tasks because they are all very, very good. That is actually a sign of a maturing technology landscape, and it means the decision of which model to use for a given task has gotten more nuanced.
Not every task needs the biggest, most expensive, most powerful model available. The size of the model you need depends heavily on what you are asking it to do and how much data you are feeding it.
Take summarization. If I am summarizing a document, I do not need a top-tier frontier model. I can use something like Qwen 3.5 9B - a model so small it can run on some mobile phones - and get perfectly good results. Why? Because summarization is a transformation task. The source material is all right there. The model’s job is to compress and reorganize information that already exists, not to conjure new information from scratch. That is not a heavy cognitive lift, computationally speaking.
The general rule: the more data you are providing, the smaller the model you can get away with. You are applying transformations on the data rather than having it generate net-new content. Small models handle transformation well. That means paying attention to places like Hugging Face that release lots of small models is important so you can see how quickly small models are advancing.
Flip that around. When you have to create something net-new - something the model has to generate largely from its own knowledge - and the risk of getting it wrong is high, that is when you need the biggest, most capable models available. The classic example is writing code. If you are creating code net-new from a spec, you want a very smart model. The model has to reason through architecture, anticipate failure modes, and produce something that actually works without much scaffolding to lean on. Cheap out on the model here and you pay for it in debugging time.
But here is where it gets interesting. If you have already written out the scaffolding - if the structure of the code is established, the quality checks are in place, and you are asking the model to fill in the implementation details - you can use something much smaller. A model like Qwen 3.5 35B can handle that work because it does not have to guess very much. A lot of the hard decisions are already made. It will take more iterations than an expensive model would, but if each iteration costs orders of magnitude less, you come out ahead on both time and money.
This is how I think about model selection. Not “which model is best” in some abstract ranking, but “what is this task actually asking the model to do, and how much am I providing versus how much is it generating?”
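That heuristic can be made concrete. Here is a toy encoding of it - the tier names and the threshold are illustrative, not a real routing table, and any production router would weigh cost and latency too:

```python
# Toy version of the model-selection heuristic: the more source material
# you supply relative to what the model must generate, the smaller the
# model you can get away with. Tiers and the 4x threshold are assumptions.

def pick_model_tier(task_is_generative, source_chars, output_chars_expected):
    """Return a rough model tier for a task.

    task_is_generative: True if the model must create net-new content
    (e.g. code from a spec) rather than transform supplied material.
    """
    if not task_is_generative:
        # Transformation (summarizing, reformatting): small models do fine.
        return "small"
    if source_chars > 4 * output_chars_expected:
        # Heavy scaffolding already provided: a mid-size model fills in.
        return "mid"
    # Net-new, high-stakes generation: reach for a frontier model.
    return "frontier"
```

Summarizing a long report routes to "small"; filling in implementation details under existing scaffolding routes to "mid"; writing code from a bare spec routes to "frontier" - exactly the trade-offs described above.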
I also watch the video and audio model space closely, though I categorize that separately in my thinking because it is still developing in ways the text model space has already moved past. The north star for video generation - the capability that would signal true maturity - is generating a feature-length film. We are not there yet, not even close. Right now most tools can generate somewhere between eight and thirty seconds of video at a time. That is actually more useful than it sounds: if you count the number of jump cuts in a film or a commercial, you will find that some shots last two seconds or less. We can work with short clips.
The real challenge is not duration. It is consistency - maintaining the same characters, lighting, and visual style from shot to shot. That is what breaks the illusion. Some tools have started addressing this with reference image support, which helps lock in visual continuity across cuts. It is not solved, but it is getting better in ways that matter. I watch this space specifically for breakthroughs in consistency, because that is when video generation becomes practically useful for real production work.
Harnesses - The Rest of the Car
This is where I put the most weight right now - by a significant margin.
Think of it this way: the model is the engine. The harness is the rest of the car. The engine matters - a bad engine makes for a bad car - but the engine alone gets you nowhere. Nobody drives down the street sitting on an engine block. You drive down the street in a car. You need steering, suspension, transmission, brakes. You need all the systems that translate raw power into controlled, directed motion. That is the harness.
In AI terms, the harness is everything built around the model: the interfaces, frameworks, orchestration layers, tools, and scaffolding that determine how you actually interact with the model and what you can get it to do. The harness is what makes the difference between a capable model and a capable system.
Here is why this matters so much right now. We are in a moment where the models themselves are very good - good enough for most practical purposes - but the harnesses are where active, rapid innovation is happening. For a lot of people, the harness they are used to is a chat interface like ChatGPT or Gemini or Claude. And there is nothing wrong with those interfaces. They are terrific at what they do: giving you a conversational companion for talking to your AI models.
That is not where the state of the art in AI is. If you are only using models within chat harnesses, you are leaving a lot of power on the table. A mediocre harness around a great model will underperform a great harness around a merely-good model. I have seen this firsthand repeatedly.
The systems that matter most to me right now: Claude Code and Claude Cowork from Anthropic, Google Antigravity, OpenAI Codex, and OpenClaw. These are agentic environments - harnesses that let models not just answer questions but actually do work. Write code, run it, check the output, fix errors, iterate. And it is not just coding. As you know if you read the January issue, I have written a trashy romance novel with these tools. I have written corporate strategy with these tools. The agentic harness is that powerful, and it is only going to get more capable as time goes on, even this year.
The difference between using a raw model via a chat interface and using one of these harnesses is roughly analogous to the difference between having a conversation with a contractor and actually watching them build the thing.
Anthropic in particular is making significant strides with Claude Code and Claude Cowork. The pace of capability development in those tools is fast enough that I am paying closer attention to harness releases right now than I am to raw model releases. When Claude Cowork launched, it required a complete shift in how I thought about certain workflows. Same with Google Antigravity, same with Codex. These are the moments that change what is actually possible in practice.
When a harness update drops, my first question is: how does this compare to what I am already using? I have existing harnesses I can benchmark against - Claude Code, Qwen Code, and others. I run new entrants through the same tests and see how they stack up. The bar is moving fast, which is part of why I watch this category so closely.
Applications - Where You Drive
The third category is applications: the actual use cases, the practical problems being solved, the things the engine and harness are being pointed at.
I use AI extensively to discover and generate application ideas. AI is actually quite good at brainstorming ways to apply existing capabilities to business problems. Sometimes you have to remind AI that it’s capable of those things, but it can do them.
But here is how I think about it in practice: I am not usually looking for net-novel applications. I am looking to map known models and harnesses to known problems and solve those problems better.
Most of the time, the application is not the innovation. The underlying problem is already understood. What has changed is the technology available to address it. The goal is to find the better solution, not to invent a new category of problem.
A useful way to think about it: when I hear about a new agentic application someone has built, my first reaction is not usually admiration. It is more like - I’m pretty sure I can do that too. Then I go test it. That attitude keeps me focused on practical capability rather than getting dazzled by demos.
Putting the Framework to Work
The framework is not complicated, but it is more powerful than it looks. When new information comes in - a press release, a Reddit post, a paper, a product launch - the first thing I do is run it through the classification: model, harness, or application?
Model update: I go test it. I have a set of diagnostics, my own prompts, that tell me how good a model is at specific tasks I actually care about. Generic benchmarks are fine as a starting point, but they do not tell me what I need to know.
Harness update: I benchmark it against what I am already using. How does it compare to Claude Code? What does it do better, what does it do worse, what does it not do at all?
Application: I figure out whether I can implement it with my existing toolkit. If I can, I build it. If I cannot, I figure out what is missing and whether it is worth pursuing.
This three-part classification is genuinely useful because it tells me immediately what kind of attention something deserves and what the appropriate response is. It turns a flood of incoming information into a manageable set of questions, and it keeps me focused on what actually matters: not what is new, but what is useful.
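The triage loop described above can be sketched as a tiny routine. This is purely illustrative: the three categories come from the framework, but the keyword lists and the suggested responses are my own assumptions, not an actual taxonomy the author uses.

```python
# A toy triage function for incoming AI news, mirroring the
# model / harness / application classification described above.
# Keyword matching is deliberately crude; a real version would use
# an LLM or a richer rule set.

RESPONSES = {
    "model": "Run it through my own diagnostic prompts.",
    "harness": "Benchmark it against the harnesses I already use.",
    "application": "Check whether my existing toolkit can replicate it.",
}

def triage(headline: str) -> tuple[str, str]:
    """Classify a news item and return (category, next action)."""
    text = headline.lower()
    if any(k in text for k in ("model", "weights", "benchmark scores")):
        category = "model"
    elif any(k in text for k in ("cli", "agent framework", "harness")):
        category = "harness"
    else:
        # Anything else is treated as a use case to replicate.
        category = "application"
    return category, RESPONSES[category]
```

The point of the sketch is the shape of the decision, not the matching logic: every incoming item maps to exactly one category, and every category maps to exactly one concrete next action.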
Part 3: Loading - Putting Knowledge to Work
All the reading and synthesizing means nothing if you never act on it.
There is a great scene from an old episode of The Flash that I keep coming back to. The villain throws a knife at the Flash. And the Flash, without breaking a sweat, plucks it right out of the air and says, "I can do that too."
That is exactly how I look at major AI announcements. When somebody comes up and says, “We built this new cool agentic application,” my reaction is: I am pretty sure I can do that too. And then I go test it.
My technology folder on my computer is filled with hundreds of prototypes and proofs of concept. Each one is me asking that same question and working out the answer. Not all of them become production tools. That is not the point. The point is developing the capability, understanding the edges of what is possible, and building my own working knowledge of how these systems actually behave when you push them.
Taking a Good Idea and Making It Better
Sometimes the answer is not just “yes, I can do that too” - it is “yes, and I can do it better.”
I was playing around recently with ByteDance's DeerFlow, which is an agentic super-agent, similar in concept to OpenClaw. It is impressive work. But I found a number of things I personally did not like about how it functioned. So I had Claude Code rebuild it to function the way I want it to function. The core idea was sound. The implementation did not match my standards. So I made my own version.
This is the real power of being deeply familiar with the tools. You do not have to accept someone else’s design decisions. If you understand the underlying technology well enough, you can take a good idea that has been poorly implemented and build your own version that actually fits your workflow and your point of view.
Real Infrastructure: The Adobe Analytics QA Suite
Let me give you a concrete example of what this looks like when it moves beyond a prototype into genuine business infrastructure.
One of our clients is an Adobe Analytics shop. If you have ever used Adobe Analytics, you know it is a pain to work with. It is very difficult to export configurations and get a system-level view of what is actually going on inside your implementation, which makes quality assurance a real problem. You can buy commercial QA tools that automate some of the testing - but those tools run into the thousands of dollars a month. That is real money, and money our client probably would not pay for a standalone tool.
So my thinking was: can I build my own QA suite that does this? I know the tools can do it. The question is whether I can implement it well enough to be genuinely useful.
I went through my usual process. I worked out the specs. I built the system. I worked out all the bugs - it took a couple of days. What I ended up with is a piece of technology that automates the testing, audits configurations, and generates structured data as output. There is no AI embedded directly in the QA suite itself. But that output goes into an agentic AI system, which uses it to diagnose problems and even make repairs where the appropriate APIs exist.
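As a rough illustration of the structured-output idea, here is a toy configuration audit. This is not the actual client tool: the config shape, the rule names, and the severity labels are all hypothetical stand-ins, since the real export format is not shown in the newsletter.

```python
# A toy audit in the spirit of the QA suite described above: take an
# exported analytics configuration (modeled here as a plain dict) and
# emit structured findings that a downstream agentic system could
# diagnose and act on.

def audit_config(config: dict) -> list[dict]:
    """Return a list of structured findings for known config smells."""
    findings = []
    for var in config.get("evars", []):
        # Flag variables with no human-readable description.
        if not var.get("description"):
            findings.append({
                "rule": "missing-description",
                "item": var["id"],
                "severity": "warn",
            })
        # Flag variables that are defined but switched off.
        if not var.get("enabled", True):
            findings.append({
                "rule": "disabled-variable",
                "item": var["id"],
                "severity": "info",
            })
    return findings

# A hypothetical exported configuration fragment.
sample = {
    "evars": [
        {"id": "evar1", "description": "Campaign code", "enabled": True},
        {"id": "evar2", "description": "", "enabled": False},
    ]
}
```

The key design choice mirrors the one in the newsletter: the audit itself contains no AI. It just produces machine-readable findings, which is exactly the kind of input an agentic system downstream can reason over.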
Now I have a piece of infrastructure I can offer to any client that runs Adobe Analytics. That is the compounding effect of building your own tools instead of renting someone else's - and it is why the applications layer is where I focus most of my creative energy. If you pay attention to the three-part system, filter news as it comes in, and know what problems you are working on, you know how to slot a given tool or announcement into your system and make that system better immediately.
Part 4: Wrapping Up
The system I have described is, at its core, an ETL pipeline - the same process data engineers use to move information from raw sources into something useful.
Simple to describe. Harder to sustain.
The challenge for anyone who wants to adopt something like this: the cold start problem is real. AI technology moves at a pace that is genuinely unlike any other field I have worked in. If you stop paying attention for more than a couple of weeks, major shifts can happen that require you to completely reorder your thinking about what is possible. I am not exaggerating. When Claude Code came out, it changed how I think about agentic workflows. When Claude Cowork came out, when Google Antigravity came out, when OpenAI Codex came out, when OpenClaw came out - each one of those was a complete mind shift. Not an incremental update. A fundamental rethinking.
So if you only check in once a month, you are basically starting from scratch every single time. You are not building on accumulated knowledge. You are just playing catch-up, never quite getting your footing before the ground shifts again. My practice is to stay current on a daily basis rather than devoting days at the end of the month to catching up on everything - or skimming at a surface level and never really understanding what changed or why it matters.
That daily practice is why I can do what I do.
Ann Handley - one of the sharpest minds in marketing - once told me something back in my analytics days that has stuck with me ever since. “Lots of people know the why and the what. None of those people know the how.” That is my specialty. It has always been my specialty. And it remains my specialty today - how to do the thing.
There are countless people on LinkedIn and at conferences who are navel-gazing, speculating, and using all sorts of expensive words that sound impressive, but when you ask them, "How do I do this thing?" you rarely get a great answer.
As new technological advances come out, I focus on how the technologies actually work and how to apply them in practice. Not the theory. Not the hype. The how. But that requires the two to four hours a day I spend just trying things out - seeing what works, what breaks, what is real and what is marketing. You cannot understand how something works if you only read about it. You have to use it.
Much in the same way, you can't really understand a dish until you cook it and eat it - you can't understand it just from reading the recipe.
This is my system. It is not the system. I am not standing up here telling you that everyone should do it this way. I do it this way because I genuinely love this stuff. Every day I get new tools to explore, new models to test, new problems to see if I can solve faster than I could yesterday. It is like getting new toys every single day. For me, the investment of time is not a burden - it is the point.
That level of commitment is appropriate for me and my work. It may not be appropriate for you.
So here is the closing question worth sitting with: what level of engagement makes sense for your role, your goals, and your interests? Because the Extract, Transform, Load framework does not require two to four hours a day to be useful. If you have thirty minutes a week, you can still apply it. Extract what matters from one or two good sources. Transform it through your own professional lens - what does this mean for your industry, your team, your specific problems? Load it by trying one new thing, however small.
That is your homework from this newsletter. Try one new thing a week, at the model level, the harness level, or the application level. If you do that, I promise your capabilities and your understanding of AI will grow by leaps and bounds.
The framework scales. You do not need to do what I do to benefit from the underlying logic. Start where you are, with the time you actually have, and let the system grow with your curiosity.
How Was This Issue?
Rate this week’s newsletter issue with a single click/tap. Your feedback over time helps me figure out what content to create for you.
Here’s The Unsubscribe
It took me a while to find a convenient way to link it up, but here’s how to get to the unsubscribe.

If you don’t see anything, here’s the text link to copy and paste:
https://almosttimely.substack.com/action/disable_email
Share With a Friend or Colleague
Please share this newsletter with two other people.
Send this URL to your friends/colleagues:
https://www.christopherspenn.com/newsletter
For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.
ICYMI: In Case You Missed It
Here’s content from the last week in case things fell through the cracks:
Jack Dorsey Cuts 4,000 Jobs: AI Is Replacing Workers Faster Than Anyone Expected
Beyond Silicon Valley: 25+ AI Model Makers Shaping the Future of Artificial Intelligence
How to Save Big Money on Claude Code by Switching Models (Without Killing Your Workflow)
Who Owns Your AI Prompts? The Work-for-Hire Clause You Need to Understand
Why AI’s Critical Thinking Skills Are Actually Superior to Yours (If You Know How to Use Them)
Almost Timely News: 🗞️ How to Use Generative AI For Retail Analytics (2026-03-01)
In-Ear Insights: Switching AI Providers, Backup AI Capabilities
On The Tubes
Here’s what debuted on my YouTube channel this week:
Skill Up With Classes
These are just a few of the classes I have available over at the Trust Insights website that you can take.
Premium
Free
👉 New! From Text to Video in Seconds, a session on AI video generation!
Never Think Alone: How AI Has Changed Marketing Forever (AMA 2025)
Powering Up Your LinkedIn Profile (For Job Hunters) 2023 Edition
Building the Data-Driven, AI-Powered Customer Journey for Retail and Ecommerce, 2024 Edition
The Marketing Singularity: How Generative AI Means the End of Marketing As We Knew It
Advertisement: New AI Book!
In Almost Timeless, generative AI expert Christopher Penn provides the definitive playbook. Drawing on 18 months of in-the-trenches work and insights from thousands of real-world questions, Penn distills the noise into 48 foundational principles—durable mental models that give you a more permanent, strategic understanding of this transformative technology.
In this book, you will learn to:
Master the Machine: Finally understand why AI acts like a “brilliant but forgetful intern” and turn its quirks into your greatest strength.
Deploy the Playbook: Move from theory to practice with frameworks for driving real, measurable business value with AI.
Secure Your Human Advantage: Discover why your creativity, judgment, and ethics are more valuable than ever—and how to leverage them to win.
Stop feeling overwhelmed. Start leading with confidence. By the time you finish Almost Timeless, you won’t just know what to do; you will understand why you are doing it. And in an age of constant change, that understanding is the only real competitive advantage.
👉 Order your copy of Almost Timeless: 48 Foundation Principles of Generative AI today!
Get Back To Work!
Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
Advertisement: New GEO 101 Course
When I talk to folks like you, being recommended by AI is one of your top marketing concerns in 2026.
We’ve taken everything we’ve learned from OpenAI’s documentation, Google’s technical papers, patents, sample code, plus our years of experience in generative AI to assemble a high-impact 90-minute course on GEO 101 for Marketers.
In this course, you’ll learn:
The three distinct phases of GEO and how they work
How to optimize for each phase (they’re different!)
How to measure your GEO efforts in a meaningful and valid way
This course is meant to be used. In addition to the course itself, you’ll also receive:
Your 90 day GEO action plan
How to set up Google Analytics for measuring GEO traffic
How to join Google Search Console data with GEO intent data
How to use our free AIView tool to improve your content and site for one of the three phases of GEO
A certificate of completion from TrustInsights.ai
And best of all, this is our most affordable course yet. GEO 101 for Marketers is USD 99 and is available today.
👉 Enroll here in GEO 101 for Marketers!
How to Stay in Touch
Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:
My blog - daily videos, blog posts, and podcast episodes
My YouTube channel - daily videos, conference talks, and all things video
My company, Trust Insights - marketing analytics help
My podcast, Marketing over Coffee - weekly episodes of what’s worth noting in marketing
My second podcast, In-Ear Insights - the Trust Insights weekly podcast focused on data and analytics
On Bluesky - random personal stuff and chaos
On LinkedIn - daily videos and news
On Instagram - personal photos and travels
My free Slack discussion forum, Analytics for Marketers - open conversations about marketing and analytics
Listen to my theme song as a new single:
Advertisement: Ukraine 🇺🇦 Humanitarian Fund
The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs your ongoing support.
👉 Donate today to the Ukraine Humanitarian Relief Fund »
Events I’ll Be At
Here are the public events where I’m speaking and attending. Say hi if you’re at an event also:
SSI, Charlotte, April 2026
The Trust Insights Generative AI Workshop, sometime this spring!
SMPS AI Conference, November 2026
There are also private events that aren’t open to the public.
If you’re an event organizer, let me help your event shine. Visit my speaking page for more details.
Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.
Required Disclosures
Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.
Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.
My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.
Thank You
Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.
Please share this newsletter with two other people.
See you next week,
Christopher S. Penn