Almost Timely News: The Greatest Unaddressed AI Challenge (2023-11-05)
If you're reading this not on LinkedIn, we've successfully moved to Substack. That means that the giant unsubscribe button is no more, replaced by Substack's subscription management.
Content Authenticity Statement
100% of this newsletter was generated by me, the human. No AI was used to generate any part of this issue. Learn why this kind of disclosure is important.
Watch This Newsletter On YouTube 📺
What's On My Mind: The Greatest Unaddressed AI Challenge
Over the past week, a lot has happened in the world of artificial intelligence, particularly around its regulation. We had the US President's executive order as well as the Bletchley Declaration, a general "this is how we want AI to be used" statement signed by 27 nations.
Here's the thing about both these documents, especially the Executive Order. They're largely unenforceable wish lists. They focus on general ideas around AI but have no enforcement mechanism for the private sector, where harms are most likely to occur. And that's understandable; creating those regulations in the USA requires legislative action. The way the US government works, an Executive Order really only applies to the Executive Branch of the government. Anything that legislates the broader citizenry has to come from the Legislative Branch, which is the US Congress.
But beyond the machinations of the political machinery, the reality is that artificial intelligence is moving fast. Very, very fast. So fast that even an Executive Order crafted less than a week ago has some glaring holes in it that will restrict only the most obvious problems. For example, there's a section on misuse of AI that states:
I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to: (iii) Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate. Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating point operations per second for training AI.
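To get a feel for what those thresholds mean in practice, here's a back-of-the-envelope calculation. The assumption — a cluster running flat-out at exactly the order's capacity floor — is mine for illustration; real-world utilization is far lower, so real training runs take longer:

```python
# Back-of-the-envelope reading of the Executive Order's thresholds.
# Assumption (for illustration only): a hypothetical cluster running
# at exactly the EO's capacity floor of 1e20 FLOP/s, with perfect
# utilization. Real clusters run well below theoretical peak.

TRAINING_THRESHOLD_OPS = 1e26          # total operations that trigger the rule
CLUSTER_CAPACITY_OPS_PER_S = 1e20      # theoretical max compute of a covered cluster

seconds = TRAINING_THRESHOLD_OPS / CLUSTER_CAPACITY_OPS_PER_S
days = seconds / 86_400                # 86,400 seconds per day

print(f"{seconds:.0f} s ≈ {days:.1f} days")  # 1000000 s ≈ 11.6 days
```

In other words, the rule is aimed at training runs that occupy an enormous, co-located cluster for weeks — exactly the frontier-lab scenario, and not the commodity-hardware scenario described next.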
This is based on a very specific view of how AI models work, a view that was true a year ago but is no longer true now. A malicious actor isn't going to use a big, publicly hosted model like GPT-4 to do bad things. They're going to use a network of small models that runs on commodity infrastructure - like your laptop - or a distributed network, probably orchestrated with an agent framework like LangChain, AutoGen, or AutoGPT. This has been the case for a few months already.
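To make that architecture concrete, here's a minimal, purely illustrative sketch of the coordination pattern: small, specialized workers chained by a controller, the shape that frameworks like LangChain and AutoGen generalize. The agent names and functions here are hypothetical stubs, not real models:

```python
# Toy sketch of an "agent network": a controller routes a task through
# several small, specialized workers in sequence. In a real framework
# each worker would wrap a small local model; here they are plain
# functions so the example stays self-contained.

from typing import Callable

Agent = Callable[[str], str]

def researcher(task: str) -> str:
    return f"notes on: {task}"

def writer(notes: str) -> str:
    return f"draft based on ({notes})"

def reviewer(draft: str) -> str:
    return f"approved: {draft}"

def run_pipeline(task: str, agents: list[Agent]) -> str:
    result = task
    for agent in agents:   # each agent consumes the previous agent's output
        result = agent(result)
    return result

print(run_pipeline("summarize topic X", [researcher, writer, reviewer]))
```

The point is that no single component in this pattern needs frontier-scale compute; the capability comes from composition, which is why compute thresholds miss it.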
More important, this is attempting to regulate the technology itself. The cat is out of the bag, the toothpaste is out of the tube, the ship has sailed on regulating the technology itself. Even if you were to claw back AI from every big-name provider in the USA, there are thousands of models out there, some produced by sovereign nations - like the United Arab Emirates' Falcon model, GLM-130B from Tsinghua University in China, and many others. There's no turning back the clock on the technology.
That means you have to regulate and moderate the outcomes themselves. On the topic of criminal activity, that's pretty straightforward: fairly and aggressively enforce existing laws. Committing fraud with AI is still committing fraud. Impersonating someone for malicious purposes is still impersonation, whether the impersonator is a well-trained actor or an AI. If you wanted to discourage some of the misuses of AI, add a multiplier on sentencing - if you use AI to do a bad thing, you get punishment plus extra punishment.
For things where there isn't necessarily a crime, but substantial potential misuse - like the ability for language models to de-anonymize people solely based on their language (a paper released a few days ago documents this, but it has not been peer-reviewed yet, so take it with a grain of salt) - that bears watching, monitoring, and independent testing. You still can't regulate the datasets such an ecosystem produces, because merely possessing them isn't necessarily a violation of law (depending on your jurisdiction); what a bad actor does with them is where the law gets broken. For example, if an unethical insurance company were to ingest social media data, de-anonymize it, match it to policyholders, and then discriminate against policyholders based on protected classes, that's when the law gets broken and we can take action. Until then, we have to test things to determine how much harm they could create, and how easy or hard it is to do those things.
These are all things to pay attention to, but the biggest challenge, the biggest problem, isn't one that technology can fix. Technology enables the problem, but we can't use technology to prevent it. It's the blurring of reality. You've almost certainly seen one or more videos produced by companies like HeyGen in which a person's likeness is trained and then used to synthesize that person saying something they've never actually said. HeyGen's capabilities are based on code that's broadly available as open-source software (they've done a really nice job tuning it and putting a friendly user interface on it, but the engine behind it is publicly available).
When people are presented with content now, one of the questions we have to ask is whether or not that content was generated by the party it's attributed to. For example, if you see a video of a politician you agree with - or disagree with - saying something, it's now a valid reaction to ask whether or not that politician actually said that. This is made more complicated by the fact that in our hyper-polarized world, we tend to believe things we agree with, even if they're factually incorrect.
The antidote to this is difficult: critical thinking. Detective work. A willingness to not believe something just because you want to believe it and instead doing a little investigation to find out whether or not it's true. Asking for sources and then following up to validate those sources. Questioning authority. Developing personal networks of trustworthy experts. And most challenging of all, a willingness to update your memory and change your beliefs when presented with proven evidence to the contrary.
For example, you've probably heard that Vitamin C is good for preventing colds, right? That's certainly emphasized enough in commercial messaging. Except... it's not true. For the vast majority of the population, Vitamin C is ineffective at preventing colds, and exerts only a modest effect on cold symptoms once ill.
Does this change your thinking? Does this change your beliefs? Does this change your behaviors? It should, if the issue is important to you. It changed my beliefs and my behaviors: there's no need for me to purchase any kind of Vitamin C supplementation. I can and do still consume citrus fruit, but that's just because I enjoy citrus fruit. I don't need to go out of my way to overload on Vitamin C for this particular use case.
This is a change in how we think. Instead of being passive consumers of content and information, we and everyone we care about need to make the change to being active questioners of content. For parents, think about teaching your kids to emulate characters like spies and detectives, professionals whose job it is to discern truth from falsehood, and verify their findings.
Challenge yourself. Using commonly available tools, take a point that you know to be true and use generative AI to create a convincing alternative, and see how far you get in creating something false (then please responsibly destroy your work, or at least slap giant disclaimers all over it). See how easy or hard it is to manufacture something, because one of the best inoculants against misinformation is seeing how the trick is done. Once you know the magic trick, it loses a good amount of its impact, and it opens your mind to asking questions. Is that really a video of Taylor Swift or Volodymyr Zelenskiy, or did someone else manufacture it? Are there any tells you can look for that would give away whether it was real or fake?
Stopping AI-generated misinformation isn't a technology problem, and it won't have a good technology solution. Today's AI detection tools are often no better than a coin toss and have an alarmingly high false positive rate. No, the issue is a people problem, and that requires a people solution.
For companies, brands, and people, one of the most important things you can do is establish a conduit of authenticity. Make it easy for someone to reach you and validate that you said something or not. Transparency is the currency of trust, so disclose the use of AI wherever you use it so that your customers know you can be trusted to do so - and when something inevitably happens where misinformation is generated purporting to be you, they have a long record of trustworthy interactions with you to help reinforce your claims that the misinformation is not from you. Build a strong community so that you have an army of defenders to help debunk when misinformation about you occurs. And most important, in all your interactions, build a reputation for being trustworthy so that it makes it easier to discern when something is clearly amiss.
The line between fiction and reality gets more blurry by the day with technological tools, but the underpinnings of trust remain the same. There's a lot you can do today to inoculate yourself against misinformation and inoculate your audience as well. We each have to do our part to solve for the people problem that AI technology enables when it makes it easier to blur the line between reality and fiction.
How Was This Issue?
Rate this week's newsletter issue with a single click. Your feedback over time helps me figure out what content to create for you.
Share With a Friend or Colleague
If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:
Or use this button:
For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.
ICYMI: In Case You Missed It
Besides the newly-refreshed Google Analytics 4 course I'm relentlessly promoting (sorry not sorry), I recommend the piece on Google's thinking about AI-generated content if you're doing SEO work.
Skill Up With Classes
These are just a few of the classes I have available over at the Trust Insights website that you can take.
Advertisement: Bring My AI Talk To Your Company
I've been lecturing a lot on large language models and generative AI (think ChatGPT) lately, and inevitably, there's far more material than time permits at a regular conference keynote. There's a lot more value to be unlocked - and that value can be unlocked by bringing me in to speak at your company. In a customized version of my AI keynote talk, delivered either in-person or virtually, we'll cover all the high points of the talk, but specific to your industry, and critically, offer a ton of time to answer your specific questions that you might not feel comfortable asking in a public forum.
Here's what one participant said after a working session at one of the world's biggest consulting firms:
"No kidding, this was the best hour of learning or knowledge-sharing I've had in my years at the Firm. Chris' expertise and context-setting was super-thought provoking and perfectly delivered. I was side-slacking teammates throughout the session to share insights and ideas. Very energizing and highly practical! Thanks so much for putting it together!"
Get Back to Work
Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you're looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
What I'm Reading: Your Stuff
Let's look at the most interesting content from around the web on topics you care about, some of which you might have even written.
Social Media Marketing
Media and Content
SEO, Google, and Paid Media
Advertisement: Business Cameos
If you're familiar with the Cameo system - where people hire well-known folks for short video clips - then you'll totally get Thinkers One. Created by my friend Mitch Joel, Thinkers One lets you connect with the biggest thinkers for short videos on topics you care about. I've got a whole slew of Thinkers One Cameo-style topics for video clips you can use at internal company meetings, events, or even just for yourself. Want me to tell your boss that you need to be paying attention to generative AI right now?
Tools, Machine Learning, and AI
Analytics, Stats, and Data Science
All Things IBM
Dealer's Choice : Random Stuff
How to Stay in Touch
Let's make sure we're connected in the places it suits you best. Here's where you can find different content:
My blog - daily videos, blog posts, and podcast episodes
My YouTube channel - daily videos, conference talks, and all things video
My company, Trust Insights - marketing analytics help
My podcast, Marketing over Coffee - weekly episodes of what's worth noting in marketing
My second podcast, In-Ear Insights - the Trust Insights weekly podcast focused on data and analytics
On Threads - random personal stuff and chaos
On LinkedIn - daily videos and news
On Instagram - personal photos and travels
My free Slack discussion forum, Analytics for Marketers - open conversations about marketing and analytics
Advertisement: Ukraine 🇺🇦 Humanitarian Fund
The war to free Ukraine continues. If you'd like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia's illegal invasion needs our ongoing support.
Events I'll Be At
Here's where I'm speaking and attending. Say hi if you're at an event also:
LPA, Boston, November 2023
Social Media Marketing World, San Diego, February 2024
MAICON, Cleveland, September 2024
Events marked with a physical location may become virtual if conditions and safety warrant it.
If you're an event organizer, let me help your event shine. Visit my speaking page for more details.
Can't be at an event? Stop by my private Slack group instead, Analytics for Marketers.
Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.
Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.
My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.
Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.
See you next week,
Christopher S. Penn