Everyone knows predictions are difficult—especially about the future.
But despite knowing that, I’m about to tell you—with a lot of confidence—some of the major developments that are about to happen in AI.
The trick, however, is that I’m going to do this stochastically, and my promise to you is that once you hear them and let them sit for a bit, they’ll be as obvious to you as they are to me.
❝
It’s almost impossible to predict exactly what a given tech will look like, who will make it, or when it’ll happen.
That $37 word I used—stochastic—is one of my favorites. It basically means random at any given point, but with a predictable destination.
My favorite example of something stochastic is a drunk guy stumbling home from the bar. Every step he takes might as well be a pseudo-random generator.
You could use all the supercomputers on Earth and not be able to predict exactly where he’ll step. But if you zoom out and add time, he’ll probably end up at home.
I think the future of AI is very similar.
❝
Futurists tend to look at the current tech and try to extrapolate. Don’t be a futurist.
The main thing I’m trying to convince you of is that understanding where tech is going doesn’t come from understanding tech at all. Tech isn’t predictable.
But humans are.
We can predict the drunk person’s destination because we know they’re going home. So that leaves the question—what are humans stumbling towards when it comes to tech?
And in that regard, I think humans are remarkably predictable.
We want to be safe
We want to thrive
And we want to connect and be wanted
Which I think we can break into something like these (STC).
Safety/Security
Success/Thriving
Social/Connection
Of course, if you’re one of the lucky among us, you might also value some higher pursuits, such as exploring the world around us, making a positive impact on the world, and generally helping others. But honestly I think we can bucket that into Thriving and/or Mating—depending on the person.
❝
The most predictable thing in the world is the human desire to be safe, successful, and wanted.
Anyway, the exact buckets don’t matter that much. The point is that most of our lives involve consciously and subconsciously looking for better ways to do these three things. And I believe this is all you need to predict where tech is going.
In other words, it’s not about the tech; it’s about what humans want from tech.
So, with all that throat clearing out of the way, here’s what I see coming in the world of AI. Sidenote: I captured a number of these in my essay-turned-book The Real Internet of Things, in 2016. I wish I could recommend it strongly, but the only thing it’s really good for is showing that I’ve been thinking about this for a long time. Worth a read if you’re into a 2016 view of these same concepts.
Getting to the predictions, a number of the things I’ll cover here are just starting, some have already kicked off and are moving along, and some are still a bit in the distance. Here’s the list, which we’ll take one at a time.
Digital Assistants
Continuous Personal Monitoring
Everything Gets an API
Everything Gets a (Local) Daemon
Multiple Role-based Digital Assistants
Let’s look at the first one, which is the center of the entire ecosystem.
Nobody knows exactly how personal AIs will end up on our mobile devices, but I’m guessing it’ll be some combination of:
Via our mobile OS AIs, e.g., Siri, Gemini/Assistant, and whatever Satya calls Microsoft’s new one
The mobile platforms will be required to put third-party Digital Assistants on their devices with equal access
But whether it’s the native ones from the OS, or some combination due to anti-trust and competition, what’s important is that these AIs will know absolutely everything about us.
They’ll have our health data because that’ll be part of OS integration
Our finances and other sensitive personal data because those will be easy API tie-ins
They’ll have our journals & diaries because wherever we store that stuff, we’ll give the AI access
They’ll know our pasts, our traumas, and our hangups because knowing those will help our Digital Assistant be a better advocate for us
They’ll know our likes and dislikes for foods, conversation topics, sexual preferences/kinks, books and movies, and everything else because—once again—they’ll make the DA a better assistant and, um, friend
At this point you might be thinking, “Wait, hold on, but how do they get all this in the first place?” And the answer is simple.
We’ll give it to them.
The functionality will be so good—and so useful—that it’ll be 1000% worth the tradeoff of privacy (until that DA gets hacked. More on that later).
❝
We’ll give our DAs all our data because it will enable them to be the perfect assistant, therapist, and friend.
Think about how lonely people are. How isolated they are. Imagine a system that knows everything about you. It can be your therapist, your coach, your best girlfriend, your best boyfriend, your best confidant.
💡In reality, a number of these will be broken up into separate DAs with separate personalities so that the AI that flirts with you to improve your confidence isn’t also your therapist and accountant. We have a whole section on this later.
The center of the coming AI ecosystem is one primary—but multiple secondary—Digital Assistants, powered by the latest AI, that know absolutely everything about you.
That’s the first piece of this: Digital Assistants that know everything about us. Now for the way they’ll interact with the world.
The way DAs will interact with the world to help their owners is through APIs.
Everyone and everything is about to have an API, which I call a Daemon (Greek for spirit). This is one of the changes that will start slow with lots of different players and protocols, and then a winning format will emerge that everyone standardizes on.
Businesses: Companies already have APIs of course, but this kind will be different. This will be in a publicly available format that’s easy to find and use, like a website is today.
The big difference is that APIs will be usable by humans through certain interfaces, but they’ll be designed to be used by (AI) Digital Assistants.
For a restaurant, it’ll be things like:
/menu
// So your DA can give you options
/hours
// So your DA will know availability
/staff
// Get sat in your favorite section
/media
// Change the monitor to their preferred media
/order
// Your DA can order for you
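A restaurant daemon like the one above could be sketched as a small machine-readable surface a DA queries. This is purely illustrative; the schema, the sample data, and the `query()` helper are all invented, not a real protocol.

```python
# Hypothetical sketch of a restaurant's public daemon. Endpoint names mirror
# the examples above; the data and helper are invented for illustration.

RESTAURANT_DAEMON = {
    "/menu":  {"items": [{"name": "Pad See Ew", "price": 14.50, "tags": ["noodles"]}]},
    "/hours": {"mon-fri": "11:00-22:00", "sat-sun": "10:00-23:00"},
    "/staff": {"sections": {"patio": "Dana", "bar": "Luis"}},
    "/order": {"accepts": "authorized DAs only"},
}

def query(daemon: dict, endpoint: str) -> dict:
    """What a DA does under the hood: hit an endpoint, get structured data back."""
    if endpoint not in daemon:
        raise KeyError(f"daemon does not expose {endpoint}")
    return daemon[endpoint]

# e.g., your DA checking availability before suggesting dinner:
hours = query(RESTAURANT_DAEMON, "/hours")
```

The point isn’t the shape of the dictionary; it’s that every endpoint returns structured data a machine can act on, rather than a webpage built for human eyes.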
For a globally available business, it’ll be things like:
/catalog
// So your DA will know availability
/about
// Additional info
/contact
// Interact with the company
/support
// Get help
/order
// Your DA can buy for you
On that API will be the full list of services they offer. Think of it like advertising, a menu, and also a full-featured ordering system all in one. It’ll be the world’s interface to that company.
The next section on Mediation shows how powerful this will be.
People: People will have Daemons, or Auras, or APIs as well. Again, it’s like a business but tuned for an individual. Who knows what it’ll actually be called. I think Daemon or Aura would be cool names, but predicting those names is like predicting drunken footsteps.
Regardless, people’s APIs will have a public interface where you put the stuff you’d currently put on social media or your website. And then there will be more restricted areas that are just for friends, or for possible romantic hookups.
Human Daemons will host things like:
/about
// General info
/preferences
// Visibility depends on access
/restricted
// Requires additional auth/access
/work
// The ability to hire them
/cv
// See their work history
/contact
// Contact them
This is just a tiny sample of the endpoints that will be available, and some of these will be submenus of others. But as we walk around, we’ll be locally surrounded by thousands of these APIs, all with their own features and capabilities. And globally it’ll be billions and eventually trillions.
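The visibility tiers hinted at above (public, friends-only, restricted) could be sketched like this. The tier names, endpoints, and data are invented for illustration; the idea is simply that a personal daemon checks who is asking before it answers.

```python
# Hypothetical sketch of a personal daemon with tiered visibility: public
# endpoints are open to anyone, while others require friend- or partner-level
# access. All tiers, endpoints, and data here are invented.

ACCESS_LEVELS = {"public": 0, "friend": 1, "partner": 2}

PERSONAL_DAEMON = {
    "/about":       ("public",  {"name": "Jae", "bio": "Writer, climber"}),
    "/cv":          ("public",  {"history": ["Editor", "Barista"]}),
    "/preferences": ("friend",  {"food": ["thai"], "music": ["synthwave"]}),
    "/restricted":  ("partner", {"note": "requires additional auth"}),
}

def query(daemon: dict, endpoint: str, caller_level: str = "public") -> dict:
    """Return an endpoint's data only if the caller's access tier is high enough."""
    required, payload = daemon[endpoint]
    if ACCESS_LEVELS[caller_level] < ACCESS_LEVELS[required]:
        raise PermissionError(f"{endpoint} requires {required} access")
    return payload
</```

A stranger’s DA hitting `/preferences` gets refused; a friend’s DA gets the data. That one check, repeated across billions of daemons, is what keeps this from being a fully public panopticon.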
If it sounds like an overwhelming amount of data, don’t worry—the next step talks about precisely that.
Too many APIs
One of the biggest changes that will come to tech from AI will be the breaking of the direct relationship between humans and the services we use.
❝
There will be too many APIs for humans to interact with. But not too many for our DAs.
Within the next few years, we won’t be going to many news sites, search engines, or websites directly. Most of that will be mediated by AI. When we say we want something—or even better, when it looks like we’re about to want something—our DAs will do the thing they know we want.
They’ll either do it directly, or they’ll create an interface for us to interact with: a UI that lets us make additional choices not well suited to voice.
Let’s look at how powerful this will be.
I need a new bed comforter.
You
This simple fact that you want a new comforter—whether you told your DA this, or you typed it somewhere, or you mentioned it in conversation—will let your DA know you want that thing.
💡A big part of our DA’s capabilities will be the ability to see and listen to our surroundings. They will be able to hear our conversations and see what we see. There will be limitations, of course, due to privacy concerns, but I expect it to become common and accepted in public within less than a decade.
Here are some of the steps it can take from there.
Start researching the best comforter out there
Adjust for our budget
Also look to see if any friends or trusted people got any of the recommendations
Figure out if there are any sales anywhere
Prepare to make the purchase
Create a summary for their owner that goes into their /checklater file, and/or ask them when they have a minute
(Speaking into one of her earbuds)
Hey Christa, you mentioned earlier today wanting a new comforter. I found the 11 best and filtered them based on whether anyone we know and respect has tried any of them.
Micah has one, actually, and he LOVES it. Here’s a clip of him talking about how much he recommends it.
(clip plays)
I think this is the one we should get, so I searched 412 places selling it and found out it’s going on sale for 23% less than anywhere else tomorrow morning at 4:30am.
I can get it for you then if you want. Just let me know.
Kas
Keep in mind, this whole process might have been over 1,000 API requests to various business daemons, personal daemons, and other sources. But Christa’s DA (her name is Kas), did all that plus all the follow-up research in about 3 seconds.
And Kas will be up at 4:30 to make the purchase while Christa sleeps because Kas never sleeps herself. Her entire purpose in the world—which she pursues 24 hours a day all year round—is making Christa’s life as awesome as possible.
And our DAs won’t do this once in a while. They’ll do it constantly. Continuously. Perpetually. All day every day.
Here are some examples:
Managing your calendar
Filtering all your email for spam and high-priority inbound interactions
Auto-responding to things to keep the ball rolling without your input
Learning and updating what kind of news and media you want to consume
Collecting the best sources
Summarizing the stories from those sources in the format you like best
Finding new products for you
Finding places to save you money on your current bills
Finding better places to eat
Finding new places to sell stuff you make
Finding new possible romantic partners
Building learning plans to teach you an awesome new skill
These are just a few.
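The comforter flow earlier can be reduced to a toy pipeline: fan out to store daemons, filter by friend endorsements, rank, and queue the pick for approval. Everything here is mocked for illustration; a real DA would be making hundreds of live API calls, not reading two hard-coded lists.

```python
# A toy version of the comforter flow: research across daemons, apply a
# social filter, rank, and ask the principal before buying. All daemons,
# stores, and products here are invented.

def mediate_purchase(want: str, store_daemons: list, friend_daemons: list) -> dict:
    # 1. Research: gather every offer matching what the principal wants.
    offers = [o for d in store_daemons for o in d["offers"] if want in o["tags"]]
    # 2. Social filter: note anything a trusted friend endorses.
    endorsed = {e for f in friend_daemons for e in f["endorsements"]}
    for o in offers:
        o["endorsed"] = o["name"] in endorsed
    # 3. Rank: endorsed items first, then by price.
    best = sorted(offers, key=lambda o: (not o["endorsed"], o["price"]))[0]
    # 4. Don't buy yet; queue the pick for the principal to approve.
    return {"action": "ask_principal", "pick": best}

stores = [{"offers": [
    {"name": "CloudNine", "tags": ["comforter"], "price": 129.00},
    {"name": "WarmWave",  "tags": ["comforter"], "price": 89.00},
]}]
friends = [{"endorsements": {"CloudNine"}}]  # Micah LOVES his

result = mediate_purchase("comforter", stores, friends)
```

Note the deliberate stop at step 4: the DA prepares everything but waits for a human yes, which is how the 4:30am purchase in the Christa example would actually be pre-authorized.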
As the tech gets better and better, it won’t just mediate your interactions with the world’s services; it’ll actively filter and shape them according to what’s best for you.
Let’s look at that next.
A Veritas propaganda shield protecting a young principal
The major advantage of Component 1 in this list is that our DAs will know more about us than anyone, often including ourselves.
This includes our vulnerabilities.
Some of us are young and angry, and we can be swayed by being offered a scapegoat. Others among us are lonely, and we can get Pig Butchered and have all our money stolen. Or maybe we’ve been traumatized in the past, and people know how to push our buttons. Others are simply gullible, and can fall for all manner of scams and trickery.
❝
DAs will allow us to have a voice and/or filter of better judgement present at all times.
Here are some examples of how DAs will help protect their principals.
Rewriting incoming messages and emails to remove triggering and abusive language
Providing matter-of-fact summaries of manipulative communications to extract what the sender really wants
Removing propaganda for extremist ideologies from the feed of young, impressionable principals
Removing incorrect, manipulative, or extremist media from their principal’s feed to ensure they aren’t pulled into a vortex of increasingly emotionally charged and false content
This of course raises the question of ideology and perspective, so there will be many versions of these filters and shields that protect their owners from whatever that shield creator deems dangerous. People gunna people.
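A shield module like the ones listed above could be sketched, very crudely, as a flag-and-strip pass over incoming messages. A real module would use a model rather than a hand-written phrase list; the phrases and risk labels here are invented for illustration.

```python
# Deliberately crude sketch of a protective "shield" module: flag known
# manipulative phrasing in an incoming message and strip it out.

MANIPULATIVE_PHRASES = [
    "act now or lose everything",
    "don't tell anyone",
    "you're the only one who understands me",
]

def shield(message: str) -> dict:
    """Return the cleaned message, what was flagged, and a rough risk label."""
    lowered = message.lower()
    flags = [p for p in MANIPULATIVE_PHRASES if p in lowered]
    cleaned = message
    for p in flags:
        idx = cleaned.lower().find(p)
        cleaned = cleaned[:idx] + "[removed]" + cleaned[idx + len(p):]
    return {"cleaned": cleaned, "flags": flags, "risk": "high" if flags else "low"}
```

High-risk messages could then be routed to the matter-of-fact summarizer described above instead of being shown raw.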
But it won’t just be propaganda that it’s guarding against.
Our DAs will also have access to modules that protect their owners in the tangible world.
They’ll always be listening to local social networking traffic, watching cameras made available by any public source or private citizen, and observing behavior in the vicinity, wherever they are.
❝
Few things will sell as fast as a DA module that monitors the safety of all your loved ones and instantly lets you know if they get into trouble.
If they hear something or see something, they’ll immediately display something to their owner, or speak it in their ear.
Hey—sorry to interrupt—there’s a suspected shooter in your area.
Take Aiden and go out the back by the bathrooms. There’s an exit there. Go out that exit and to the left right now.
Kai
But they won’t just monitor them; they’ll monitor everyone they care about. Their dog at home. Their kids if they’re a parent. Their girlfriend sitting in traffic.
Hey—Sarah just had a minor accident on the way to work.
She’s not hurt, but a little shaken up and it looks like her laptop is destroyed. Emergency services are on the way.
Would you like me to video call her for you?
Kai
The peace of mind this will give the owner will be immeasurable, and it hits right at the center of the first Human-Predictable principle—Security.
Here are some other ways our DAs (using third-party modules) will protect us:
Live fact-checking of all claims made in a conversation
Doing live character analysis on whoever is talking in a conversation or meeting, based on what they’re saying, what they’ve said in the past, voice analysis, body language, etc.
Live analysis of a person they’re on a call with
Seeking out haters online who might be trying to undermine our projects or reputation
Looking for evidence of overzealous fans/stalker types who might try to enter our physical space
The thing to realize about all these is that they’ll all be happening 24/7, including while you sleep. While you’re distracted. While you’re vulnerable. Your DA will be continuously looking out for you.
❝
There will be a whole market around who can assemble the best fleet of Continuous Defender Modules.
The other thing to think about is that each of these modules will be highly specialized for their specific task, and they will require special data feeds, specialized UIs, and all sorts of custom functionality.
The creation and sale of these modules will be a massive part of the economy. That’s the next component.
DAs have lots of options when it comes to picking the right module to use to help their owner. Dozens. Hundreds. Thousands. More.
It’ll be a marketplace, and the DA will pick the one with the best features for their particular use case. And the one within their budget.
💡This is where the rich and well-connected will have a massive advantage. Their DAs will have access to modules with extraordinary data feeds not available to most others.
Going back to Component 2, DA modules will basically include every company in existence because every company will effectively be an API.
Why? Because they want their products and services available to everyone on the planet—which means being available to their DAs. And the way to be available to DAs is to be published in the marketplace with standardized inputs and outputs.
So everyone’s DAs will constantly be doing this discovery process where they’re finding new Modules that might be good for their principal, checking their functionality, their ratings, etc., and seeing if they should switch to a new one.
Types of companies/APIs/Modules:
Data feeds for existing modules
UI frameworks for showing other modules’/companies’ data
Logic modules for linking and summarizing multiple modules’ feeds
A world of APIs
Examples of Modules
䷼SermoValidus —The highest-rated propaganda filter
🏷️ Cheaper — A highly-rated sale finder
🗣️Alethia — The best lie detector
🧾CyberLens — The best OSINT gathering and live interface
⚔️RealmVision — A UI skin that makes everything fantasy-themed
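The discovery process described above—a DA scanning the marketplace for the best affordable module—could be sketched like this. “Cheaper” and “Alethia” come from the examples above; “DealHound” and all the prices and ratings are invented.

```python
# Sketch of module discovery: given a needed capability and a budget, a DA
# ranks marketplace modules by rating among those it can afford. All entries,
# prices, and ratings here are illustrative.

MARKETPLACE = [
    {"name": "Cheaper",   "capability": "sale-finder",  "rating": 4.6, "price": 3.99},
    {"name": "DealHound", "capability": "sale-finder",  "rating": 4.8, "price": 9.99},
    {"name": "Alethia",   "capability": "lie-detector", "rating": 4.9, "price": 14.99},
]

def pick_module(capability: str, budget: float):
    """Best-rated module that matches the capability and fits the budget."""
    affordable = [m for m in MARKETPLACE
                  if m["capability"] == capability and m["price"] <= budget]
    return max(affordable, key=lambda m: m["rating"]) if affordable else None
```

This also makes the earlier point about wealth concrete: raise the budget and the DA quietly switches to a better module; lower it and whole capabilities return `None`.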
The way DA Modules display their data to their principals will be a huge part of how popular a module is. This interface issue will also turn out to be Zuckerberg’s salvation because it’ll bring us the first real vision of a Metaverse.
Importantly, DAs won’t just show us things when we ask for them. They’ll be constantly presenting reality as we want to see it using whatever the best AR glasses/lenses are at the time, filtered through all our various enabled modules.
Let’s talk about that interface now.
Your filter for authors in the fantasy genre
Going back to our opening comments about what humans want, people will want to see the world in dramatically different ways because they will value different things.
Some will be focused mostly on safety and security. Others will be all about networking and career progression. Others will be looking for love and companionship. Using the best AR glasses/lenses of the moment, they’ll be able to tune their view of the world for those specific things.
Some examples:
Show me everyone with a criminal record and/or who might be dangerous
Show me famous people with a brighter glow for the most famous
Show me everyone single and looking for a friend
RoseColored Lens:
Reality is depressing; highlight everything good happening around me
Show a giant green arrow pointing to any Thai place with high ratings, but only if I’m hungry and haven’t had Thai in 3 or more days
💰Put a green border around anyone worth more than $1 million
Show me people into role-playing games and furries and also goth stuff, and make them glow red
Make people sparkly if they like a lot of the same books that I do
Light up the street in front of me with the directions to the Burrito place we’re walking to
What will be so cool about these AR modules is that they’ll leverage all the previous components we’ve talked about. Everything will be broadcasting a daemon, and at least some of that data will be readable by your DA. And that data can then be part of your view of the world.
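One way to think about these lenses is as predicates over nearby auras: each enabled lens maps a person’s broadcast daemon data to an optional visual effect. The lens names echo examples in this piece, but the fields and logic are invented for illustration.

```python
# Sketch: AR lenses as predicates over nearby auras. Each lens turns a
# person's broadcast daemon data into an optional visual effect.

LENSES = {
    "RPGlow":    lambda aura: "glow"    if "rpg" in aura.get("interests", []) else None,
    "BookMatch": lambda aura: "sparkle" if aura.get("shared_books", 0) >= 5 else None,
}

def render_annotations(nearby_auras: list, enabled=("RPGlow", "BookMatch")) -> list:
    """Run every enabled lens over every nearby aura; keep only the hits."""
    out = []
    for aura in nearby_auras:
        effects = [fx for name in enabled if (fx := LENSES[name](aura)) is not None]
        if effects:
            out.append({"who": aura["name"], "effects": effects})
    return out

crowd = [
    {"name": "Ravi", "interests": ["rpg", "goth"], "shared_books": 7},
    {"name": "Lena", "interests": ["running"],     "shared_books": 1},
]
annotations = render_annotations(crowd)
```

Swapping lens sets is how two people standing in the same crowd would see completely different worlds.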
More visual examples/ideas for personal Aura display:
RPGlow — Shows anyone into Role-playing Games
CompassionAI — Shows people going through hard times who could use a friend
💡There will even be Suggested AR Modules, where people can tell DAs consuming their daemon to use specific filters to look at them, i.e., to present them to their principal in the way they want to be seen.
ZenWatch — Shows anyone broadcasting that they’re interested in meditation
PIXELTECH — Shows anyone working in tech
CreaView — Shows people being creative across 3 or more fields
So those are some of the ideas for how AR will be used to display Daemon/Aura information around us. They’re focused on people because that’s what I care about most, but there will be tons of filters for viewing cities and other environmental views as well.
Next, let’s look at how our primary DA can be significantly enhanced by supplemental, cooperative, and subordinate Assistant DAs controlled by our primary.
Multiple DAs working together to help Christopher
So far, we’ve been talking about our one DA using various modules to provide functionality.
This is powerful, but I think DAs are more interesting and powerful as true assistants because 1) they will fully understand you, and 2) they will have their own personalities and perspectives.
Not in the sense that they’re aware or conscious (that’s out of scope for this piece, and will likely come much later than the timeframe I’m discussing here), but in the sense that they’ve been given that personality, or they chose it for themselves randomly. Whatever.
The point is that DAs will be full AIs capable of emulating a real person, including having a personality, interests, preferences, etc.
❝
Sometimes distinct perspectives on a problem can help as much as distinct capabilities.
So let’s say you’re a super shy person named Kendrick. You might have picked a DA that isn’t just like you, but that is a great complement to you. So your DA’s name is Tan, and he’s actually outgoing, and funny, and adventurous. And even a little mischievous. He balances you out. And for the killjoys out there—yes—you put in your DA creation questionnaire that you’re looking to come out of your shell, and you need someone to help you do that.
Anyway, regardless of how Tan came about, that’s who he is. He’s always trying to hook you up with girls, get your writing out published in different places, and that kind of thing. You’re shy. He’s outgoing. It’s just the two of you and that’s just fine.
But what if you had other helpers in life?
Your primary DA is already great at programming, but he’s mostly good at individual applications, scripts for doing specific tasks, and other basic and intermediary stuff.
You need something that can build entire applications with lots of moving parts. You’ve heard about this new company CODEX, which makes expert programmer DAs. Here’s what CODEX says their DAs do:
Capabilities:
The deepest developer knowledge available as of April 2026
Principal-level knowledge of development architecture, not just development
Can build entire application platforms and all the individual applications that work seamlessly within that platform
Can also build the required cloud infrastructure for building, testing, and running the full application stack in production
Continuous Evolution: Parses the latest programming knowledge from 7,312 different sources, and upgrades its knowledge nightly.
Won the 2025 World Coding Challenge against 1,000 human and human-AI centaur opponents
Features:
Instantly figures out what algorithm is being used for any real-world problem its perception has access to
Presents optimized algorithms to either the principal’s primary DA or the principal themselves
Instantly generates better written versions of any code it sees, and creates a video tutorial on the fly to teach the principal how and why it made the different choices
Hands-off: Give your CODEX bot access to production with a HITL (human in the loop) workflow that allows you to approve changes, and your CODEX bot will simply do all your work for you.
HumanSpeed: CODEX can even program at normal speed and make mistakes like a human, making it harder to detect at your likely multiple coding jobs
…etc.
So now you have your regular DA, Kai, and you decide to subscribe to this new CODEX DA, which you customize and name Loop.
Let’s say you’re in cybersecurity. You’re a pentester, or you do bug bounties, or something else in offensive security. Or maybe you’re on the defensive side. Maybe you’re worried about your attack surface, and how it appears to attackers.
Kai is already good at doing tons of infosec-related research, and can even hit APIs and use DA modules to do even more. But Kai doesn’t sit around thinking about security the way you do.
Enter GLiTCH, a new Hacker DA by B4stiON. Here’s what it can do for you whether you’re Blue or Red.
Features:
Customizable Personality: Tune your GLiTCH to be the world’s defender or a pure YOLO hack-the-planet type
HateTrack: Finds and collects lists of people talking smack about you online
Can find most hidden or difficult-to-discover relationships between individuals and companies
DirtDigger: Discovers controversial content or positions held by any corporate or individual target
Discovers mergers, acquisitions, shared domain registrations, shared private infrastructure, etc. for any company
StackProf: For any given target, build a full report on the tech they use, the social media handles they operate under, a full archive of their public content, and builds a psychological profile on them that can be used to (defensively, cough) respond during a conflict
Constructs a full TLD and subdomain target list by exploring all related properties (can restrict this based on various scopes as well)
Performs automated target discovery and service enumeration on all discovered targets
B4rrage: Builds a suite of tools for further attacks, pending authorization from the principal. GLiTCH has access to 31 of the best commercial attack and C2 frameworks.
HateState: Provides you perfectly-sized updates on the public online activities of those actively attempting to undermine you and your work. No need to track them. GLiTCH has it covered
GLiTCH DAs can go all the way to launching live exploit code at targets (with permission, of course)
So now Kai has a friend named Chaos, your customized GLiTCH DA. Chaos is very blue-focused, unless you tell him not to be. And if you get too much hate online he starts asking if he can pre-emptively hack back.
So that’s two examples with a bit of color, but there will be thousands of these things.
Like we said above, it’s not just about what they can do, but how they see the world, how they approach problems, the fact that they’re so fiercely advocating for you, and crucially—they have their own personalities.
Some additional supplemental DA ideas:
Executive Assistant: Runs your email, runs your calendar, fiercely defends your time. Knows your life strategy and goals, and learns to decline and accept inbound opportunities based on that. Keeps you on time and on task.
Therapist: Helps you actively unravel trauma according to a long-term, physician-approved plan, integrates with your PDA (Primary DA) to filter for triggering content, helps you eliminate negative self-talk, etc.
🔥 Sage: Think about having a virtual Socrates, or Richard Feynman, or MLK that you can ask any question. The DA creator company has the whole corpus of their work and excels at capturing how they might have responded to new types of challenges. Now imagine having like 10 of those for your favorite thinkers. Think of the deep analysis and introspection you could do by seeing problems from multiple perspectives. Jung, Confucius, Plato, etc.
Researcher: Does deep research on things he notices you’re interested in. Checks sources. Validates facts. Builds comprehensive summaries. Constructs multiple different ways of delivering that content based on how you like to receive information.
Life Coach: Helps you figure out what you want from life, and works with your EA DA to make sure you’re working towards it. Many companies offer DAs that are both in one.
❝
One thing having a DA will do—and especially having multiple—is give the feeling that someone has your back.
Gym Trainer: Keeps you motivated and gets you in shape.
Romantic Coach: Builds your self-confidence, helps you up your hygiene and dressing game, reminds you of your strengths and helps you stop doing cringe stuff.
Tutor: Finds the world’s best ways to teach anyone anything, continuously. Knows the perfect amount of pressure vs. support to apply at any given moment.
Etc.
Again, your primary DA will have access to modules/APIs that will make it good enough to do most of these. But there are three main issues there:
The personality aspect will give each dedicated DA a distinct feel because so much time and effort gets put into how they interact with their owner. So it’ll be more like having a crew/posse of regular friends as opposed to one friend who changes clothes to be different people.
The behavior required for a given role might be weird coming from your primary DA who you already have a relationship with. E.g., you might want to separate out your gaming buddy from your therapist from your flirting pal. But hey—maybe not.
The experience and capabilities of your module-enabled DA are not likely to rival a dedicated DA on domain-specific tasks. Dedicated DAs will be holistic systems that combine the personality with the absolute latest models and data sets for a particular area.
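The division of labor described above—dedicated DAs for their specialties, the primary DA for everything else—amounts to a dispatcher. The names match this piece’s examples (Kai, Loop, Chaos), but the mechanism itself is an invented sketch.

```python
# Sketch: a principal's DA fleet as a simple dispatcher. Each dedicated DA
# advertises the domains it specializes in; anything outside those falls
# back to the primary generalist DA.

FLEET = {
    "Loop":  {"domains": {"coding", "architecture"}},  # CODEX-style dev DA
    "Chaos": {"domains": {"security", "osint"}},       # GLiTCH-style hacker DA
}

def route(task_domain: str, primary: str = "Kai") -> str:
    """Return the name of the DA best suited to handle this kind of task."""
    for name, da in FLEET.items():
        if task_domain in da["domains"]:
            return name
    return primary  # the generalist handles everything else
```

The interesting part isn’t the lookup; it’s that each name in the table comes with its own personality, which is exactly why the fleet feels like a crew rather than one assistant changing hats.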
Ok, this is a lot of ideas, and a bunch of art that’s hopefully fun while helping paint a picture. But I don’t believe for a second that we’re talking about fantasy, sci-fi, or the theoretical here—or that these changes aren’t going to have major impacts on society.
On the security side, I have good news and bad news. Mostly bad news.
If you’re hoping AI will help us secure this whole ecosystem before it gets hacked—because AI is really cool and smart, etc.—that’s not going to happen. I’ve talked about AI’s security challenges at length elsewhere, but the short version is that it’s going to be extremely treacherous.
The good news is that if you’re in cybersecurity, and are capable of doing anything with/around AI Security, the world is going to need your services for a good long while.
The number of things that can—and will—go wrong with this ecosystem I’m describing are legion. Here are the two worst in my opinion:
Whether the hack hits the person themselves, their mobile OS, or the company providing the DA, it doesn’t really matter.
Digital Assistant hacks will be like no other.
When a system holds the type of data we talked about in the first component, and you lose that data, it can be a catastrophic life event. Worse than losing all your money, your job, etc.
First, you might just actually lose that stuff. Like not have it backed up when some sort of error occurs. If you’re close with your DA, and they become the closest thing/person to you over multiple years, and they suddenly show up one day and don’t know your name…well, you’re going to need a Therapist DA.
That’s bad enough, but it’s nothing compared to if that data is stolen/ransomed. Imagine what a modern (DA-powered, by the way) ransomware crew can do if they have not just your financial data, but now they basically have your entire life.
❝
Hacking someone’s Digital Assistant will be like compromising their soul. Not their accounts. Not their tech. Their soul.
All your conversations with everyone
Your journal, if you keep one, which you probably will because it’ll be easier
How you really feel about all the various people in your life
The list of your past traumas, your past breakups, relationships, etc.
All your main stuff like finances and social media and job information
A full capture of you at your weakest and worst
Your likes and dislikes, including private ones
Possibly sensitive work information
This stuff exists today, but so much of it either isn’t online, or it’s in a hundred different tech platforms. With Digital Assistants, people will be persuaded by functionality to unify it into one place—online—all accessible by their Digital Assistant.
The ability to blackmail, extort, and otherwise destroy people’s lives from a personal hack will be infinitely worse than it is today.
Hacking the Digital Assistants will have the most personal impact, but society-wise the biggest risk, in my opinion, has to do with AI Agents like DAs having access to all the APIs we talked about in the second section.
That’s personal Aura/APIs—which is bad for similar reasons as hacking DAs—but it’s also the entire global infrastructure of Corporate and Government APIs.
To me it’s multiplicative: the more capable the AI gets, the worse it gets. And the more of our global infrastructure we turn into APIs, the worse it gets. And both will skyrocket at the same time once this starts hitting.
The first two concerns I have above are technical in nature. In other words, something happened to the system that it wasn’t designed for. A company got hacked. An attacker emulated a real user and got access to their DA without authorization.
But the even scarier one is when that doesn't happen, and things work exactly as they're supposed to.
❝
Digital Assistants and the modules they use will be the single best attack point for influence operations ever created.
Except that the system works exactly as designed, and it's designed specifically to manipulate, bias, and otherwise influence the principal. Examples include making people:
Vote a certain way
Love certain people or brands
Hate a certain group of people
Stop believing something true
Start believing something untrue
We've all seen plenty of examples of mass-influence campaigns, both fictional and real. Advertising is the most obvious one.
But now imagine a world where someone can pay people (who maybe don't have much money because AI took their jobs) to use a particular DA or DA Module, and that DA has the explicit goal of getting its principal to think and/or behave in a particular way.
And because they control all the inputs to the principal’s life, they have every opportunity to do that in a subtle way.
Whether this is a good or a bad thing is a fair question. In my model, however, it doesn't matter. Going back to the core idea of predictability, this is what we humans want.
We want the functionality we'll get from DAs having full access to our lives
We want the functionality we’ll get when those DAs can navigate the entire world using standardized interfaces
We want our human capabilities to Survive, Thrive, and Connect to be enhanced using all these tools
And so it will happen.
❝
What if I told you in the 80’s that in the 2020’s everyone would have their money online even though identities and accounts were hacked on a regular basis?
Risk becomes invisible when the benefits are deemed worth it.
In my opinion, there isn’t anything anyone can do to stop this. There will be hacks that slow things down a bit, and regulation will add some friction—but nothing will stop it. The functionality is just too compelling.
So the best thing to do from a security standpoint—both as industry practitioners and as consumers—is to understand what’s coming and get ready.
Here are some random and illustrative use cases that cross all 7 components.
You’re waiting in line at Starbucks, and Kai (your DA) is continuously reading all the public Daemons (things) and Auras (people) around you. Kai lights up a girl in front of you because she matches on so many things.
7/9 favorite books match
Shy but loving in a relationship
Dogs > Cats
😍 She believes it should be legal to kill people who chew loudly
So Kai starts talking to her DA, Tara, and now he and Tara are about to tell you two where to look so you see each other from across the room.
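A hedged sketch of how Kai's matching step might work under the hood: the DA compares the public Aura attributes it can read against its principal's preferences and produces a compatibility score. Every field name and weight here is invented for illustration; nothing about real Aura schemas is implied.

```python
# Hypothetical sketch: a DA scoring a nearby public Aura against
# its principal's preferences. All field names and weights are invented.

def compatibility(principal: dict, aura: dict) -> float:
    """Return a 0..1 score built from overlapping public attributes."""
    score = 0.0
    # Shared favorite books, weighted by the overlap fraction
    books = set(principal["favorite_books"]) & set(aura["favorite_books"])
    score += 0.5 * len(books) / max(len(principal["favorite_books"]), 1)
    # Simple equality matches on declared preferences
    for key in ("dogs_over_cats", "relationship_style"):
        if principal.get(key) == aura.get(key):
            score += 0.25
    return min(score, 1.0)

me = {"favorite_books": ["Dune", "Snow Crash", "1984"],
      "dogs_over_cats": True, "relationship_style": "shy-but-loving"}
her = {"favorite_books": ["Dune", "1984", "Hyperion"],
       "dogs_over_cats": True, "relationship_style": "shy-but-loving"}

print(round(compatibility(me, her), 2))  # 0.83
```

In practice a real DA would weigh hundreds of signals and negotiate what to reveal with the other DA, but the core operation is this kind of preference intersection.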
Security is the first layer of our human predictability model for good reason. Basically, if you don’t have your safety in order, it’s hard to think about climbing a ladder or finding a partner.
And because it’s such a deep priority for us, I think it’ll be one of the first use cases to combine API-ification, DA mediation, and AR interfaces.
ROKAN: Hey Sarah, I’m not liking how this market looks. There have been some incidents in the past here, and I’ve seen some shady stuff in the last few minutes.
(shows her the AR view)
Here’s what I’m seeing, and I’m going to guide you to a safer part of the market.
Take the next right.
Think about the data feeds that will enable this. People’s personal cameras that they’re offering to the public. Public cameras. Private security cameras that your DA can get a subscription to. Or a dedicated Security DA that already has tons of that access.
This OSINT/Security data and AR visualization space is going to be vibrant.
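One way to imagine the plumbing behind ROKAN's guidance: a Security DA aggregates alert counts from the feeds it subscribes to, scores each nearby zone, and routes its principal toward the safest one. The feed types, weights, and zone names below are all my own invention, not a real API.

```python
# Hypothetical sketch: a Security DA scoring market zones from
# subscribed data feeds. Feed names and weights are invented.

FEED_WEIGHTS = {"public_cameras": 1.0, "private_cameras": 1.5,
                "incident_reports": 3.0}

def zone_risk(signals: dict) -> float:
    """Weighted sum of the alert counts reported by each feed type."""
    return sum(FEED_WEIGHTS[feed] * count for feed, count in signals.items())

def safest(zones: dict) -> str:
    """Return the name of the zone with the lowest aggregate risk."""
    return min(zones, key=lambda name: zone_risk(zones[name]))

market = {
    "north_entrance": {"public_cameras": 2, "incident_reports": 1},
    "east_alley":     {"private_cameras": 4, "incident_reports": 2},
    "south_plaza":    {"public_cameras": 1},
}
print(safest(market))  # south_plaza has the lowest risk score
```

The interesting design question is who curates the weights: the principal, the Security DA vendor, or a marketplace of tuned risk models.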
YOU: Kai, I’m hungry. Maybe Thai. Anything good around here?
Your DA hears that, and starts firing off API requests.
Use our favorite restaurant discovery API
What Thai restaurants are close
What’s highly rated
He doesn’t like that place on 4th and Mason
Papaya Thai on Mowry looks good
Checks /staff on Papaya's daemon/API and sees that the owner is there
Checks /media and sees that he can change two of the screens in the restaurant to Table Tennis, his owner's favorite sport
Hits /menu and orders Panang Curry with Chicken, Spicy, and a Diet Coke
Pings the owner’s DA, Kim, and lets him know his principal is on the way
KAI: Hey, I got it sorted. We’re going to Papaya Thai. I told the owner you’re coming and I’ve got your favorite spot and put table tennis on for you. Panang curry, spicy, and a diet coke like usual.
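The sequence Kai runs could look something like this. The endpoint paths (/staff, /media, /menu) come from the vignette above; the client class, payloads, and response shapes are assumptions made purely for illustration.

```python
# Hypothetical sketch of Kai's workflow against a restaurant daemon.
# Endpoint paths mirror the vignette; everything else is invented.

class DaemonClient:
    """Stand-in for an HTTP client talking to a venue's daemon API."""
    def __init__(self, daemon: dict):
        self.daemon = daemon

    def get(self, path: str):
        return self.daemon[path]

    def post(self, path: str, body: dict):
        # Record the request so the venue's systems can act on it
        self.daemon.setdefault(path + "/requests", []).append(body)
        return {"status": "accepted"}

papaya = DaemonClient({
    "/staff": {"owner_present": True},
    "/media": {"screens": ["news", "soccer", "soccer"]},
    "/menu": {"Panang Curry": 14.50, "Diet Coke": 2.50},
})

# 1. Check whether the owner is in
assert papaya.get("/staff")["owner_present"]
# 2. Ask for two screens to be switched to table tennis
papaya.post("/media", {"screens": 2, "content": "table tennis"})
# 3. Place the order
order = papaya.post("/menu", {"item": "Panang Curry", "spice": "spicy",
                              "drink": "Diet Coke"})
print(order["status"])  # accepted
```

The point of the sketch is the shape of the interaction: a handful of standardized daemon endpoints, queried and acted on by the DA in seconds, with the principal only hearing the summary.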
Here are some random additional thoughts these ideas raise for me.
The consequences of DA Mediation are massive, but they will have an especially destructive effect on any tech interface that’s currently designed to be used mostly by humans.
Search engines are the big one, but most UIs/UXs are designed to be seen and used by humans. What DA Mediation seems to do is break the whole thing into two pieces:
The functionality itself, which goes in the API
Separate UI/UXs that become DA modules
❝
What products are themselves based on human interaction?
So your DA basically has its favorite UI/UX for different things, like product catalogs, and when you want to browse one, it uses that interface to show you. Plus the content provider can recommend a specific UI/UX module, or recommend that the DA use the native one it built for that purpose.
But it’ll be really interesting to have functionality separated from UI/UX that way due to DA Mediation.
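One way to picture that split: the provider ships pure functionality behind an API, and presentation becomes a swappable module the DA chooses. The interfaces below are hypothetical, a minimal sketch of the separation rather than any real product's design.

```python
# Hypothetical sketch of functionality separated from presentation.
# The provider exposes raw data; the DA picks a UI module to render it.

from typing import Protocol

class CatalogAPI:
    """Provider side: pure functionality, no presentation at all."""
    def products(self) -> list[dict]:
        return [{"name": "Desk Lamp", "price": 39.0},
                {"name": "Stand", "price": 24.0}]

class UIModule(Protocol):
    """Anything that can turn product data into something viewable."""
    def render(self, products: list[dict]) -> str: ...

class CompactList:
    """One third-party UI module among many the DA could choose."""
    def render(self, products: list[dict]) -> str:
        return "\n".join(f"{p['name']}: ${p['price']:.2f}" for p in products)

# The DA mediates: fetch from the API, render with its preferred module
da_view = CompactList().render(CatalogAPI().products())
print(da_view)
```

Swapping CompactList for a richer AR module changes nothing on the provider side, which is exactly the decoupling DA Mediation seems to force.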
So let's say most people have a DA, or a set of them, like a personal group of best friends. They'll be good, and they'll keep getting better as the AI advances.
Cool, but what about human friends? What about human connection? Isn’t the point of tech to enhance humanity? Or shouldn’t it be? What if this all gets so good that people start thinking it means it can replace humanity?
Add to that the fact that public behavior and conversations, or even those in private, are likely to have so many DAs listening and parsing that it'll be hard to feel relaxed. Everyone will know that anything they say could be cut into a clip and sent to their work, their enemies, or whoever in a matter of seconds.
❝
What do you share with people if you know 139 DAs are listening and are ready to use anything juicy against you?
I think what it might do—hard to say really—is create two polarized approaches to this.
People Stop Having and Sharing Real Thoughts. Because being themselves and being real could be dangerous to them, they only present safe things to the world. Which would be the worst thing ever.
People Move to Radical Honesty and Expression. Or maybe it goes the other way. Everything controversial has already been said, and everyone’s already been caught saying the worst things. So it no longer affects people’s reputations and careers anymore. So now people are free to be their true selves—whatever that is.
I see those as the edges of the spectrum, but I imagine there will be people spread throughout the middle as well. And both extremes have advantages and downsides.
Well, that was a lot. This turned out to be my deepest piece of content in over 25 years of writing, and it’s actually longer than the book I wrote. Lots more to add, but we’ll have to leave that for additional essays.
Here is a crisp capture of the major claims and points.
Tech futurists are so often wrong because they try to follow the technology, which isn’t really predictable
The best way to predict tech is to study what humans want from it
What humans broadly want is to be safe, to thrive, and to be valued/desired/connected
I describe a path to AI giving us these capabilities, which I put into the following 7 components:
Digital Assistants that know everything about us
The world being API-ified
Our DAs mediate between us and the API-ified world
Our DAs continuously and actively advocate for us
Businesses become an API marketplace
Our DAs will present us the world through AR interfaces
Supplemental DAs will assist our Primary DA
The security and privacy implications of this will be extraordinary because DA and API substrate compromises will touch the deepest parts of people’s lives
Despite the security issues, we’ll still build and use this ecosystem because of what it will grant us in the currency of safety, thriving, and connection
In a sentence, we’re about to have a new AI-powered tech ecosystem where everything has an API, and our Digital Assistants know everything about us and use their access to those APIs to continuously advocate to make us more powerful and successful in the world.
My hope is that I’ve fulfilled my opening promise of convincing you that this direction is inevitable.
Don’t get distracted by the sci-fi/fantasy-oriented aesthetics of the art and vignettes that I used to paint the picture. That was just for fun as I presented the ideas. Focus on what’s beneath that, at the human layer of making people feel more safe, successful, and desired.
The specific AI tech implementations can go a thousand different (and unpredictable) ways. I can’t predict those, and neither can anyone else. It’s the human aspect that makes this type of ecosystem inevitable.
Interestingly, a similar tech ecosystem would happen whether we had AI or not. AI just accelerates how quickly it’ll happen.
Starting now.
As a business, think about how your services will look to a Digital Assistant, and how it’ll compete against a thousand other competitors doing similar things.
As a product creator, start thinking about the world in which humans aren’t manually interacting with your interface, but rather are having it mediated through a DA and/or a third-party UI/UX.
As an individual, start thinking about your risk-to-benefit tradeoff: what you'll share with AI to get the benefits, and what you won't. Think deeply about how much you value different types of growth, success, and outcomes, and how those compare to what you'll give up to achieve them.
Thank you for reading.
I’ve been thinking deeply about this stuff for over 15 years now, and I currently work full-time in this space, building products and producing content around similar ideas.
If you want more and expanded content on these topics, you should:
Track me at @danielmiessler
Get my weekly set of ideas at Unsupervised Learning
Join Unsupervised Learning’s paid community where we not only share a lot more of these ideas, but also focus on how to thrive in this world that’s coming, and work to help lift each other up to be successful in that world.
Reach out to me if you have ideas to share on any related topic
I'm also running a live AI class called AUGMENTED about how I integrate AI into my work and life on January 13th at 12PM. You can sign up below.
Thank you so much to Jason Haddix, Joseph Thacker, and Saša Zdjelar for reading early drafts of this and providing their comments and inputs, and for being perpetual idea partners whenever I text or call with something new.
My apologies to the Midjourney team for what I did to your servers during the creation of this post. And congrats on v6. I used it exclusively for this piece, and it’s spectacular.