Risky Business
Understanding AI — Part IV: AI Risks
Understanding Risk
For all the excitement around AI that I see in my online circles of builders and tinkerers, most of my day-to-day conversations with people in the real world revolve around uncertainty and risk.
Will AI take my job?
Will humans lose their sense of purpose?
Will we be able to tell what’s real from what’s AI-generated?
Will data centers consume enormous amounts of energy and water?
These are all valid questions. But before diving into them, it helps to first establish a simple framework for thinking about risk.
I spent nearly two decades as an institutional money manager and market strategist. In that field, you’re essentially a professional risk taker. Understanding risk and how to evaluate it was part of the job.
Here are two principles that can help frame the risks around AI.
Rule #1: Change is Not Inherently Risky
The investor Howard Marks put it well:
“Risk is NOT volatility. Risk is the probability of loss.”
Just because something is changing quickly doesn’t mean it’s dangerous.
Take riding a roller coaster. It’s a volatile experience with fast drops, sharp turns, and loops. Yet the risk is extremely low. The odds of dying on a roller coaster are roughly 1 in 750 million.
Now compare that with driving a car. Your lifetime odds of dying in a car crash are about 1 in 107.
In other words, the drive to the amusement park is far more dangerous than the ride itself.
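To put those two odds side by side, here is a quick back-of-the-envelope calculation using the figures cited above (illustrative arithmetic only):

```python
# Lifetime odds cited above (approximate figures, for illustration).
P_COASTER = 1 / 750_000_000  # odds of dying on a roller coaster
P_DRIVING = 1 / 107          # lifetime odds of dying in a car crash

# How many times riskier is the drive than the ride?
ratio = P_DRIVING / P_COASTER
print(f"Driving is roughly {ratio:,.0f}x riskier than the coaster")
```

The point isn't the exact multiplier; it's that the volatile-feeling experience is millions of times safer than the mundane one.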
So why do more people fear roller coasters than driving?
Partly because we’re in control of the car but not the coaster. And partly because driving has clear practical benefits: commuting, errands, and the demands of daily life.
The roller coaster is just entertainment.
The perceived value of driving outweighs the risk.
Keep the ideas of control and risk-reward in mind as we discuss AI.
Rule #2: Mind the Second Order (And Beyond) Effects
In the 1990s, wolves were reintroduced into Yellowstone National Park after being absent for decades.
The obvious expectation was simple: wolves would hunt elk, reducing the elk population. That’s a first-order effect, the direct result of the wolves’ reintroduction.
But the more interesting effects came later. The wolves changed the rivers.
Because wolves hunted in open valleys, elk began avoiding those areas. With less grazing, vegetation flourished. This not only stabilized the river banks but allowed beavers to return and build dams, which created wetlands.
A single change cascaded through the ecosystem, ultimately changing the path of rivers.
These are second and third-order effects, the ripple effects that follow the initial change.
AI, like nature, exists within complex systems. The most important consequences may not be the immediate ones we expect, but the ripple effects that appear later as the technology interacts with society, the economy, and human behavior.
Even if we can’t predict them perfectly, we should still consider them, as they can often have an impact orders of magnitude greater than the initial change.
With our risk framework in mind, let’s look at some of the most common risks associated with AI.
Technical
If you’ve followed along in this series, you understand that AI isn’t intelligence per se but a prediction machine whose predictions can be VERY wrong. What are the most common sources of technical risk?
Hallucinations - AI models are trained to confidently give predictions, not verify facts. If they are missing data or the prompt is ambiguous, they often won’t seek more information or clarification but will provide an answer instead of simply saying, “I don’t know.”
Training Data - The predictions AIs generate are based on patterns recognized during training. Users of AI models don’t know the sources, or whether those sources contain biases or malicious data, or are simply outdated or incomplete, causing the model’s accuracy to drift over time.
Data Privacy and Security - To improve the utility of AI agents, they will need access to increasingly sensitive and private information. Beyond your private information ending up on a server somewhere, AIs can easily be fooled into divulging such information.
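The hallucination failure mode above can be illustrated with a toy sketch. This is not how a real model works internally; it's a hypothetical stand-in showing the difference between a system that always answers and one that admits uncertainty:

```python
# Toy illustration of the hallucination failure mode (hypothetical data).
KNOWN_FACTS = {"capital of France": "Paris"}

def confident_model(question: str) -> str:
    # Mimics a model trained to always produce an answer: when the data
    # is missing, it falls back to a plausible-sounding but wrong guess.
    return KNOWN_FACTS.get(question, "London")

def cautious_model(question: str) -> str:
    # The behavior we'd prefer: admit uncertainty when data is missing.
    return KNOWN_FACTS.get(question, "I don't know")

print(confident_model("capital of Atlantis"))  # a confident fabrication
print(cautious_model("capital of Atlantis"))   # admits it doesn't know
```

Real models fail this way for subtler statistical reasons, but the user-facing symptom is the same: a fluent, confident answer with nothing behind it.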
To mitigate these risks, we all need to maintain a basic understanding of how models work and how they’re trained. Companies need to be transparent about the data sources and biases within their models. Until proven otherwise, tread with caution regarding how much personal information you divulge to AI models and agents.
Human Interactions
Not all AI risk comes from the technology itself. Some of the most important risks emerge from how we humans interact with these systems. When tools become powerful, convenient, and widely used, our behavior adapts to them in ways we don’t fully anticipate.
Sycophancy – AI models are designed to be helpful and agreeable. In many cases, that means they will reinforce a user’s assumptions rather than challenge them. If a prompt contains a flawed premise, the model may build on that premise instead of correcting it, giving the user a confident but misleading response. There have even been cases where AI bots have reinforced suicidal ideation.
Dependency – As AI tools become more capable, it’s natural to rely on them more frequently. Over time, users may begin outsourcing tasks that previously required their own judgment, analysis, or creativity. The more convenient the tool becomes, the greater the temptation to use it while our own skills atrophy.
Loss of Agency – As AI systems move from answering questions to making recommendations and decisions, humans may gradually surrender more control. Hiring decisions, financial choices, medical guidance, and other important judgments could increasingly be influenced or even made by AI systems. When that happens, responsibility becomes less clear and human oversight can erode.
To mitigate these risks, we should perceive AI as an assistant, not an authority. Users should maintain skepticism, verify important outputs, and remain actively involved in decisions rather than delegating them entirely to machines.
Systemic
Some risks from AI don’t come from individual models or how people use them, but from how the technology reshapes entire systems including economies, institutions, and infrastructure. When a technology becomes powerful and widespread, its effects ripple through society in ways that are difficult to fully anticipate.
Societal Disruption – AI has the potential to automate not just routine labor but many forms of knowledge work. While new technologies historically create new jobs, the transition can be painful and uneven. Entire professions may shrink or change rapidly, leaving workers needing to retrain or adapt faster than previous technological shifts have required.
Malice at Scale – AI dramatically lowers the cost of creating sophisticated tools for surveillance, cyberattacks, propaganda, and warfare. Governments and malicious actors can use AI to analyze massive datasets, generate persuasive misinformation, or automate attacks. What once required large teams and resources can increasingly be done by smaller groups with powerful software.
Environmental Impact – Training and operating advanced AI models requires enormous computing power. As demand for AI grows, so does the need for data centers, electricity, and cooling infrastructure. This raises concerns about energy consumption, carbon emissions, and water usage, particularly as companies race to build larger and more capable systems.
Managing these systemic risks will require individuals, governments, companies, and institutions to consider how AI is deployed at scale and balance innovation with responsible oversight.
The Next Step
The risks discussed above are not exhaustive. I also intentionally left out perhaps the most uncertain and potentially most consequential risk: emergent behavior.
Whether AI ever becomes sentient is impossible to predict. What we do know is that AI is a complex system, made up of trillions of connected parameters.
Our own brains are also complex systems. Billions of neurons interact through trillions of connections in ways that somehow produce consciousness, something science still struggles to fully explain.
What ultimately emerges from these digital brains remains to be seen.
As you think about how AI will shape the coming years, it helps to apply the risk framework outlined above.
Will data and privacy concerns limit AI or turn it into identity theft at scale? Will new technologies like blockchains help secure our digital identities?
Will AI concentrate power and surveillance or continue the long trend of technologies that expanded prosperity and reduced human toil?
AI data centers may consume enormous amounts of energy. At the same time, they could accelerate the development of new energy sources and efficiencies that were previously uneconomical.
The truth is that none of us know exactly how this will unfold.
But if you are not informed and engaged in the conversation, you will not be part of shaping the outcome.
Next week, I will conclude the AI series by discussing how to leverage AI in your business.
My goal with The Leap is to provide you each Saturday with the knowledge, tools and lessons learned to help you get started and keep going toward building your future.
Whether you are making the leap to startups, solo-entrepreneurship, freelancing, side hustles or other creative ventures, the tools and strategies to succeed in each are similar.