Agentic AI
Understanding AI — Part III: Agents
50 First Dates
In the 2004 romantic comedy 50 First Dates, Henry Roth (played by Adam Sandler) is a playboy who falls in love with Drew Barrymore’s character, Lucy. His biggest hurdle to winning her affection is that she suffers from short-term memory loss.
Each day she wakes up with no memory of what occurred the day before. Henry must start from scratch, re-introducing himself and wooing her all over again.
The LLMs (Large Language Models) we discussed last week suffer from the same condition. They’re prediction machines that take a prompt and produce the most likely response, based on how their underlying transformer model was trained.
Despite their computational power, they have no innate memory.
Solving the memory problem was the first step in turning a chatbot that could only perform simple prompt → response loops into a system that can operate in the real world.
Lack of memory wasn’t the only limitation.
Engineers wanted to give them the ability to perform multistep tasks beyond a simple prompt and response.
Could we interact with them beyond their native chat applications?
Providing memory, skills, tools, and the ability to operate in different environments is what transforms a simple chatbot into a functioning AI agent.
Memory
In 50 First Dates, Henry Roth solves Lucy’s memory problem by recording a video every day.
Each morning Lucy watches the tape to understand:
who she is
what happened the day before
who Henry is
why they’re together
The tape reconstructs yesterday’s context.
Adding memory to an LLM achieves the same goal.
Instead of VHS tapes, agents store information in various memory systems.
These can include:
Short-term memory
Recent conversation or task context that gets passed back into the model during the same session.
Example:
“Here’s what the user said earlier…”
Long-term memory
Persistent information such as user preferences that are stored in a text file or database.
Example:
“The user prefers concise explanations.”
“The user is a Fractional CFO.”
Working memory
Temporary notes the agent writes to itself while solving a problem.
Example:
“Step 1: Find the latest revenue numbers.”
Every time the agent runs, it retrieves relevant memories and feeds them back into the LLM.
Just like Lucy’s morning tape.
Without this step, each time we engage with the model it would have no idea where to begin.
It would treat the latest interaction as the first, just like Lucy.
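The retrieval step above can be sketched in a few lines of Python. This is a minimal, illustrative example, not a real framework; every name here (the memory stores, build_prompt) is hypothetical.

```python
# Hypothetical sketch: an agent assembles context from three memory
# tiers before each model call, mirroring Lucy's morning tape.

long_term_memory = {
    "preferences": "The user prefers concise explanations.",
    "profile": "The user is a Fractional CFO.",
}

short_term_memory = []   # recent conversation from the current session
working_memory = []      # notes the agent writes to itself

def build_prompt(user_message: str) -> str:
    """Combine all memory tiers into a single prompt for the LLM."""
    sections = [
        "Long-term memory:\n" + "\n".join(long_term_memory.values()),
        "Conversation so far:\n" + "\n".join(short_term_memory),
        "Working notes:\n" + "\n".join(working_memory),
        "User: " + user_message,
    ]
    return "\n\n".join(sections)

short_term_memory.append("User: Summarize last quarter's revenue.")
working_memory.append("Step 1: Find the latest revenue numbers.")
prompt = build_prompt("What changed versus the prior quarter?")
```

Everything the model "remembers" is simply text stitched back into the next prompt; without that stitching, the model starts from zero every time.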
Tools
Without a VCR, Lucy wouldn’t have been able to view her daily video and her world would have remained small.
An LLM alone can only generate text, images or whatever output it is designed to produce.
Like Lucy’s VCR, tools give agents the ability to take actions beyond content generation.
Examples might include:
Searching the internet
Executing code
Accessing databases
Managing email
Talking to other applications via APIs (Application Programming Interfaces)
Accessing file systems
Now the brain can do more than provide responses.
It can act.
Example:
If you prompt it with “What is the weather today?”, the agent can:
Call a weather API
Retrieve the forecast
Summarize it
Provide recommendations on how to dress
By providing access to tools, an agent’s capabilities extend far beyond predictive responses.
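The weather example can be sketched as code. The tool here is a stubbed function standing in for a real weather API; in a real agent, the LLM would choose the tool and its arguments itself. All names are illustrative.

```python
# Hypothetical sketch: a "tool" is just a function the agent can call
# to act on the world, instead of only generating text.

def get_forecast(city: str) -> dict:
    """Tool: stand-in for a real weather API call."""
    return {"city": city, "temp_f": 48, "conditions": "rain"}

def dressing_advice(forecast: dict) -> str:
    """Summarize the forecast and recommend how to dress."""
    advice = ("Bring an umbrella."
              if forecast["conditions"] == "rain"
              else "No umbrella needed.")
    return f"{forecast['city']}: {forecast['temp_f']}F, {forecast['conditions']}. {advice}"

# The agent's flow: call the tool, then summarize the result.
forecast = get_forecast("Seattle")
summary = dressing_advice(forecast)
```

The key shift is that the model's output triggers a function call, and the function's result flows back into the model's next response.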
Armed with these tools, how does an agent know what to do with them?
Skills
Each morning Lucy would find a VHS tape labeled “Good Morning, Lucy.”
The tape didn’t just remind her who she was.
It also explained what to do next.
Think of that set of instructions as a skill.
Skills are predefined capabilities an agent can use to complete specific tasks.
Tools provide access to systems, while skills provide the instructions for how to use them.
Examples of skills include:
how to write a daily briefing and what information it should include
customer service guidelines for responding to support tickets
the steps required to update a database
how to log into an account and perform a specific activity
Skills answer a simple question:
“How do I perform this specific action?”
For example, if the agent’s planning step determines it needs to research competitors, it might decide:
“Search for competitors.”
The agent then executes the Search skill, which contains the instructions for how to perform that task using the available tools.
In other words:
Tools provide access.
Skills provide the instructions.
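The tools-versus-skills split can be made concrete with a short sketch. This is an assumed structure for illustration, not a specific framework's API: a skill pairs written instructions with the tools needed to follow them.

```python
# Hypothetical sketch: a skill = instructions + the tools it uses.

def web_search(query: str) -> list[str]:
    """Tool: stand-in for a real search API."""
    return [f"result for: {query}"]

SKILLS = {
    "search_competitors": {
        "instructions": (
            "1. Search the web for competitors in the user's market.\n"
            "2. Keep only the top three results.\n"
            "3. Return them as a list."
        ),
        "tools": [web_search],
    },
}

def execute_skill(name: str, query: str) -> list[str]:
    """Run a skill: follow its instructions using its tools."""
    skill = SKILLS[name]
    search = skill["tools"][0]
    return search(query)[:3]  # steps 1-3 of the instructions

results = execute_skill("search_competitors", "CFO software vendors")
```

The tool (web_search) grants access; the skill's instructions say when and how to use it for this particular task.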
So how do the agents decide which actions to take in the first place?
Planning
In 50 First Dates, Henry doesn’t just hand Lucy a video and hope for the best.
The tape walks her through what’s happening in her life and what to do next.
It doesn’t just provide context; it works as a guide for her day.
In many ways, it acts like a plan.
Early versions of LLMs didn’t have this ability. You would ask a question and the model would produce a single response.
But real-world problems rarely work that way.
They require multiple steps.
So one of the first improvements developers introduced was a planning loop.
Instead of answering once, the system repeatedly works through a cycle:
Think about the task
Break it into steps
Execute a step
Evaluate the result
Continue until the task is complete
This pattern is often called the Reason → Act → Observe loop.
It allows agents to tackle problems that require multiple moves rather than a single answer.
For example, imagine giving an agent the task:
“Research three competitors and summarize their pricing.”
The agent might work through a sequence like this:
search the web for competitors
open their pricing pages
extract relevant information
compare the pricing structures
write a summary report
Each step informs the next.
Just like Lucy using the video to orient herself each morning, the agent constantly checks its progress and decides what to do next.
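The competitor-research sequence above can be written as a bare-bones Reason → Act → Observe loop. In this sketch the model's "reasoning" is stubbed out as a fixed plan; a real agent would ask the LLM to pick each next step based on the observations so far. All names are illustrative.

```python
# Hypothetical sketch of the Reason -> Act -> Observe loop for the
# competitor-pricing task.

plan = [
    "search the web for competitors",
    "open their pricing pages",
    "extract relevant information",
    "compare the pricing structures",
    "write a summary report",
]

observations = []

def act(step: str) -> str:
    """Stand-in for executing a step with the agent's tools."""
    return f"done: {step}"

for step in plan:                 # Reason: pick the next step
    result = act(step)            # Act: execute it with a tool
    observations.append(result)   # Observe: record the outcome
```

Each observation becomes input to the next reasoning pass, which is how the agent checks its progress rather than answering once and stopping.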
All of this activity is orchestrated around the LLM, the brain that reasons through each step of the plan.
Environment
Despite her disability, Lucy doesn’t exist in isolation.
She lives in a world filled with people, places, and things she interacts with every day.
Everywhere from the café where she eats breakfast to the art studio where she paints to the boat where Henry works.
Those surroundings shape what she can do.
AI agents work the same way.
They operate inside digital environments where they can take actions and interact with systems.
Environments might include:
your computer
a Slack workspace
a customer support inbox
a company database or data warehouse
The environment provides the surface area for action.
It’s where the agent can read information, update records, send messages, or trigger workflows.
The environment is what allows an LLM to operate as more than a simple chatbot.
The Next Step
If LLMs are the brain, agents are everything built around it that makes that brain useful in the real world.
Memory gives agents context.
Tools give them the ability to take action.
Skills give them instructions for how to perform specific tasks.
Planning helps them work through multistep problems.
Environments give them a place to operate.
That combination is what transforms a chatbot into an agent.
The practical question for business leaders is not whether agents are coming.
It’s where they could be useful in your business.
So as you think about your own business this week, consider this:
What tools, skills, and processes could an agent take over, assist with, or accelerate?
The answers may reveal your first real use case for AI.
And before you hand over too much responsibility, there’s an equally important side of this conversation.
Next week, we’ll explore the risks of AI.
We’ll discuss where these systems can go wrong and how to think about adoption with both optimism and caution.
My goal with The Leap is to provide you each Saturday with the knowledge, tools and lessons learned to help you get started and keep going toward building your future.
Whether you are making the leap to startups, solo-entrepreneurship, freelancing, side hustles or other creative ventures, the tools and strategies to succeed in each are similar.