
As Margot Robbie in a bubble bath taught us, the finance industry has a trick it loves to use. Take a simple concept, wrap it in complex language, and suddenly you need a specialist to explain what would otherwise be obvious. A bet against a loan becomes a "credit default swap." A bundle of mortgages becomes a "collateralized debt obligation." The problem is that the people selling this stuff have every reason to keep the terms confusing.
The AI industry is playing the same game.
"Agents." "Inference." "Agentic workflows." "Retrieval-augmented generation." "Multi-agent orchestration." All of these terms have simple definitions.
Every few months, a new term shows up that sounds like it requires a computer science degree to understand, and that's kind of the point. They want to sound smart. Unfortunately, complexity can be a moat. If the language feels intimidating, you're more likely to buy someone else's solution instead of building your own.
So, I'm going to lose some clients here by making this as easy to understand as it should already be.
I've spent over a year building a system that runs a significant portion of my day — you can read about it here. It handles my task management, full meeting workflows, creates marketing content, writes proposals and scopes of work, acts as our CFO and Director of Operations, and is continually adding to its capabilities.
And this magical creation is actually just a folder on my computer.
How I got here
I started this as an experiment to see if a ChatGPT project, set up as a daily assistant, could meaningfully improve my days. And it did. I'd start it in the morning, and it would walk me through my day. It would capture and track tasks, priorities, meetings, reminders, etc. I would regularly use this project as an example of how powerful ChatGPT projects could be while speaking at conferences or leading our training sessions.
It was genuinely useful, but it also had a pretty low ceiling. While ChatGPT has memory across project chats, that memory can be inconsistent. Keeping instructions and documentation up to date was a painfully slow manual process, so the context the AI was using was often outdated or inaccessible. I was re-explaining a lot, and it wasn't all that useful for handing off real work. It was more of an organization and coaching tool.
When I started using Claude Code (a tool that lets an AI work directly with files on your computer), I tried something different. I gave it access to years of my personal journals. The results were pretty great. It started connecting dots I didn't even remember existed. Patterns in how I work, decisions I'd made months or years ago, things I'd written about my goals and priorities that I'd forgotten but the AI could now reference and use.
From there, I started giving it more. First, company information that would allow it to have the same type of context it got from my journals but for the organization. Then rules about how we operate and templates we use for different types of work. Each layer added made the tool more capable, and eventually it could finally do the actual work, not just organization, the way it needed to be done.
One of the things we attempt to do when helping our clients with AI adoption is to surface practical use cases for the tools. We want to make these things real for businesses. One of the things I learned through this experimentation is that it's all much simpler than it seems. Building the system out simply meant telling it what it needed to do better. That's it. Over and over, using the system to improve itself.
When a task didn't go well, I'd write down what went wrong and what it should do differently next time. When I needed it to handle a new type of work, I'd write the instructions to give it access to the right information. Now I have it keep track of its own mistakes and desires. And we use that every week to make improvements to the system.
Over time, that turned into what the AI industry calls "agents." Or what Margot Robbie in a bubble bath would call folders on your computer.
Agents are just folders
An agent is a folder with a few things inside it: written direction for the AI, access to the tools it needs, connections to other software (the industry calls these "MCPs," which stands for Model Context Protocol, but you can think of them as bridges between the AI and your other apps), and knowledge it needs to have better context.
That's the whole concept. A folder with instructions, tools, connections, and knowledge.
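To make that concrete, here's a hypothetical sketch of what one of those folders could look like on disk. Every name and file here is illustrative, not a prescription; the point is only that "an agent" is nothing more exotic than this:

```shell
# A hypothetical "agent" folder -- all names and contents are illustrative.
mkdir -p marketing-agent/knowledge marketing-agent/templates

# The written direction: base instructions the AI reads first.
cat > marketing-agent/CLAUDE.md <<'EOF'
# Marketing Agent
You draft social posts and newsletter content for our company.
- Read knowledge/brand-voice.md before drafting anything.
- Use the formats in templates/ for each content type.
- Always stop at a draft for human review; never publish.
EOF

# The knowledge: supporting context the agent can reference.
echo "Voice: plain language, no jargon, no hype." \
  > marketing-agent/knowledge/brand-voice.md

# It's just files, so any AI tool that can read folders can use it.
ls -R marketing-agent
```

Swap in whatever tool connections you actually use; the folder itself stays the same either way.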
Kieran Klaassen, the sole engineer behind an AI email product called Cora, wrote about this recently. He spent three months trying to build complex "agent swarms" (multiple AI agents coordinating with each other autonomously). He says none of it worked. What actually worked was pointing the AI at a well-organized folder. He runs 44 agents now, and each one is a folder with accumulated context built through months of real work.
Andrej Karpathy, former head of AI at Tesla and early OpenAI researcher who is one of my favorite follows on practical use cases of AI, calls the underlying idea "context engineering" (giving the AI the right information at the right time). Anthropic, the company behind Claude, recommends this exact, simple approach. Organized folders with instruction files, supporting knowledge, and clear structure.
So this isn't anything new. I'm just following the instructions they've told us would work. The difference is I'm applying it to actual work within our business, not as an engineer, but as somebody who wants these tools to be able to do better work for us. And this approach is accessible to anyone with curiosity and a commitment to invest time now to save time later.
What this looks like in practice
Every week, our system scans specific Slack channels, our blog, Asana, and Google Drive for signals that would make good marketing content. It surfaces those things and leads people through a series of questions to turn the findings into social posts, website articles, and newsletter content.
This approach puts the human in the lead and the human in the loop. Two critical components to making these tools work well.
The AI finds the raw information and does the first pass of drafting. But the human brings the judgment and taste to add our unique perspective and ensure it meets our quality requirements.
Another example. We used to have a complex handoff process for new clients that involved pulling together meeting notes, emails, and proposals into a scope of work that matched our agreement structure. Now our system has access to all of the relevant information. It can go find the meeting notes from the Granola MCP. It can use the Google CLI to find all relevant emails and the proposal, then follow our scope of work template in Google Docs, fill it out according to all of the gathered information, and then set it up for human review and client signature.
Hours of scattered work consolidated and happening in the background.
This kind of document creation from a lot of existing data that needs to match a specific format is one of the most common use cases I come across in companies. But they usually struggle to get the off-the-shelf tools to work correctly. To bridge the gap, we've built similar systems for our clients. If you have information scattered across multiple places that needs to come together in a structured way, an AI with access to those places and clear instructions can do that work well today.
The cost of good agents
I've talked about this setup in rooms before, and people often respond by asking if they can see it. My answer is of course, but with a caveat: I'll bet they won't adopt it, even after they see how much time it saves me. It's a new way of working that many people just aren't comfortable with yet.
It takes time to set up. The tools take time to learn (though, not all that much, really). And the systems require constant maintenance and improvements to create value.
I spend a third to half of my day working on the system. Adding new skills, fixing things that didn't work well, giving it access to new information, and building out new agents (folders with instructions and information). The other half to two-thirds of my day becomes dramatically faster because the system handles work that used to take me a lot of time.
Because the whole system is just files on my computer, it doesn't belong to any AI company. The system was originally built with Claude Code. Now it's primarily driven by Codex. But I can still use it with Claude Cowork, or Cursor, or as markdown files in Obsidian (a note-taking tool). It's incredibly flexible and modular. Which is great for a time when the leading model changes monthly. Plus, the system is primed to work with secure local models when they become powerful enough (which I estimate is 12-18 months away).
I will admit there's a risk of tinkering with these tools as a way to feel productive while actually procrastinating. It's important to establish baselines for how long things take you, so you can see the real ROI.
And if it isn't there, evaluate why.
That ratio does improve over time. The more you build, the more the system can do, and the easier it can build itself. But early on, it's certainly an investment. We are slowing down to speed up. I've seen that idea make people very uncomfortable. Me too sometimes, especially with superpowers that allow us to move so fast.
But this is the future of work. We're all traveling to the same place, at different speeds. If you put in the time, you get the reward. If you're looking for a shortcut, stick with off-the-shelf ChatGPT or Claude.
Where to start
If you want the baby steps version, download the Claude desktop app. Switch to Claude Cowork (they offer Chat, Cowork, and Code). Point it at a folder on your computer, any folder anywhere. Tell Claude to help you draft a CLAUDE.md file (the base instructions for the agent, it'll know what this is) and to grill you with questions to set it up properly.
Your agent now has a foundation to build on.
From there, it's a loop. Use the system. When something doesn't work, tell it what to do differently and update the system to ensure it works next time. When you need it to handle something new, write the instructions (with or without the AI). Every time you do that, the folder gets smarter, the AI gets better at your specific work, and the investment compounds from day one.
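On disk, that loop can be as unglamorous as appending to a text file. Here's a hedged sketch, assuming a folder like the ones described above (the folder name and the lesson itself are made up for illustration):

```shell
# Assume an agent folder already exists (names here are illustrative).
mkdir -p my-agent
touch my-agent/CLAUDE.md

# When a task goes wrong, append the lesson so the AI reads it next time.
cat >> my-agent/CLAUDE.md <<'EOF'

## Lessons learned
- Proposals: always pull pricing from the signed agreement, not the draft.
EOF

# The "smarter folder" is just the same file with more accumulated context.
grep "Lessons learned" my-agent/CLAUDE.md
```

Whether you type that yourself or tell the AI to do it, the mechanism is the same: the instructions file grows with every correction.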
The only requirement is that you stay curious and are willing to spend time building something that pays off over time. That can seem like a big ask when you have a looming deadline, but on the other side, the deadlines aren't nearly as intimidating.
