Journal
How to Build AI Agents Without Writing Code
I can’t write Python from memory. I don’t know what a decorator does. I build AI agents for real businesses and have deployed them to clients running health coaching programs, crypto mining compliance, and real estate operations.
The first agent I built was OpenClaw, a chatbot that queries my YAML knowledge base. I described what I wanted in plain English, gave Claude Code the folder structure, and told it to build a tool-calling loop with memory. It worked on the first afternoon. That afternoon changed how I think about who gets to build with AI.
“Building an Agent” Without Code
The phrase “build an AI agent” sounds like it requires a CS degree. It doesn’t.
An AI agent is a loop: receive input, read context, call tools, return output. You build one by defining three things:
- The context it reads (files, databases, APIs, documents)
- The tools it can call (read a file, search content, send an email, update a record)
- The rules it follows (a system prompt, a skill file, a CLAUDE.md)
You don’t write the loop. Claude Code, n8n, or your framework of choice handles the loop. You design what goes into it.
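The loop itself is simple enough to sketch. This is a minimal illustration of what a framework runs on your behalf, not any framework's actual internals; `call_model` stands in for whatever LLM API sits underneath, and the message format is simplified:

```python
# Minimal sketch of the loop a framework runs for you.
# call_model is a stand-in for the underlying LLM API; tools maps
# tool names to plain functions you describe, not write yourself.

def run_agent(user_input, tools, call_model, max_steps=10):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = call_model(messages)        # model reads context + rules
        if reply["type"] == "tool_call":    # model asks to use a tool
            result = tools[reply["name"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:                               # model is done: return output
            return reply["content"]
    return "Stopped after max_steps."
```

Everything you design as an agent builder lives in the arguments: the context behind the tools, the tool functions themselves, and the rules the model reads before replying.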
Agent building is systems design, not software engineering.
| What Developers Focus On | What Agent Builders Focus On |
|---|---|
| Language syntax, frameworks, dependencies | Process mapping, data structure, user workflows |
| Writing functions and classes | Writing prompts, skill files, and context documents |
| Debugging code errors | Debugging bad outputs from missing context |
| Optimizing performance | Optimizing the information the agent can access |
| Shipping features | Shipping outcomes for a specific business |
A Real Agent, Start to Finish
I’ll walk through building OpenClaw, the agent that powers my Knowledge Operating System.
The problem: I had 60+ YAML files tracking contacts, projects, services, daily logs, and brand strategy. I needed a way to ask questions across all of them without opening files by hand.
Step 1: Map the process.
Before I touched a tool, I mapped what the agent needed to do:
- Read any YAML file in the knowledge base
- Search across files by keyword
- List directory contents
- Write and update records
- Remember things across sessions
Step 2: Structure the data.
The agent is useless without structured, consistent data. I built schemas for contacts, projects, services, and content pipeline items. Each file follows the same _meta format with IDs, timestamps, and tags.
knowledge-base/
├── data/ → 60+ YAML records
├── schemas/ → Structure definitions
├── skills/ → Agent instruction sets
├── config/ → Agent configuration
└── chatbot/ → FastAPI + single-page UI
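To make the `_meta` format concrete, here is what a record in this shape might look like. The field names are a sketch, not the exact schema:

```yaml
# Sketch of a record with a consistent _meta block (fields illustrative)
_meta:
  id: contact-0042
  type: contact
  created: 2025-01-15T09:30:00Z
  updated: 2025-02-01T14:10:00Z
  tags: [client, real-estate]
name: Jane Example
company: Example Realty
status: active
```

The consistency is the point: when every record carries the same `_meta` block, the agent can filter, sort, and cross-reference files without special-casing each type.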
Step 3: Write the skill file.
I wrote a 5,800-word skill file (SKILL.md) that tells the agent how to read, write, query, and maintain every record type. The skill file is the agent’s brain. Claude Code reads it before doing anything.
Step 4: Define the tools.
I described five tools in plain language: read_file, write_file, search_content, list_directory, manage_memory. Claude Code wrote the Python functions. I tested them by asking questions.
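In practice, four of the five tools are thin wrappers around the filesystem. A simplified sketch of what a generated version might look like (the real functions would add path validation and error handling, and `manage_memory` is covered later):

```python
from pathlib import Path

# Simplified sketches of the file tools. A generated version would
# validate paths and handle errors; these show only the core behavior.
BASE = Path("knowledge-base")

def read_file(path):
    return (BASE / path).read_text()

def write_file(path, content):
    (BASE / path).write_text(content)

def list_directory(path="."):
    return sorted(p.name for p in (BASE / path).iterdir())

def search_content(keyword):
    # Paths of YAML files whose text mentions the keyword, case-insensitive.
    return [str(p.relative_to(BASE)) for p in BASE.rglob("*.yaml")
            if keyword.lower() in p.read_text().lower()]
```

The describing-in-plain-language part is real work: each tool needs a name, a one-line purpose, and its parameters spelled out so the model knows when to reach for it.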
Step 5: Test with real queries.
- “Show me all active projects for Quantum Club”
- “Add a meeting with George to today’s log”
- “Search for anything related to technostress”
Each failed query revealed missing context. I fixed the skill file, added examples, tightened the schemas. The agent improved with each round.
An agent gets better when you improve the context it reads, not the code it runs.
Skills That Matter More Than Code
I wrote about this in Why the Best AI Agent Builders Are Not Developers. The short version:
Process mapping beats programming. If you can draw a workflow on a whiteboard, you can describe it to an AI. If you can describe it, you can build an agent around it.
| Skill | Why It Matters | Example |
|---|---|---|
| Process mapping | You need to describe the workflow before the agent can follow it | Drawing the client onboarding flow before automating it |
| Data structuring | Agents read structured data. Messy data produces bad answers | Building YAML schemas with consistent IDs, timestamps, and tags |
| Prompt design | The system prompt determines 80% of agent quality | Writing a 5,800-word skill file that handles edge cases |
| The “And Then What?” test | Forces you to think past the first step | “Agent sends email.” And then what? “Client replies.” And then what? |
| Domain knowledge | You understand the business problem the agent solves | Knowing that real estate agents need follow-up automation, not more dashboards |
Developers write better functions. Non-developers ask better questions about the business problem. The questions matter more, because a well-structured problem with average code outperforms a poorly understood problem with clean architecture.
Where I Got Stuck
Memory was the hardest part. My first version of OpenClaw forgot everything between sessions. I’d correct it, teach it a preference, and the next conversation started blank. I built a two-tier memory system: core memory (15 entries, loaded on start) and episodic memory (retrieved by relevance, no cap). That fixed it.
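The two-tier idea fits in a few lines. This sketch substitutes keyword overlap for the embedding-based relevance scoring the real version uses, and the eviction policy here (drop the oldest core entry) is illustrative:

```python
class TwoTierMemory:
    """Sketch of core + episodic memory. The real version scores
    episodic entries with embeddings; keyword overlap stands in here."""
    CORE_CAP = 15

    def __init__(self):
        self.core = []       # always loaded at session start
        self.episodic = []   # retrieved by relevance, no cap

    def remember_core(self, fact):
        if len(self.core) >= self.CORE_CAP:
            self.core.pop(0)           # evict oldest when full (illustrative)
        self.core.append(fact)

    def remember_episode(self, text):
        self.episodic.append(text)

    def recall(self, query, k=3):
        q = set(query.lower().split())
        ranked = sorted(self.episodic,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return self.core + ranked[:k]  # core always included
```

The split is what matters: a small set of facts the agent must never forget, and an open-ended archive it only pulls from when relevant.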
Context windows fill up fast. Loading the full knowledge base into a single prompt doesn’t work past 30 files. I had to build tool-based retrieval: the agent searches for what it needs instead of reading everything at once.
Agents break in production, not in testing. My test queries were clean and specific. Real users ask vague questions, use abbreviations, reference things by nickname. I added fuzzy matching and example queries to the skill file to handle the gap.
| Problem | What I Tried First | What Fixed It |
|---|---|---|
| No memory across sessions | Stuffing conversation history into the prompt | Two-tier memory: core + episodic with embeddings |
| Context window overflow | Loading all files on start | Tool-based retrieval, agent searches on demand |
| Vague user queries | Strict input formatting | Fuzzy matching + example queries in the skill file |
| Inconsistent outputs | Longer prompts with more rules | Structured YAML schemas that constrain what the agent can return |
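Fuzzy matching does not require a library. Python's standard-library `difflib` is enough for a first pass; this is a sketch with made-up project names, not my production version:

```python
from difflib import get_close_matches

# Map vague user references to canonical record names (names illustrative).
KNOWN_PROJECTS = ["Quantum Club", "Health Coaching Program",
                  "Crypto Mining Compliance", "Real Estate Follow-Up"]

def resolve_project(user_text, cutoff=0.4):
    """Return the closest known project name, or None if nothing is close."""
    matches = get_close_matches(user_text, KNOWN_PROJECTS, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

Even this much turns "quantum club" typed in lowercase into the right record instead of a failed lookup.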
Your Agent vs. a Developer’s Agent
A developer might build the same agent with cleaner code, better error handling, and a more elegant architecture. The output for the end user looks the same.
The business owner asking “show me my pipeline for this week” gets the same answer from my agent and a developer’s agent. The difference is in the codebase, not the result.
I ship agents in days. A developer might take weeks and deliver something more maintainable. For a small business that needs automation now, speed wins. For a company scaling to thousands of users, hire the developer.
Build Your First Agent This Weekend
You need three things:
- A folder of structured documents (YAML, JSON, Markdown, anything with consistent format)
- A skill file that describes what the agent does, how it reads data, and what rules it follows
- Claude Code (or another tool-calling LLM) pointed at that folder
Saturday morning:
- Pick one repetitive task you do at work (answering questions about a project, looking up client info, summarizing meeting notes)
- Write down the steps you follow when you do it by hand
- Structure 5-10 documents the agent will need to read
Saturday afternoon:
- Write a skill file in plain English: “You are an agent that [does X]. You have access to [these files]. When asked about [Y], you should [Z].”
- Open Claude Code, point it at the folder, and ask your first question
- Fix what breaks. Add context where the agent guesses wrong.
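A first skill file really can be that short. This is a sketch of the shape, not a template to copy verbatim; the paths and rules are placeholders for your own:

```markdown
# SKILL.md (sketch)
You are an agent that answers questions about my client projects.

## Data
You have access to the files in ./projects/, one YAML file per project.
Every file has a _meta block with an id, timestamps, and tags.

## Rules
- When asked about a project, search by name first, then by tag.
- If a query matches nothing, say so. Never invent a record.
- When adding a record, follow the schema in ./schemas/project.yaml.
```

Every edge case you hit on Saturday becomes another rule or example in this file.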
Sunday:
- Show it to someone. Ask them to query it. Watch where it fails on questions you didn’t anticipate.
- Update the skill file with those edge cases.
The agent you build this weekend will be rough. The agent you have in two weeks, after fixing edge cases and adding context, will surprise you.
The Systems Before Tools principle applies: document the process, structure the data, define the rules. Then pick the tool. The tool is the last step, not the first.
I’m Shahab Papoon. I build AI agents for businesses through ConnectMyTech and I’ve never written a function from scratch. The skill that matters is knowing what the business needs and structuring that knowledge so an AI can act on it.
Keep Reading
- Why the Best AI Agent Builders Are Not Developers, the case for operational thinking over code
- I Can’t Code. I Made 6 Working Apps in 30 Days., the full product-building journey
- I Built a Chatbot to Query My YAML Knowledge Base, OpenClaw’s architecture and deployment
- Systems Before Tools, the framework behind everything I build