An Accountant Presented to a Room Full of Techies About His AI Agent Setup
60 engineers, developers, tech leads — and 2 accountants. Here's what happened when I showed my personal AI agent setup to the AI Innovation Council.
I counted the room. Roughly 60 people. Engineers. Developers. Tech leads. CTOs.
And 2 accountants.
I was one of them. The other one, I assume, was there for moral support. Or curiosity. Or possibly because they got lost on the way to a different meeting.
Either way, there I was. An accountant standing in front of the AI Innovation Council in Charlotte, about to explain how I built and run a personal AI agent.
The Setup
Let me paint the picture. This wasn’t a casual lunch-and-learn. The AI Innovation Council is a serious group. These are people who build AI systems for a living. Who debate model architectures and fine-tuning strategies over coffee. Who have opinions about transformer attention mechanisms.
And I was about to show them my setup. The one I built without a computer science degree. Without a development team. Without venture funding. Just an accountant with a curiosity problem and too much persistence.
The imposter syndrome was real. For about thirty seconds.
Then I remembered: that’s exactly why they invited me.
What I Presented
I walked them through the whole thing, start to finish.
The Architecture
How FRED is set up. The model, the tools, the connections. How an AI agent goes from “fancy chatbot” to “useful collaborator” when you give it the right infrastructure.
I kept it practical. Not theoretical. Not “here’s what’s possible.” Instead: “here’s what I actually built, here’s how it works, here’s what it cost me.”
The API Connections
This is where it got technical, and I’ll be honest — I was nervous about this part. Explaining API integrations to a room full of people who work with APIs every day felt a little like explaining guitar chords to Eric Clapton.
But the questions I got told me something important: they weren’t judging my technical sophistication. They were genuinely curious about the practical application. Which APIs. How they connect. What the data flow looks like. What breaks and how you fix it.
Turns out, the people who build complex systems professionally are fascinated by what a non-developer does with the same tools. Different perspective, different use cases, different problems.
The Use Cases
This was my strongest section. Because while I might not know more than the audience about technology, I definitely know more about what an accountant needs from an AI agent.
Financial analysis. Research compilation. Content creation. Email management. Presentation prep. The full spectrum of what FRED does for me on a daily basis.
I showed real examples. Actual outputs. Before and after. Not sanitized demos — the messy, iterative reality of working with an AI agent.
The Security Focus
This is where the accountant in me really showed up.
I spent a solid chunk of the presentation on security. How I think about data protection. What I do and don’t feed into AI systems. How I handle API keys. What my boundaries are.
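The article doesn't show how FRED actually stores its keys, but the boundary it describes — keys never hardcoded, never pasted into prompts — is commonly enforced with environment variables. A minimal sketch of that pattern (the variable name `DEMO_KEY` below is just an example):

```python
import os

def load_api_key(name: str) -> str:
    """Read an API key from an environment variable instead of hardcoding it.

    Keys live in the environment (or a .env file excluded from version
    control), so they never appear in source files or chat transcripts.
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Missing environment variable: {name}")
    return key
```

The point of the indirection is that rotating a compromised key means changing one environment variable, not hunting through scripts.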
In a room full of tech professionals, I expected this to be the section where they tuned out. They know security.
Instead, it was the section with the most questions.
The Questions That Mattered
The Q&A told me more about what people actually care about than the presentation did.
“What does this cost?”
Everyone wants to know about API costs. Not in theory — in practice. Monthly spend. Cost per conversation. Whether it scales linearly or exponentially.
I showed them my actual numbers. Because that’s the accountant move. Don’t theorize about costs — show the invoice.
The number was lower than most expected. That got people’s attention.
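My actual invoice stays between me and my provider, but the arithmetic behind "cost per conversation" is simple: tokens in and out, multiplied by per-million-token rates. A small sketch — the rates below are illustrative placeholders, not any provider's real pricing:

```python
def conversation_cost(input_tokens: int, output_tokens: int,
                      price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost of one conversation, given per-million-token prices.

    Rates vary by model and provider; pass your provider's current numbers.
    """
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# Illustrative rates only -- check your provider's pricing page.
cost = conversation_cost(12_000, 3_000, price_in_per_m=3.0, price_out_per_m=15.0)
```

Because cost is linear in tokens, the accountant's move is to track token counts per conversation and let the spreadsheet do the rest.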
“How do you handle the memory problem?”
This question came from multiple people. The AI memory problem is universal, and apparently even the people building AI systems haven’t fully solved it.
I walked through my memory system — the daily logs, the long-term memory file, the search layer. Simple compared to what these engineers could build, but functional. And sometimes functional beats sophisticated.
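The article doesn't publish the implementation, but a memory system of that shape — daily logs, a long-term file, a search layer — can be sketched with nothing but flat files. The directory layout (`memory/daily`, `memory/long_term.md`) is a hypothetical example, not FRED's actual structure:

```python
import datetime
import pathlib

# Hypothetical layout: one markdown file per day, plus one long-term file.
LOG_DIR = pathlib.Path("memory/daily")
LONG_TERM = pathlib.Path("memory/long_term.md")

def log_entry(text: str) -> None:
    """Append a timestamped note to today's daily log file."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    path = LOG_DIR / f"{datetime.date.today().isoformat()}.md"
    stamp = datetime.datetime.now().strftime("%H:%M")
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {text}\n")

def search_memory(term: str) -> list[str]:
    """Naive search layer: case-insensitive scan of all memory files."""
    files = sorted(LOG_DIR.glob("*.md")) if LOG_DIR.exists() else []
    if LONG_TERM.exists():
        files.append(LONG_TERM)
    hits = []
    for path in files:
        for line in path.read_text(encoding="utf-8").splitlines():
            if term.lower() in line.lower():
                hits.append(f"{path.name}: {line.strip()}")
    return hits
```

A grep over markdown files is crude next to a vector database, but it is inspectable, portable, and debuggable by a non-developer — which is exactly the "functional beats sophisticated" trade.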
“How do you think about API security?”
The security questions were the sharpest. Specific. Pointed. From people who understand the attack surfaces.
What happens if your API keys get compromised. How you compartmentalize what the AI can access. Whether you encrypt your memory files. What your threat model looks like.
I had answers for most of them. Not all of them. And being honest about the gaps earned more respect than pretending they didn’t exist.
What I Learned From the Room
Standing in front of 60 technical professionals as a non-technical person is a specific kind of vulnerability. You can’t hide behind jargon. You can’t wave your hands at the hard parts. You have to be straightforward about what you know and what you don’t.
And here’s what surprised me: nobody was dismissive. Nobody scoffed at the non-developer building AI systems. Nobody suggested I was in over my head.
Instead, they leaned in. They asked hard questions because they were genuinely interested, not because they were trying to trip me up. They wanted to understand how someone outside their world approaches the same problems they work on every day.
It turns out that the perspective of a practitioner — someone using AI tools to solve real business problems without a technical background — is genuinely valuable to the people building those tools.
The Bigger Takeaway
I walked into that room thinking I was going to teach some tech people what an accountant can do with AI.
I walked out realizing the exchange went both ways.
Their questions pushed my thinking. Their feedback sharpened my approach. Their security concerns identified gaps I need to close.
And I’m all the more grateful that the room was full of people who asked hard questions and were open to learning from an accountant.
What You Can Do
If you’re a non-technical professional exploring AI, here’s what I want you to take from this:
You belong in the conversation. AI isn’t just for engineers. The people building AI tools need to hear from the people using them. Your perspective matters.
Start with what you know. I didn’t pretend to be a developer. I presented as an accountant who uses AI. That authenticity connected with the audience more than any technical flexing would have.
Security isn’t optional. Especially if you’re handling financial data, client information, or anything sensitive. Think about security before you build, not after something breaks.
Be honest about the gaps. You don’t know everything. Neither does anyone else in the room. Acknowledging what you don’t know builds trust faster than pretending.
Share your work. The journey from “personal experiment” to “speaking invitation” happened because I wrote about what I was building. Openly, honestly, including the failures. People respond to that.
Two accountants walked into a room full of techies.
At least one of them walked out with a lot more questions to explore — and a lot more confidence that the exploration matters.