Confession: I Broke My Own AI Rule
⏱️ Read time: ~2.5 min
You know that moment when you realize you've been giving great advice to everyone else… and then completely ignoring it yourself?
Yeah. That was me the other week.
I teach coaches and consultants how to work with AI, not against it. I talk about adapting your process to fit how the tool actually works. I preach flexibility, testing, and pivoting when something isn't landing.
And then I spent hours banging my head against a wall trying to force ChatGPT to do something it was never designed to do well.
Let me tell you what happened.
The AI Workflow (That Became My Nemesis)
A client needed data extracted from a stack of lengthy and complex PDFs: financial documents, very detailed, not OCR-ready. The extraction rules were fairly straightforward. The output format? A beautifully formatted Word table with bold headers, merged cells, the works.
My client has access to ChatGPT Plus and Microsoft Copilot. Easy, right?
Initial tests looked… okay. But "okay" doesn't cut it when you're dealing with financial data that needs to be client-ready.
So I did what any reasonable person would do: I tried harder.
I converted the PDFs. I tried OCR. I rewrote prompts, then had ChatGPT help me rewrite them. I broke the task down into smaller steps.
And the results stayed OK. But never good enough. Why? Because ChatGPT was only reading the summary pages. It wasn't diving into the full documents, no matter how I prompted it, no matter how far I broke the process down into small, easy-to-digest chunks. (Even the threat of punishment didn't work 🫣.)
And even when the prompt was almost good enough, after countless iterations, the formatted table output I was asking for kept fighting how an LLM naturally processes and structures data.
For a moment, I was stuck. And honestly? A little embarrassed.
Breakthrough #1 (Thanks, Claude)
Finally, I gave up on ChatGPT and opened Claude. But instead of asking Claude to do the task, I asked a different question:
"How should I restructure this output so YOU can work with it consistently?"
Claude's answer was simple: Stop asking for a formatted table. Use a CSV structure (one cell, one fact), then use a Python script to generate the formatted Word table.
I restructured. I tested. It worked. Beautifully. Consistently. Accurately.
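In case you're curious what "one cell, one fact" looks like in practice, here's a rough sketch. The file layout and field names are my own illustration, not the client's actual schema, and the real script went on to render the Word table with python-docx:

```python
import csv
import io

# "One cell, one fact": each CSV row holds exactly one (document, field, value) fact.
# This sample layout is illustrative only.
SAMPLE = """\
document,field,value
Q3 report,Revenue,"1,200,000"
Q3 report,Net income,"310,000"
Q4 report,Revenue,"1,450,000"
Q4 report,Net income,"295,000"
"""

def rows_to_table(csv_text):
    """Group flat (document, field, value) facts into one table row per document."""
    facts = list(csv.DictReader(io.StringIO(csv_text)))
    fields = []   # preserve first-seen field order for the header row
    docs = {}     # document name -> {field: value}
    for fact in facts:
        if fact["field"] not in fields:
            fields.append(fact["field"])
        docs.setdefault(fact["document"], {})[fact["field"]] = fact["value"]
    header = ["Document"] + fields
    body = [[doc] + [vals.get(f, "") for f in fields] for doc, vals in docs.items()]
    return [header] + body

table = rows_to_table(SAMPLE)
# From here, a few lines of python-docx (doc.add_table plus a bold header row)
# turn `table` into the formatted Word output.
for row in table:
    print(row)
```

The point of the intermediate CSV is that the LLM only ever has to emit flat, verifiable facts; all the formatting (bold headers, merged cells) lives in deterministic code where it can't drift between runs.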
But here's the thing: it still wasn't saving me the time I expected it to save. And if AI isn't saving me time, why use it?
Something was nagging at me.
The Real Breakthrough (Sunday Afternoon Edition)
During our Sunday afternoon walk in nature that week, I had an inkling.
What about NotebookLM?
When we got home, I grabbed my laptop (yes, I broke my "no work on weekends" rule). I opened NotebookLM, loaded the PDFs into a notebook, took the prompt I'd developed earlier, selected all the sources, and hit go.
In about 2-3 minutes, I had what I needed.
No file conversion. No CSV. No Python script. Just upload, prompt, copy, paste. Done.
I sat there thinking: What on earth? This is so simple. Why didn't the other tools do this?
Why NotebookLM Worked (The Technical Bit)
Here's what makes NotebookLM different:
Full indexing: It uploads and fully indexes PDFs (up to 500,000 words per file), creating a dedicated "expert" AI that only references your sources. This minimizes hallucinations and enables accurate queries across the entire document.
RAG retrieval: It retrieves exact chunks dynamically with inline citations, so you can verify sources. No guessing, no skipping sections.
Better preprocessing: It handles complex layouts (tables, multi-column text, images) more reliably than general LLMs like ChatGPT or Claude, which often truncate long PDFs (no matter how well you prompt them).
Context retention: NotebookLM retains full context across sessions and can handle up to 50 sources per notebook, perfect for multi-PDF analysis.
In short: NotebookLM is built for this exact use case. ChatGPT and Claude? Not so much. Or rather, you can get the work done, just not as easily.
Key Takeaway
Here's the lesson I had to relearn the hard way:
Sometimes it's not about adapting your process. It's about choosing the right tool in the first place.
I was so focused on making ChatGPT work and then on perfecting the workaround that I forgot to ask: Is there a tool designed specifically for this?
Sometimes, tool selection matters more than perfect prompts or clever workarounds. If you're fighting the tool, you're either using it wrong, or you're using the wrong one.
Your Action Step
Next time you're stuck in a loop with AI (prompts aren't working, outputs are inconsistent, you're spending more time fixing than creating), pause and ask yourself:
"Is there a tool built specifically for this task?"
Don't default to the familiar. Don't settle for "good enough." Explore. Test. Pivot.
You might be surprised by how simple (and fast) the right tool makes everything.
Have fun experimenting,
Elena
P.S. If you've ever caught yourself ignoring your own advice, reply and tell me about it. Misery loves company. 😄

Elena Jaeger
Founder, Future of Work
"AI is the most powerful tool of our time.
It's not here to replace you. It's here to free you, so you can focus on high-impact work, serve your clients better, and finally get your time back."
I help coaches and consultants use AI strategically, without tech overwhelm or losing their human edge.