How I Built a 300-Page Internal Book About Modern Health & Wellness With a Multi-Model AI Pipeline
And Why “Prompts” Are Not Enough
I’ve wanted to write a book my entire life. But let’s be real: between running Welltory and managing my entrepreneurial ADHD brain, the sheer discipline required for a 300-page manuscript felt like a fever dream.
But then, Welltory shifted gears. We started to see our market and users differently, and the team needed to absorb an ocean of new information (I will tell you more about our new strategy later). I realized we needed an internal book: an “Onboarding Domain Knowledge” guide to get everyone on the same page.
So, I decided to build it. This is the story of that journey.
Step 1: The Visionary Phase
First, you have to invent the book. AI won’t help you here. You can use it as a sparring partner for dialogue, but ultimately, the outline, the core idea, and the structure are on you.
I’m lucky that our strategy was already well-defined, so this part was relatively easy. I mapped out 1–2 paragraphs per chapter. Later, I realized I’d missed a few things and added extra chapters at the end, but the outline was my foundation. Pro tip: Spend time on the outline. It’s the only thing that keeps the AI from drifting.
Step 2: Information Gathering
Writers usually spend years on research. In the AI era, this phase is unrecognizable.
I spent about $1000 on research alone. Why? Because the level of quality I needed only came from Perplexity’s Deep Research, which costs a lot. I became a “prompt engineer” for research queries, using Claude to help me write the prompts that would extract the best science.
I bought a Google Colab Pro subscription just to keep my sessions alive overnight. I’d launch a massive batch of research and wake up to long Markdown documents filled with links. I then had to process these into “knowledge cards”—individual facts, thoughts, and proofs—which I could then feed into each chapter.
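In code, that card-processing step might look like the minimal sketch below. The one-fact-per-heading layout, the `chapter_tag` ID scheme, and the sample document are my assumptions for illustration, not the exact format I used:

```python
import re

def extract_cards(markdown_text, chapter_tag):
    """Split a research dump into 'knowledge cards', one per '## ' section,
    each tagged with a stable fact ID like HRV-001."""
    cards = []
    # Sections start at level-2 headings; everything until the next heading is the body.
    sections = re.split(r"\n(?=## )", markdown_text)
    for i, section in enumerate(s for s in sections if s.strip().startswith("## ")):
        title, _, body = section.partition("\n")
        cards.append({
            "id": f"{chapter_tag}-{i + 1:03d}",
            "title": title.lstrip("# ").strip(),
            "body": body.strip(),
            # Links stay on the card, out of the writing prompts.
            "links": re.findall(r"https?://\S+", body),
        })
    return cards

doc = """Intro notes
## HRV and stress
Heart rate variability drops under chronic stress. https://example.org/hrv
## Sleep debt
Sleep debt accumulates across nights."""

cards = extract_cards(doc, "HRV")
```

Storing the source links on the card, rather than in the prompt, is what makes it possible to add them back mechanically at the end.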
The link nightmare: Models love to hallucinate links or mix up internal file paths with real URLs. I solved this by giving the AI only “fact IDs” during the writing process and then adding the actual sources as a final mechanical layer at the very end.
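A sketch of that final mechanical layer, assuming a simple `[fact:ID]` marker convention; the marker syntax and the `SOURCES` table here are invented for illustration:

```python
import re

# Hypothetical card store: fact ID -> verified source URL collected during research.
SOURCES = {
    "HRV-001": "https://example.org/hrv-study",
    "SLP-002": "https://example.org/sleep-debt",
}

def attach_sources(draft):
    """Swap [fact:ID] markers the model was given for real links, and flag
    any ID the model invented so it can be checked by hand."""
    unknown = []

    def repl(match):
        fact_id = match.group(1)
        url = SOURCES.get(fact_id)
        if url is None:
            unknown.append(fact_id)
            return match.group(0)  # leave the marker visible for manual review
        return f"([source]({url}))"

    return re.sub(r"\[fact:([A-Z]+-\d+)\]", repl, draft), unknown

text = "Chronic stress lowers HRV [fact:HRV-001], and sleep debt compounds it [fact:XYZ-999]."
fixed, unknown = attach_sources(text)
```

Because the model never sees a URL, it can mangle a fact ID at worst, and a mangled ID is caught by the lookup instead of silently becoming a fake link.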
Step 3: Preparing the Context
My first attempts at context were a disaster. I tried using our company strategy and a style guide based on my social media posts. It didn’t work.
I ended up writing a “Constitution of the Book.” I had to set our basic worldview and core principles in stone. If you don’t, the model will just hallucinate its own “truth” in any direction it wants. I also split my style guide into two parts: my actual voice and an “Anti-Slop Guide” (how NOT to write like an AI).
Step 4: Learning to Write a Chapter (The “Pre-Chewed Salad” Problem)
This was the biggest struggle. I almost burned out here. I tried approach after approach, and everything I produced was garbage.
I am a perfectionist, yes. But if you want something readable, you can’t just “generate” it. One of our scientists gave me the most brutal feedback on an early draft: “It feels like I’m eating a salad that someone else has already chewed for me.”
The AI had no variability. It was monotonous, leaning on the same patterns (“Not once, not twice, but three times”) and the same choppy sentences. Living things are variable (that’s why heart rate is variable); AI is just a slave to its instructions — and variability is something I know a lot about. The more I tried to control it with rigid agents (Scientific Editor, Literary Editor, etc.), the worse it got. It couldn’t sound “alive.”
The Solution: I realized that instead of describing how a human writes, I needed to show it.
I initiated a “fake dialogue” with the model. I’d feed it a request and then a “fake” result—which was actually a real, high-quality text from an author I admire (like Sapolsky). After three of these examples, I’d give it my real request.
I spent a week building a library of 500+ “example” pairs—real, living texts from real humans matched to specific thoughts and directions. This gave the AI a structural and rhythmic foundation that actually felt human.
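With a chat-style API, this fake dialogue is just pre-seeded assistant turns: the model sees the human-written text as its own “previous answers.” The exemplar snippets and prompts below are placeholders, not entries from the actual library:

```python
# Each exemplar pairs a request with a real human-written excerpt, presented
# as if the model had already produced it. Excerpts here are placeholders.
EXEMPLARS = [
    ("Explain why stress responses exist, in a vivid narrative voice.",
     "Imagine a zebra on the savanna..."),   # real excerpt goes here
    ("Open a chapter on sleep with a concrete scene, not a definition.",
     "At 3 a.m. the ward is quiet..."),      # real excerpt goes here
]

def build_messages(system_prompt, real_request, exemplars=EXEMPLARS):
    """Assemble a few-shot 'fake dialogue' for a chat completions call."""
    messages = [{"role": "system", "content": system_prompt}]
    for request, human_text in exemplars:
        messages.append({"role": "user", "content": request})
        messages.append({"role": "assistant", "content": human_text})  # the "fake" reply
    messages.append({"role": "user", "content": real_request})
    return messages

msgs = build_messages("You write in the voice described in the style guide.",
                      "Open the HRV chapter with a scene from everyday life.")
```

The point is that the model imitates the rhythm of what it believes it already wrote, which works far better than describing that rhythm in instructions.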
Step 5: Generation and Control
If you think you just hit “Enter” and wait, think again. I had to be involved at every step on every chapter.
The Lens: I’d discuss the chapter’s “lens” with a specialized Claude skill—what do we really want to say here?
The Task: Claude Opus acted as my “Scientific Editor.” It would select the research cards and then challenge me. It would ask me tough questions about the chapter’s insights, and I’d have to provide my own feedback and commentary.
The Draft: Only then would GPT-5.4 Pro start writing the base of the chapter, using my feedback, all the research, and the exemplar library. At first I tried writing in chunks with cheaper models, but it’s better to use the most expensive models with a big context window in a single pass: that’s the only way to get a coherent chapter.
The Finalization: I’d check the draft, verify the conclusions, and then send it to Gemini Flash for “polishing”—making the language simpler and clearer without changing the substance. Google models are the best at this, probably because they have more data.
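That single-pass drafting setup boils down to concatenating every prepared asset into one large context before the one expensive call. The section labels and sample inputs below are my own stand-ins, not the actual pipeline’s:

```python
def build_chapter_context(constitution, voice_guide, anti_slop, cards, lens, feedback):
    """Assemble everything the drafting model needs into one prompt so the
    whole chapter is generated coherently in a single pass."""
    card_block = "\n\n".join(f"[fact:{c['id']}] {c['text']}" for c in cards)
    sections = [
        ("CONSTITUTION", constitution),
        ("VOICE GUIDE", voice_guide),
        ("ANTI-SLOP GUIDE", anti_slop),
        ("CHAPTER LENS", lens),
        ("EDITOR FEEDBACK", feedback),
        ("RESEARCH CARDS", card_block),
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)

prompt = build_chapter_context(
    "We treat health as a system, not a symptom list.",
    "Short sentences. Concrete scenes.",
    "No rule-of-three runs. No choppy staccato paragraphs.",
    [{"id": "HRV-001", "text": "HRV drops under chronic stress."}],
    "Why variability is a sign of health.",
    "Lead with the everyday scene, not the definition.",
)
```

The trade-off is cost: one big-context call is expensive, but stitching cheap chunked outputs together costs you coherence instead.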
Step 6: The Final Edit
Even after all that, I spent days on edits. AI still misses things. It repeats “filler” words, breaks formatting, or messes up abbreviations. I had to generate a custom glossary for every chapter so the technical terms remained consistent but weren’t over-explained.
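A per-chapter glossary check like that can be partly automated: flag every place a draft drifts into a non-canonical variant of a term. The terms and variants below are invented for illustration:

```python
# Hypothetical per-chapter glossary: canonical term -> variants models drift into.
GLOSSARY = {
    "HRV": ["heart-rate variability", "Heart Rate Variability (HRV)"],
    "circadian rhythm": ["circadian cycle"],
}

def glossary_issues(chapter_text):
    """Report every non-canonical term variant found in a draft,
    as (variant_found, canonical_term) pairs for manual review."""
    issues = []
    lowered = chapter_text.lower()
    for canonical, variants in GLOSSARY.items():
        for variant in variants:
            if variant.lower() in lowered:
                issues.append((variant, canonical))
    return issues

report = glossary_issues(
    "Tracking heart-rate variability helps, and your circadian cycle matters."
)
```

A flagged variant isn’t always wrong (a term may be spelled out on first use), which is why this reports for review rather than auto-replacing.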
The best part? Generating the illustrations and seeing the book finally come together. It’s 300 pages of fresh, unique information—the world’s latest scientific knowledge mixed with our specific strategy and market insights.
My “Obsessed Founder” Takeaways:
Examples > Instructions: Just like with people, showing works better than telling. Don’t just prompt; provide examples.
The Context is Everything: The more “obsessive” you are about preparing the context and “Constitution,” the better the result.
Use the Whole Toolbox: I had to use every model on the market to get this right:
GPT-5.4 Pro: For the core writing.
Claude Opus: For the high-level task setting.
Claude Sonnet: For structuring and link processing.
Perplexity: For the deep research.
Gemini Flash: For linguistic polishing.
ChatGPT Image: For the visuals.
The world has changed. Every piece of knowledge on the planet is at your fingertips. The only thing that differentiates us now is our level of perfectionism and having a unique vision that can point the AI in the right direction.
The AI is the engine, but you are still the driver.
I learned so much on this journey. I’m just grateful I stuck with it and delivered something real, rather than giving up and dumping a boring 'research collection' on my team.
This is a brand-new skill set for a brand-new era. It fundamentally shifts our approach to work; things that used to be too slow, too expensive, or simply out of reach are now possible. It’s incredible how much the game has changed.