I didn’t begin last year with a plan to write books.
I began by teaching.
Specifically, I began by creating short, topical teachings for a men’s group. One idea at a time. Often no more than a page or two. They were written to be read aloud, discussed, challenged, and lived with for a week. The goal wasn’t output. It was clarity in service of real people.
Over time, something shifted. Patterns emerged. Questions repeated. Ideas that initially felt situational proved durable. Those short teachings didn’t just accumulate; they cohered. Eventually, almost without intention, they formed the manuscript that became Faithful and Present: A Field Guide for Husbands and Fathers Living with Intention.
That process forced a reckoning.
As I worked on that manuscript, I began revisiting old material I had accumulated across decades: teaching outlines, leadership notes, project reflections, academic fragments, margin notes from books long finished, and half‑formed ideas captured during seasons of work in technology, education, hospitality, and field leadership. I realized I wasn’t short on ideas. I was short on structure.
What followed wasn’t a burst of inspiration, but a disciplined act of organization. I grouped ideas into broad categories. I identified recurring tensions. I named questions that had never fully resolved. Only then did I introduce AI into the process—not as a writer, but as a collaborator.
With that collaboration in place, I finished my first book by the end of the year. Then another. Then a third soon after. Today, I have eight more books in active development, all moving through the same operating model described in this article.
What changed wasn’t my ability to write. I had been writing and teaching for years. What changed was that I finally stopped improvising and started operating.
This article is not simply a description of a process. It’s an account of how lived experience, accumulated thinking, and disciplined AI collaboration can be translated into finished work—without surrendering authorship, integrity, or voice.
When AI is treated as a disciplined collaborator inside a well-designed system, it becomes a force multiplier for clarity and stewardship rather than a shortcut that erodes authorship.
The core idea: AI belongs in the back of house
For the past fifteen years, I’ve worked in and around hospitality, from guest experience design to outdoor hospitality through camping and off‑road guiding. One lesson shows up everywhere in that world: great experiences depend on a disciplined separation between front of house and back of house.
Front of house is what the guest experiences directly: presence, pacing, tone, hospitality. Back of house is where preparation, systems, quality control, and recovery live. When those roles blur, the guest feels it immediately.
I learned this again when launching my own outdoor hospitality business. Guests don’t want to see logistics, contingency plans, or risk management. They want confidence, care, and an experience that feels intentional. That only happens when the back‑of‑house systems are doing their work quietly and well.
Writing operates the same way.
When AI is placed in the front of house, it becomes the visible intelligence of the work. The thinking feels assembled. The tone flattens. The author recedes. Readers may not articulate the problem, but they sense it.
The alternative is back‑of‑house AI.
In this posture, AI supports the work the reader never needs to see: organizing raw notes, surfacing patterns, stress‑testing structure, accelerating revision, enforcing consistency, and managing production details. I remain out front, responsible for voice, judgment, and presence.
This separation isn’t about hiding AI. It’s about respecting roles. Just as good hospitality depends on disciplined boundaries, good authorship depends on knowing where collaboration belongs—and where it does not.
Why an agentic workflow works (and what research suggests)
This way of working did not originate in writing for me. It came from years of system design and team leadership.
Across technology platforms, education programs, and service organizations, I learned that complexity does not respond to heroics. It responds to structure. No serious team asks one person to hold requirements, architecture, implementation, testing, deployment, and operations in their head at once. Roles are separated. Gates are introduced. Feedback loops catch errors early.
Writing is no different.
When planning, drafting, revising, and positioning collapse into a single cognitive mode, quality suffers. Research on cognitive load, writing processes, and deliberate practice supports this, but the lesson is already evident to anyone who has led complex work: mode confusion creates defects.
Four principles guide this model:
- Cognitive load is real. Holding too many modes at once degrades judgment. Separating phases protects thinking.
- Feedback loops outperform raw output. Iteration produces depth and coherence where speed alone does not.
- Specialization improves quality. Distinct roles surface distinct failure modes.
- Gates prevent rework debt. Early discipline reduces later collapse.
The agentic workflow described here is not novel. It is translational. It applies proven systems patterns to authorship, using AI to simulate role specialization where human teams are unavailable.
The BrainSpark to Bookshelf model
Once I recognized that I was no longer writing one book at a time, the need for a repeatable model became unavoidable. Each project began differently, but successful ones moved forward the same way.
At its simplest, the model looks like this:
- BrainSpark (signal capture)
- Concept and positioning
- Structural architecture
- Drafting
- Developmental editing
- Line editing
- Copy and messaging
- Metadata and discoverability
- Production and launch readiness
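For readers who think in systems, the model above can be sketched as a staged pipeline in which each phase has an agent role and a human approval gate, and no phase begins until the previous gate passes. The sketch below is illustrative only: the `Phase` dataclass, `run_pipeline` function, and `approve` callback are hypothetical names introduced for this example, not part of any actual tool.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Phase:
    """One back-of-house phase: an agent produces artifacts, a human gate approves."""
    name: str
    agent_role: str
    gate: str  # the question the human must answer before advancing

def run_pipeline(phases: list[Phase], approve: Callable[[Phase], bool]) -> list[str]:
    """Advance phase by phase; stop at the first gate the human does not approve."""
    completed: list[str] = []
    for phase in phases:
        if not approve(phase):  # human-in-the-loop: no silent auto-advance
            break
        completed.append(phase.name)
    return completed

PIPELINE = [
    Phase("BrainSpark", "Insight Harvester", "Signal outweighs noise"),
    Phase("Concept and positioning", "Concept and Positioning Strategist",
          "A clear, defensible promise"),
    Phase("Structural architecture", "Architectural Outliner",
          "Every chapter earns its place"),
    Phase("Drafting", "Ghostwriter", "Readable end to end"),
    Phase("Developmental editing", "Developmental Editor",
          "Structural and intellectual integrity"),
    Phase("Line editing", "Line Editor", "Nothing accidental or inflated"),
    Phase("Copy and messaging", "Book Copywriter", "Truthful alignment"),
    Phase("Metadata and discoverability", "Metadata Optimizer",
          "Discoverability with integrity"),
    Phase("Production and launch readiness", "Production Advisor",
          "Ready for readers"),
]

if __name__ == "__main__":
    # Example: withhold approval at line editing, simulating a manuscript
    # sent back for another developmental pass before polish begins.
    done = run_pipeline(PIPELINE, approve=lambda p: p.name != "Line editing")
    print(done)
```

The design point the sketch makes is the one the article insists on: approval is a function the human supplies, not a default the system assumes.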
What follows is not a generic workflow description. It is how each phase shows up in my own work, with explicit roles for AI agents and clear points where human authority intervenes.
Phase 0 — BrainSpark (Signal capture)
This phase usually begins quietly. A teaching that lands harder than expected. A question that refuses resolution. When I revisited decades of notes, this was the work I was doing—listening for signal instead of forcing conclusions.
Agent: Insight Harvester
Human-in-the-loop: I approve the core question and the tension worth carrying forward.
Purpose: Capture raw insight without dilution.
Artifacts: Idea log, core question, tension map.
Gate: Signal outweighs noise.
Example invocation prompt (illustrative, not exact):
You are acting as an Insight Harvester.
Your role is to extract signal from raw notes without turning them into prose.
Identify recurring themes, unresolved tensions, and questions worth carrying forward.
Do not resolve or polish ideas. Surface them honestly.
Output a concise signal list, a core question, and a tension map.
Phase 1 — Concept and positioning
This is where many ideas earn the right to continue—or quietly end. I learned to slow down here after paying for early shortcuts with major rewrites.
Agent: Concept and Positioning Strategist
Human-in-the-loop: I approve the promise, audience, and framing.
Purpose: Decide why this book should exist.
Artifacts: Positioning brief, working title and subtitle, audience definition.
Gate: A clear, defensible promise.
Example invocation prompt (illustrative, not exact):
You are acting as a Concept and Positioning Strategist.
Pressure-test this idea as a book.
Define the central claim, intended audience, and what this book explicitly is not.
Evaluate shelf fit and redundancy risk.
Keep claims narrow and defensible.
Phase 2 — Structural architecture
My systems background shows up most clearly here. I do not write well without structure, and readers sense it when chapters do not do distinct work.
Agent: Architectural Outliner
Human-in-the-loop: I approve the structure before drafting begins.
Purpose: Build the load-bearing frame of the book.
Artifacts: Argument map, chapter architecture, reader journey.
Gate: Every chapter earns its place.
Example invocation prompt (illustrative, not exact):
You are acting as an Architectural Outliner.
Design the argument and chapter structure before drafting.
Map claim progression and reader transformation.
Ensure each chapter performs unique work.
Do not write prose.
Phase 3 — Drafting
When structure is clear, drafting becomes momentum-driven rather than fragile.
Agent: Ghostwriter (working in my voice)
Human-in-the-loop: I review for completeness and intent.
Purpose: Produce a complete manuscript.
Artifacts: Full draft, chapter summaries.
Gate: Readable end to end.
Example invocation prompt (illustrative, not exact):
You are acting as a Ghostwriter working in my established voice for this subject area.
Draft the chapter according to the approved structure.
Separate claims from evidence.
Flag any areas where support or clarification may be required.
Phase 4 — Developmental editing
This is where humility matters most. Weak logic surfaces. Good ideas sharpen—or disappear.
Agent: Developmental Editor
Human-in-the-loop: I approve major revisions and claim integrity.
Purpose: Strengthen coherence and argument.
Artifacts: Edit memo, claim-support matrix, revision plan.
Gate: Structural and intellectual integrity.
Example invocation prompt (illustrative, not exact):
You are acting as a Developmental Editor.
Evaluate coherence, logic, and alignment with the book’s promise.
Identify weak claims, redundancy, and structural drift.
Diagnose; do not rewrite.
Phase 5 — Line editing
Here the book learns to sound like itself.
Agent: Line Editor
Human-in-the-loop: I approve tone and cadence.
Purpose: Increase clarity and precision.
Artifacts: Polished manuscript, style sheet.
Gate: Nothing accidental or inflated.
Example invocation prompt (illustrative, not exact):
You are acting as a Line Editor.
Tighten language for clarity, rhythm, and precision.
Preserve voice.
Remove filler and unnecessary abstraction.
Phase 6 — Copy and messaging
Good books fail when copy misrepresents them. This phase protects alignment.
Agent: Book Copywriter
Human-in-the-loop: I approve all outward-facing language.
Purpose: Invite the right reader.
Artifacts: Back cover copy, Amazon description, cover hook.
Gate: Truthful alignment.
Example invocation prompt (illustrative, not exact):
You are acting as a Book Copywriter.
Translate the substance of the book into an honest invitation.
Attract the right reader and repel the wrong one.
Avoid exaggeration or misrepresentation.
Phase 7 — Metadata and discoverability
Discoverability matters. Integrity matters more.
Agent: Metadata Optimizer
Human-in-the-loop: I approve categorization and keywords.
Purpose: Make the book findable without distortion.
Artifacts: BISAC categories, keyword clusters.
Gate: Discoverability with integrity.
Example invocation prompt (illustrative, not exact):
You are acting as a Metadata Optimizer.
Recommend categories and keywords aligned to genuine reader intent.
Identify mislabeling risks.
Optimize for discovery without distorting the book’s substance.
Phase 8 — Production and launch readiness
This is where the book becomes an artifact.
Agent: Production Advisor
Human-in-the-loop: I sign off on final proofs and publication.
Purpose: Ship a professional book.
Artifacts: Print-ready files, proof checklist, launch plan.
Gate: Ready for readers.
Example invocation prompt (illustrative, not exact):
You are acting as a Production Advisor.
Provide a checklist from manuscript to publish-ready artifact.
Highlight formatting, proofing, and common failure points.
Evidence vs. interpretation
I learned early in my career that good ideas become fragile when they are overextended. Patterns that work in one context are too easily treated as universal truth.
Writing with AI introduces the same temptation. Tools feel powerful. Output arrives quickly. Without discipline, evidence and interpretation blur.
This distinction matters.
Evidence‑supported foundations include cognitive load theory, staged writing processes, and the value of iterative feedback.
Interpretive application includes using AI agents as role‑specialized collaborators and enforcing human approval gates. These are defensible design choices, not settled science.
Implications for future research
While BrainSpark to Bookshelf is presented as a design-informed operating model rather than a tested intervention, it raises several research questions that are worth more serious investigation. These questions sit at the intersection of writing studies, cognitive psychology, human–AI collaboration, and professional practice.
First, comparative workflow studies are needed. Experimental or quasi-experimental research could compare staged, agentic writing workflows against single-pass or minimally structured AI-assisted drafting. Outcome measures might include coherence, argumentative quality, revision depth, author satisfaction, and reader comprehension. This would help distinguish whether the benefits observed here stem from AI involvement specifically or from the reintroduction of disciplined process into self-publishing.
Second, cognitive load and mode separation deserve focused study in AI-mediated contexts. While cognitive load theory supports separating planning, drafting, and revision in human-only writing, little research has examined how AI tools alter or amplify cognitive load. Studies could explore whether role-specialized agents reduce mental overload or simply redistribute it, and under what conditions authors experience genuine cognitive relief versus new forms of complexity.
Third, there is an open question around skill transfer and author development. Longitudinal research could examine whether repeated use of agentic workflows improves an author’s independent capabilities over time or whether skills atrophy as reliance on AI increases. This distinction matters for education, professional writing, and ethical guidance around AI adoption.
Fourth, genre and domain specificity should be examined explicitly. Non-fiction, memoir, academic writing, and narrative-driven leadership books may respond differently to agentic workflows. Research could identify which phases are most critical by genre and where alternative structures are required, rather than assuming a single universal pipeline.
Finally, governance, authorship, and accountability represent an underexplored research frontier. As AI becomes more integrated into knowledge work, clearer models are needed for assigning responsibility, tracing decision authority, and maintaining intellectual integrity. The explicit Human-in-the-Loop gates described in this model could serve as a starting framework for studying accountability in AI-assisted creative production.
Together, these research directions would help move the conversation beyond tool enthusiasm or resistance and toward evidence-informed design principles for responsible human–AI collaboration.
Limitations and cautions for practitioners
This model is powerful, but it is not neutral. Used poorly, it can create as many problems as it solves. The cautions below are drawn from practice, not theory.
1) Don’t skip the human work.
If you outsource judgment, taste, or conviction to agents, the system will still produce words—but the work will hollow out. Human-in-the-Loop gates are not optional. They exist to protect authorship, not slow it down.
2) Avoid premature drafting.
AI makes it easy to draft early and revise endlessly. That is a trap. If concept, positioning, and structure are weak, no amount of drafting will rescue the manuscript. Respect the sequence.
3) Tune agents by domain.
A single prompt set across family formation, business leadership, and academic work will flatten voice and intent. Maintain separate agent families and voice contracts by domain, or coherence will erode across your body of work.
4) Watch for confidence inflation.
AI can produce fluent prose that sounds more certain than the evidence supports. Treat confidence as a signal to verify, not a reason to trust. Explicit evidence checks are essential.
5) Guard against overproduction.
This system can dramatically increase throughput. That is not automatically good. Publishing carries responsibility. Not every idea deserves a book, and not every draft should ship.
6) Expect maintenance, not permanence.
AI systems change. Prompts drift. What works today may degrade tomorrow. Revisit agent definitions, gates, and expectations regularly.
Practitioners who approach this model with restraint, humility, and discipline will find it liberating. Those who treat it as a shortcut will discover that speed amplifies weaknesses as efficiently as it amplifies strengths.
Closing: why this matters beyond publishing
This model is not about producing more books faster.
It is about taking responsibility for what you put into the world.
Across industries, I have learned that systems reveal values. When speed is primary, quality erodes. When output is the goal, meaning thins. But when clarity, coherence, and stewardship lead, systems become amplifiers of wisdom rather than noise.
This workflow allowed me to integrate decades of experience into work that could stand on its own. It let me collaborate with AI without surrendering authorship. It made it possible to serve readers I may never meet—but who still deserve care and rigor.
The point is not to adopt this exact model. The point is to build one that respects your voice, protects your integrity, and acknowledges the limits of your time and attention.
Used well, AI does not replace formation. It reveals it.
An invitation to adapt, not adopt: If this model is useful, don’t copy it wholesale. Design your own responsible variant. Name your values. Define your gates. Decide where human authority must remain non‑negotiable. Let the structure serve your voice, your context, and the people you are trying to serve—not the other way around.
References
Ericsson, K. A., Krampe, R. T., & Tesch‑Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.
Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365–387.
Hayes, J. R. (2012). Modeling and remodeling writing. Written Communication, 29(3), 369–388.
Kellogg, R. T. (1994). The psychology of writing. Oxford University Press.
Kellogg, R. T. (2008). Training writing skills: A cognitive developmental perspective. Journal of Writing Research, 1(1), 1–26.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer.