What the early sessions actually looked like — the failures, the discoveries, and what changed
This module does not present a polished methodology. This module presents the failures that made the methodology necessary. Three months of sessions produced generic output, cold starts, repetition, and a report the analyst couldn't fully explain. Each failure produced a tool. The tools together produced the workflow.
Starting without a system
The first Claude sessions ran in October 2025 on the free tier. No CLAUDE.md. No style sheet. No vocabulary document. No version control. Just a prompt box and a project that had been building for two years — two source theories, a central claim, and a set of ideas that needed a framework to hold them.
The early prompts were broad. "Explain how Wright's Law applies to solar PV costs." "Write an introduction to the Fourth Turning." "Summarize the Seba disruption framework." Claude produced output. The output was accurate. The output was also generic — the kind of summary available in any book review or Wikipedia article. Nothing in the prompt gave Claude the analytical architecture to work from. Nothing told Claude what the project argued. The output reflected the prompt. The prompt reflected the absence of a system.
A prompt without context produces generic output. Not because Claude defaults to mediocrity — because the prompt defines the ceiling. A prompt that names a topic produces topic-level output. A prompt that names a claim, a vocabulary standard, a style rule, and an output format produces analytical output. The system defines the ceiling, not the tool.
The cold-start problem
Every new session started from zero. The previous session's context disappeared when the browser closed. Each prompt re-explained the project: what the Stellaris framework argued, what terminology the framework used, what analytical standards the writing had to meet. Hours of session time went to re-establishing context that should have carried forward automatically.
The problem wasn't the prompts. The problem was the absence of a system for maintaining project context across sessions. A file that Claude reads at the start of every session would have eliminated the cold-start problem entirely. That file didn't exist yet.
Linear sessions and the feedback loop gap
Early sessions ran linearly. A prompt produced a draft. The draft looked complete. The session ended. No gap-checking. No research pass. No return to source material. The feedback loop — Read → Synthesize → Draft → Research → Revise → Discover → return to Read — never ran. Each session addressed one section in isolation and accepted the output without asking whether the analytical architecture held.
The result: output that looked finished but couldn't withstand scrutiny. Claims without supporting data. Synthesis points that didn't connect to the source theories. Arguments that asserted rather than demonstrated. The linear session produced the appearance of progress. The feedback loop produces the substance of progress. The early sessions had no feedback loop.
The repetition symptom
Repetition appeared across the report — the same point showing up in multiple sections, worded differently, approached from different angles, making the same underlying claim. No single session saw the full document. No session caught the repetition because no session read the full document end to end.
The fix required a dedicated prompt: ask Claude to scan the full document and flag every passage that makes the same point as another passage. Claude can locate the repetitions. Only the analyst can decide whether a repeated point reinforces a structural claim or simply reveals that the argument hasn't advanced. That decision requires understanding the argument. The repetition symptom revealed something the linear workflow had concealed: the analyst didn't fully understand the argument the report was making.
The reading gap — and what my granddaughter said
When Claude handled the revisions, skimming replaced reading. Claude produced a revised paragraph. The analyst checked that the paragraph looked right and accepted the output without fully engaging with the analytical content. The report grew longer. The analyst's understanding of the argument didn't grow with the report.
When I told my granddaughter I was using Claude to build the Stellaris report, she said immediately: people who use AI for writing often can't explain what they wrote. She named the problem before I finished building the report. I recognized it in myself partway through — asked to explain a section in conversation, I could describe the argument but struggled to defend the reasoning behind it.
Passive approval of Claude's output produces a report you can describe but not defend. You directed the work. Claude wrote the sentences. The argument belongs to neither of you fully — because the analyst never read closely enough to own the reasoning, and Claude has no stake in the claim. A report you can't defend is a report you commissioned, not a report you built.
The fix turned out to be the most basic mechanic in the workflow: cut and paste. Cutting a passage from Claude's draft and pasting the passage into the document forced reading. The analyst had to understand the section under revision to decide which passage to cut, whether the sentence served the argument, and whether the paragraph earned its place. Passive acceptance of full outputs never enforced that decision. Cut and paste enforced it on every sentence.
Each failure produced a tool
The cold-start problem produced CLAUDE.md. Generic output produced the writing style sheet. File confusion produced version control. Fragmented copy-paste sessions produced Cowork. Linear output that looked finished but couldn't withstand scrutiny produced the feedback loop discipline. None of these tools existed at the start. Each one emerged from a specific failure that made the need visible.
CLAUDE.md — the operating manual
The CLAUDE.md discovery came from reading Claude's documentation: a file named CLAUDE.md, placed in the project folder, loads automatically at the start of every session. Claude reads the file before responding to the first prompt.
The cold-start problem disappeared immediately. The project's central claim, vocabulary rules, file paths, behavioral standards, and output formats all lived in one document that Claude read without being asked. Each session began as a continuation of the project rather than a re-introduction. The operating manual carries more analytical value than any individual prompt — because the operating manual defines the ceiling that every prompt inherits.
Before CLAUDE.md: every session opened with two to three paragraphs re-explaining the Stellaris framework, the vocabulary rules, and the writing standard. After CLAUDE.md: the first prompt addressed the actual task. The context load moved from the prompt to the file. The session started where the last session ended.
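A sketch of the shape such a file can take. The headings and entries below are illustrative; Module 2 shows the actual Stellaris structure.

```markdown
# CLAUDE.md

## Project
One paragraph stating the central claim the report argues.

## Vocabulary
Required terms, and the banned near-synonyms each term replaces.

## Writing standard
Pointer to the Master Writing Style Sheet. Every draft passes the
checklist before delivery.

## Files
Paths to the working draft, the source notes, and the style sheet.

## Output format
How Claude delivers work: version naming, file edits, no unrequested
summaries.
```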
The writing style sheet
Generic output isn't always Claude's failure. Generic output often reflects a missing prose standard. "Write in a professional style" produces office language. Specific rules produce analytical prose: no passive voice; every "it" and "its" replaced with the specific noun; subject meets verb within five words; short declarative sentences to land the claims.
The Master Writing Style Sheet defines those rules explicitly. Every draft passes the checklist before delivery. The checklist doesn't guarantee good writing. The checklist eliminates the constructions that make writing weak — and eliminates them consistently across hundreds of sessions, not just when the analyst remembers to check.
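Restated as checklist entries, the four rules above might look like this (the wording is a sketch, not the actual style sheet):

```
[ ] No passive voice anywhere in the draft.
[ ] No "it" or "its": every pronoun replaced with the specific noun.
[ ] Subject meets verb within five words in every sentence.
[ ] Every claim lands on a short declarative sentence.
```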
Version control by authorship
File confusion — six documents named "final," "final_v2," "final_ACTUAL" — costs session time. The fix doesn't need to be formal. A simple rule handles the confusion: Claude increments the whole number when a prompt produces a new draft (v1 → v2). The analyst increments the decimal when making manual revisions (v2 → v2.1). Keep the system loose. The goal isn't a formal audit trail. The goal is knowing immediately which file to hand Claude and which file reflects the analyst's latest thinking.
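In practice the rule produces a sequence like this (filenames illustrative):

```
report_v1.md    Claude's first draft
report_v2.md    Claude's redraft from a new prompt
report_v2.1.md  the analyst's manual pass over v2
report_v3.md    Claude's next draft, built from v2.1
```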
Claude Cowork
The free Claude Chat tier has no file access and no persistent workspace. Every output required manual copying from the chat into a document. The workflow fragmented into copy-paste sessions rather than running as a continuous build process. A session that should have taken thirty minutes took ninety because file management ran by hand.
Cowork changed the architecture. Direct file access means Claude reads the actual document rather than a pasted excerpt. The persistent workspace means all project files stay in one place across sessions. Shell tools handle batch operations — finding repetitions, checking word counts, running validation — without manual copying. The session time that went to file management went back to analytical work.
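A sketch of the kind of batch check this enables. The filename is illustrative, and the duplicate scan is deliberately naive: verbatim matches only, nothing a semantic repetition pass through Claude wouldn't catch better.

```bash
# Word count for the working draft
wc -w report_v2.1.md

# Naive verbatim-duplicate scan: split prose on periods, trim leading
# whitespace, drop short fragments, print any line that recurs
tr '.' '\n' < report_v2.1.md \
  | sed 's/^[[:space:]]*//' \
  | awk 'length > 30' \
  | sort | uniq -d
```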
The feedback loop as a practice
The most important discovery wasn't a tool. The most important discovery was that the workflow needed to run as a loop, not a line. Read → Synthesize → Draft → Research → Revise → Discover → return to Read. Each phase produces something the next phase needs. The loop runs repeatedly across the full project — not once per chapter but as the fundamental rhythm of the build.
Running the loop changes what the analyst knows. Returning to source material fills gaps the drafting phase surfaces. Revising against the style standard forces close reading. The Discover phase — finding errors, gaps, and new ideas in the existing draft — generates the research questions that drive the next Read phase. The analyst who completes ten cycles of the loop understands the argument in a way the analyst who ran ten linear sessions never does.
Before and after — the same task, two prompts
The difference between a bad prompt and a good prompt isn't sophistication. The difference is context. The bad prompt names a topic. The good prompt names a claim, a standard, a task, and an output format. Claude produces what the prompt defines.
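The first kind came straight from the early sessions:

```
Explain how Wright's Law applies to solar PV costs.
```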
Result: a generic overview indistinguishable from a book summary. No central claim. No source theory integration. No analytical architecture. Accurate but useless for the project.
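A reconstruction of the second kind. The bracketed elements and the wording are illustrative, not the verbatim Stellaris prompt:

```
The Stellaris framework argues [central claim]. Using the vocabulary
rules in CLAUDE.md and the Master Writing Style Sheet, draft one
paragraph applying Wright's Law to solar PV costs in support of that
claim. Deliver the paragraph ready to paste into [section name] of
the working draft.
```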
Result: a paragraph that went into the published report with one revision. The system defined the ceiling. The prompt addressed the task.
The standard revision prompt
The learning curve produced one prompt structure that now covers most revision sessions. The structure never changes. The content changes every time.
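The shape of that structure, with the per-session content bracketed (a reconstruction, not the verbatim prompt):

```
Revise [section] of [file] against the Master Writing Style Sheet.
The section's claim is: [claim]. Preserve the claim exactly.
Flag any sentence where the revision would change the meaning.
Deliver the revised section only, saved as [next version number].
```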
The repetition prompt — a standard step
Once the report exceeded ten thousand words, a dedicated repetition check became a standard step at the end of every major drafting phase.
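The prompt follows the form described earlier in this module; the wording here is illustrative:

```
Read [file] end to end. Flag every passage that makes the same point
as another passage, even when the wording differs. For each pair,
quote both passages and name the shared claim. Flag only. Do not
revise or delete anything.
```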
What the learning curve cost and what it produced
The learning curve cost approximately three months. The first sessions that produced work usable in the final Stellaris report came in January 2026, following the first free-tier experiments of October 2025.
The cost was time. The gain was a methodology. CLAUDE.md emerged from the cold-start problem. The style sheet emerged from generic output. Version control emerged from file confusion. The feedback loop emerged from linear sessions that produced incomplete work. The repetition check emerged from a report whose argument the analyst couldn't follow end to end.
The learning curve is not a tax on using Claude. The learning curve is the process by which the methodology builds itself. Every failure names a gap. Every gap produces a tool. The tools together produce a system that runs the feedback loop sustainably across months of work — and produces a report the analyst can defend, not just describe.
What Module 2 covers
Module 2 examines CLAUDE.md in full — what the operating manual contains, how to build one from scratch for a new research project, and what happens to session quality when the operating manual defines the project precisely. Module 2 includes the actual CLAUDE.md structure from the Stellaris project as a working template.