
Designing for Intelligence: Turning GenAI’s Bugs into Features

  • Writer: Jeff Hulett
  • Jun 15
  • 5 min read

A Power User’s Guide to Transforming Limitations into Lasting Value


Jeff Hulett is a behavioral economist and veteran banker with decades of experience leading teams using sophisticated modeling techniques—from linear regression and decision trees to neural networks powered by machine learning. As the founder of Personal Finance Reimagined (PFR), he combines financial expertise, decision science, and technology to help individuals and institutions make better long-term choices in a world driven by uncertainty and AI-enabled opportunity. Jeff also leads the research platform The Curiosity Vine (TCV), where he focuses on using GenAI and behavioral frameworks to empower long-term, values-aligned decision-making across finance, education, and entrepreneurship.


Understanding the Problem: Catastrophic Forgetting


In the accelerating world of generative artificial intelligence (GenAI), most users marvel at what these systems can do. But few understand what they cannot do. One of the most misunderstood limitations is something called catastrophic forgetting (CF)—a term that sounds dramatic because it is.


CF is a known issue in neural networks where new information overwrites previously learned knowledge, especially when the system is trained sequentially. Unlike humans, who possess robust long-term memory systems, GenAI models are limited in how they retain and apply learned context—unless that context is reintroduced manually or systematically.

In systems like ChatGPT, which aren’t updating in real time, CF doesn’t show up as data corruption. Instead, it surfaces through:

  • Short-term memory decay within long sessions

  • Loss of prior context across separate interactions

  • Inconsistent tone, logic, or frameworks when foundational knowledge isn’t referenced


In short, GenAI forgets unless you design around its constraints.


Why CF Matters for Serious Users


For casual users—generating ideas, summarizing content, or exploring curiosity—CF is not a major barrier. But for professionals, educators, entrepreneurs, and content creators who use GenAI for cumulative intellectual work, catastrophic forgetting can be the silent killer of consistency and coherence.


Imagine having to retrain a new assistant every time you collaborate. That is what relying on GenAI without strategic structure feels like. The good news: if you build smartly, you can turn this bug into a feature—forcing greater clarity, intentionality, and intellectual discipline into your workflows.


How I Overcome Catastrophic Forgetting: Core Practices


1. Create External Memory Anchors


Since GenAI can’t remember across sessions unless designed to do so, you must externalize memory. For me, two digital platforms—Personal Finance Reimagined (PFR) and The Curiosity Vine (TCV)—serve as my primary memory anchors. These platforms not only host my published work but act as permanent knowledge bases that I can repeatedly reference, build upon, and evolve.


To operationalize this:

  • Maintain a searchable AI journal or prompt log in Notion or Google Docs

  • Use published articles from PFR and TCV as canonical references in prompts, e.g., “Use the logic from my article ‘The Hidden Wealth of Time’ on PFR as the foundation for this next piece.”

  • Capture and tag reusable frameworks or metaphors, e.g., “Decision FIRST,” “dopamine vs. acetylcholine,” “Primacy of Use test”

These platforms aren’t just for publishing—they are my long-term GenAI memory layer. Every article becomes a modular block in a living system of insight.
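An external memory anchor can be as simple as an append-only, searchable prompt log kept outside the model. The sketch below shows one minimal way to do this with a JSON Lines file; the file name, helper names, and tags are illustrative, not part of any particular tool.

```python
# A minimal sketch of an external "memory anchor": an append-only,
# tag-searchable prompt log kept outside the model.
import json
from pathlib import Path

LOG_PATH = Path("prompt_log.jsonl")  # hypothetical default location

def log_prompt(prompt: str, tags: list[str], path: Path = LOG_PATH) -> None:
    """Append one prompt, with its framework tags, as a JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"prompt": prompt, "tags": tags}) + "\n")

def search_by_tag(tag: str, path: Path = LOG_PATH) -> list[dict]:
    """Return every logged entry carrying the given tag."""
    if not path.exists():
        return []
    with path.open(encoding="utf-8") as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if tag in e["tags"]]
```

Because each entry is one self-describing line, the log stays greppable and can be pasted back into a session to re-anchor context.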

2. Standardize Your Voice with a Rules Inventory (RI)


I developed a Rules Inventory—a living prompt that governs:

  • Tone, style, and formatting

  • Citation standards

  • Paragraph structure

  • Professional and/or academic framing, e.g., decision science, behavioral psychology, behavioral economics, etc.

By storing the RI in memory and referencing it regularly, I’ve created a consistent authorial voice across hundreds of articles. It functions like a style guide for my AI collaborator.
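In practice, a Rules Inventory behaves like a fixed preamble prepended to every task, so each session restarts from the same stylistic baseline. The sketch below shows that composition pattern; the rule text is a hypothetical placeholder, not the author's actual inventory.

```python
# A minimal sketch of a Rules Inventory (RI) as a reusable preamble.
# The specific rules shown here are illustrative stand-ins.
RULES_INVENTORY = """\
Voice rules:
1. Write in an accessible behavioral-economics register.
2. Cite sources in author-date style.
3. Keep paragraphs to three to five sentences.
"""

def with_rules(task: str, rules: str = RULES_INVENTORY) -> str:
    """Compose a full prompt: rules first, then the task."""
    return f"{rules}\nTask: {task}"
```

Keeping the rules in one place means a style change propagates to every future prompt automatically, which is exactly what a style guide is for.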


3. Use Prompt Chaining and Backward Linking


Most people prompt forward: “Summarize this topic.” But high-impact use comes from prompting backward:

  • “Recall our framework on dopamine and motivation—how does it apply to this piece on opportunity inequality?”

  • “Which arguments from my zoning reform article should be reused or updated here?”

This nudges the model into behaving like a recursive thinker—not just a text generator.
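Backward linking can be mechanized as a prompt builder that restates prior frameworks before posing the new question, assuming you keep a short summary of each framework on hand. The function name and summaries below are illustrative.

```python
def backward_link(question: str, frameworks: dict[str, str]) -> str:
    """Build a backward-looking prompt: recall named frameworks, then ask."""
    recalled = "\n".join(f"- {name}: {summary}"
                         for name, summary in frameworks.items())
    return f"Recall these frameworks:\n{recalled}\n\nNow: {question}"

# Example: recall a prior framework before the new question.
prompt = backward_link(
    "How does this apply to opportunity inequality?",
    {"dopamine and motivation": "short-term reward signals vs. sustained focus"},
)
```

The model never truly "recalls" anything; this simply re-supplies the context it would otherwise have lost.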


4. Turn Sessions into Projects


Structure beats spontaneity. Treat each session as part of a larger body of work:

  • Title sessions clearly: “College ROI – Draft 3”

  • Begin with a re-anchor: “We’re continuing from yesterday’s zoning framework conversation.”

  • End with a note to build from next time

You’re not just writing. You’re engineering continuity.

Advanced Strategies to Cement Long-Term Value


5. Use Repetition with Variation (Echo Loops)


In human learning, spaced repetition and rephrasing reinforce memory. You can simulate this with GenAI:

  • Reuse frameworks in different formats: article → tweet thread → one-slide visual

  • Ask for a restatement of core ideas to strengthen their recall

Repetition with variation mimics how we consolidate long-term knowledge—now applied to your GenAI workflow.
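One way to run an echo loop systematically is to apply a small set of format templates to the same core idea. The format names and wording below are illustrative examples of the article → tweet thread → one-slide progression described above.

```python
# A sketch of an "echo loop": one idea, several restatement formats.
# Format names and template wording are illustrative.
FORMATS = {
    "tweet": "Restate this idea in under 280 characters: {idea}",
    "slide": "Turn this idea into a one-slide visual outline: {idea}",
    "recap": "Restate the core of this idea in two sentences: {idea}",
}

def echo_loop(idea: str, formats: dict[str, str] = FORMATS) -> dict[str, str]:
    """Return one restatement prompt per format for the same idea."""
    return {name: tmpl.format(idea=idea) for name, tmpl in formats.items()}
```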

6. Leverage Intelligent Redundancy with Framework Tagging


Frameworks like my “Investment Barbell Strategy” or “Decision FIRST” aren’t just tools—they’re semantic anchors:

  • Use explicit tags in prompts: “Apply the Decision FIRST framework to this job-change scenario.”

  • Create a glossary of your frameworks to reinforce naming conventions

What gets named gets remembered.
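A framework glossary can also be applied programmatically: before a prompt is sent, expand any named framework into its definition so the model never has to guess what a tag means. The definitions below are stand-ins, not the author's actual glossary entries.

```python
# Hypothetical glossary mapping framework names to one-line definitions.
GLOSSARY = {
    "Decision FIRST": "a structured process that frames the decision before solving it",
    "Investment Barbell Strategy": "pairing very safe assets with high-growth bets",
}

def expand_tags(prompt: str, glossary: dict[str, str] = GLOSSARY) -> str:
    """Prepend definitions for every glossary term the prompt mentions."""
    used = {term: defn for term, defn in glossary.items() if term in prompt}
    if not used:
        return prompt
    defs = "\n".join(f"{term}: {defn}" for term, defn in used.items())
    return f"Definitions:\n{defs}\n\n{prompt}"
```

Simple substring matching is enough here because framework names are distinctive by design; that is part of why naming matters.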

7. Embed Human Judgment Loops


Because GenAI is probabilistic, not deterministic, human oversight must drive final outcomes:

  • Schedule a review cycle: “Are we still aligned with the Rules Inventory?”

  • Use a premortem approach: “If this article misses the mark, why would that be?”

  • Turn feedback into calibration prompts

The best users aren’t passive. They coach the model as they go.

8. Use Multi-Session Templates for Recurring Projects


Much of my work follows repeated structures. I’ve built multi-step prompt templates for:

  • Articles

  • Interviews

  • Teaching modules


Example 4-Step Template:

  1. Ideation and angle selection

  2. Structural outline and logical flow

  3. Full draft with voice and references

  4. Review, refinement, and derivative assets


Systematize creativity. That’s how CF becomes irrelevant.
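A multi-session template like the 4-step example above can be expressed as an ordered list of stage prompts filled in with the project topic. The stage wording here is a sketch, not the author's exact template.

```python
# A sketch of the 4-step multi-session template. Stage prompts are illustrative.
TEMPLATE = [
    ("ideation", "Propose three angles for a piece on {topic}."),
    ("outline", "Build a structural outline with logical flow for {topic}."),
    ("draft", "Write a full draft on {topic} in the established voice, with references."),
    ("review", "Review the {topic} draft, refine it, and list derivative assets."),
]

def render_template(topic: str, template=TEMPLATE) -> list[tuple[str, str]]:
    """Fill the topic into each stage prompt, preserving stage order."""
    return [(stage, prompt.format(topic=topic)) for stage, prompt in template]
```

Each stage's output becomes the re-anchoring context for the next session, which is what makes the structure resistant to forgetting.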

Final Thought: Partnering with Forgetfulness


Catastrophic forgetting isn’t a flaw to resent—it’s a design constraint that sharpens the serious user. If you apply structure, develop reusable assets, and take ownership of memory and voice, GenAI becomes more than a tool—it becomes a co-creator that elevates your productivity and expands your cognitive edge.


In a world of data abundance and short-term memory, your ability to design for intelligence is what separates noise from value.


Resources for the Curious


For readers who want to explore the deeper mechanics of memory, intelligent system design, and how to build a lasting partnership with GenAI, the following resources offer foundational insights from behavioral economics, AI research, and decision science:


  1. Hulett, Jeff. The Hidden Wealth of Time: Turning Challenges into Opportunity. Personal Finance Reimagined, 2025.

    Introduces a growth-based decision model anchored in time investment, causal emergence, and comparative advantage—key for understanding adaptive learning frameworks.

  2. Bennett, Max. A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. Mariner Books, 2023.

    Traces the evolution of human intelligence and draws compelling parallels to artificial systems. Bennett gives particular focus to catastrophic forgetting as a major unsolved problem in GenAI—and why solving it may be the next evolutionary leap in machine cognition.

  3. Kirkpatrick, James, et al. “Overcoming catastrophic forgetting in neural networks.” Proceedings of the National Academy of Sciences, 114(13), 2017, pp. 3521–3526.

    Introduces Elastic Weight Consolidation (EWC), a seminal approach to solving catastrophic forgetting in machine learning.

  4. Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.

    Explores dual-system reasoning—intuitive vs. deliberative thinking—and its application in AI alignment and human prompt design.

  5. Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books, 2019.

    Critiques neural network limitations and proposes the integration of symbolic reasoning to build trustworthy, memory-aware systems.

  6. Brown, Tom B., et al. “Language models are few-shot learners.” Advances in Neural Information Processing Systems, 33, 2020.

    Introduced GPT-3, illustrating both the potential and contextual fragility of large language models.

  7. Bengio, Yoshua. “The Consciousness Prior.” arXiv preprint arXiv:1709.08568, 2017.

    Proposes an architecture that mimics human focus to reduce distraction and forgetting in artificial systems.

  8. Yudkowsky, Eliezer. Rationality: From AI to Zombies. Machine Intelligence Research Institute, 2015.

    A foundational text on rational thinking, bias mitigation, and the future of AI-human interaction.

  9. Anderson, John R. Cognitive Psychology and Its Implications. Worth Publishers, 2014.

    Explains how human memory, attention, and information retrieval function—crucial for designing AI analogs and decision scaffolding.

  10. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

    The definitive academic reference on the structure and training of neural networks, including challenges like overfitting and memory loss.

