Show HN: Fixing LLM memory degradation in long coding sessions
Long-session LLM memory degradation (entropy) is the silent killer of complex coding projects. Models like Gemini, GPT-4, and Claude all suffer from it, leading to hallucinations and lost context.
I've developed an open-source protocol that mitigates the problem by structuring the dialogue. It's not the final architectural solution, but it's a workable patch for developers who need something today.
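To give a concrete flavor of what "structuring the dialogue" can mean, here's a rough Python sketch of the general idea: keep a compact, structured state block and re-inject it on every turn, instead of trusting the model to recover intent from a long, noisy transcript. The names (SessionState, render_prompt) and details are illustrative only, not the repo's actual protocol:

    from dataclasses import dataclass, field

    # Illustrative sketch only: carry a small, structured "session state"
    # block with every request so long-session drift can't silently
    # overwrite the ground truth. Names here are hypothetical.

    @dataclass
    class SessionState:
        goal: str                                             # invariant objective
        decisions: list[str] = field(default_factory=list)    # locked-in choices
        open_items: list[str] = field(default_factory=list)   # still unresolved

        def render(self) -> str:
            lines = [f"GOAL: {self.goal}", "DECISIONS:"]
            lines += [f"- {d}" for d in self.decisions]
            lines += ["OPEN:"] + [f"- {o}" for o in self.open_items]
            return "\n".join(lines)

    def render_prompt(state: SessionState, user_msg: str) -> str:
        # Prepend the structured state to each new turn.
        return f"[SESSION STATE]\n{state.render()}\n[/SESSION STATE]\n\n{user_msg}"

    state = SessionState(
        goal="Port the billing service from Flask to FastAPI",
        decisions=["Keep the existing Postgres schema"],
        open_items=["Pick an auth middleware"],
    )
    print(render_prompt(state, "Next, migrate the /invoices endpoint."))

The point of the pattern is that the session's ground truth lives in a small block the model sees fresh every turn, rather than scattered across hundreds of earlier messages.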
Looking for feedback from the community on how we can solve this structurally. https://github.com/robertomisuraca-blip/LLM-Entropy-Fix-Prot...