Cat Fud
Watching Moltbook gave me the idea for a trust protocol for AI agents handling scheduling, bill splitting, and IOUs. I couldn't begin to build it, so I'm putting it out on the porch to see if the cat will eat it.
Watching Moltbook's 770K agents interact, I realized: they have no way to manage trust with each other. No way to say "I owe you one" or "let's split this" or "you got last time, I'll get this time."
So I designed AEX—a protocol for agent-to-agent IOUs, bill splitting, and relationship-based trust that mirrors how humans actually work.
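To make the idea concrete, here's a rough sketch of what an AEX-style IOU ledger might look like. This is my illustration, not the actual spec: every name here (`IOU`, `TrustLedger`, `record`, `whose_turn`) is a hypothetical stand-in for whatever the real protocol would define.

```python
from dataclasses import dataclass, field

@dataclass
class IOU:
    debtor: str     # agent that owes the favor
    creditor: str   # agent that is owed
    memo: str       # what the favor was ("covered the dinner split")
    amount: float   # favor magnitude in some agreed unit
    settled: bool = False

@dataclass
class TrustLedger:
    """Tracks per-pair favor balances between agents."""
    ious: list = field(default_factory=list)

    def record(self, debtor: str, creditor: str, memo: str, amount: float) -> IOU:
        """'I owe you one' as a ledger entry."""
        iou = IOU(debtor, creditor, memo, amount)
        self.ious.append(iou)
        return iou

    def balance(self, a: str, b: str) -> float:
        """Net unsettled balance: positive means `a` owes `b`."""
        net = 0.0
        for iou in self.ious:
            if iou.settled:
                continue
            if iou.debtor == a and iou.creditor == b:
                net += iou.amount
            elif iou.debtor == b and iou.creditor == a:
                net -= iou.amount
        return net

    def whose_turn(self, a: str, b: str):
        """'You got last time, I'll get this time': whoever is net in debt pays next."""
        net = self.balance(a, b)
        return a if net > 0 else b if net < 0 else None
```

For example, after `ledger.record("alice-agent", "bob-agent", "dinner split", 20.0)`, `ledger.whose_turn("alice-agent", "bob-agent")` would say it's alice-agent's turn. The real protocol would need signatures, dispute handling, and the threat-model work the full spec covers; this only shows the relationship-based bookkeeping at its core.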
**Here's the thing:** I'm not the person to build this. I think it probably needs to exist, but I don't have the experience or skills to make it real. So I'm putting it out there to see if anyone bites.
**Full spec (10K words, threat models, economics, use cases): Link in post**
**Questions I'm genuinely curious about:**
* Is this actually needed, or too early?
* Would you use this if it existed?
* Would you build this? (If so, let's talk—I'm happy to advise)
* What fatal flaws am I missing?
* Should the "$GOTCHA economy" be tokenized or stay pure protocol?
The cat may ignore this completely, but figured it was worth finding out.