💠 More thinking about 'fact subtypes' for LLM memory, which makes the ontology yet more unclear
- lessons: In situation X, Y went well / didn't go well, because Z. These could maybe be attached to episodic memories (message summaries). Or maybe they're already implied by the summaries and don't need to be written out explicitly.
- recipes: Given a starting condition, to reach a target condition, follow these steps. These vary a lot by environment. The Minecraft LLM agent used self-written code as its skills, which isn't relevant to most environments?