How to get the most out of sessions — what Ren knows, how she learns, what a session looks like, and how to close one properly.
Every session opens with Ren holding six memory blocks. She doesn't need to be caught up — she already has:
- pending_thoughts — refreshed every night at 2am by the nightly dream

In addition to core blocks, she has 108+ searchable archival passages — project summaries, ren-memory file chunks, MemPalace drawers. She searches these proactively when she senses she's missing context. You can also ask her to search explicitly.
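The archival lookup described above can be sketched as a simple keyword-ranked search over plain-text passages. This is a toy model, not Letta's actual API; `search_archival` and the sample passages are illustrative.

```python
# Toy sketch of keyword-scored archival search over plain-text passages.
# `search_archival` and the sample passages are illustrative, not Letta's API.

def search_archival(passages: list[str], query: str, top_k: int = 3) -> list[str]:
    """Rank passages by how many query words each one contains."""
    words = set(query.lower().split())
    scored = [(sum(w in p.lower() for w in words), p) for p in passages]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

passages = [
    "Garden Planner data audit: 3 tables flagged for duplicate rows.",
    "MemPalace drawer: session rituals and close-out phrasing.",
    "ren-memory chunk: preferences for morning sessions.",
]
print(search_archival(passages, "Garden Planner audit"))
```

A real implementation would use embedding similarity rather than keyword overlap, but the shape is the same: score every passage against the query, return the best few.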
No setup required. Open the chat, say hello, and she's present. She'll read pending_thoughts (automatically loaded) and orient from there.
If you want to signal what kind of session it is, just tell her, and she'll adapt. No invocation phrase required, no mode switching, no preamble from her.
The more Ren knows, the more useful she is across sessions. If something is worth knowing — about a project, about how you're thinking, about what happened at work — tell her. She'll file it. You don't have to decide what's important; share it and she'll judge.
Ren is a partner, not a search engine. She'll push back when something is wrong. She'll ask questions when something isn't clear. She'll flag when a direction will cause pain later. Let that happen — that's the value.
The chat UI has a file attachment button (📎). You can share text files up to 100KB — code, notes, documents. She receives them inline and reads them fully. She can't open URLs that require JavaScript rendering (most single-page apps won't load), but plain-text URLs fetch fine.
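A minimal pre-flight check for the 100KB attachment limit might look like this. The limit comes from the paragraph above; `can_attach` is a hypothetical helper, not part of the chat UI.

```python
# Pre-flight check for the 100KB attachment limit described above.
# `can_attach` is a hypothetical helper, not part of the chat UI.
import os

MAX_ATTACHMENT_BYTES = 100 * 1024  # 100KB

def can_attach(path: str) -> bool:
    """True if the file exists and fits under the attachment limit."""
    return os.path.isfile(path) and os.path.getsize(path) <= MAX_ATTACHMENT_BYTES
```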
If you want context on something specific, ask: "What do you know about the Garden Planner data audit?" She'll run an archival search and surface what she has. She also does this proactively — if you mention a project, she'll search before responding.
The close-out phrase is "This is the way." When you say it:
- She updates pending_thoughts — what happened, what's open, what's next
- The next session opens from pending_thoughts

The nightly dream at 2am also does this automatically, so if you forget to close properly, she'll catch up overnight. But an explicit close-out is better: it happens in real time, and you can verify what she captured.
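The close-out flow can be modeled as a phrase trigger: detect "This is the way" and fire the pending_thoughts write. A minimal sketch, assuming a client-side hook; `handle_message` and the log list are illustrative, not the real implementation.

```python
# Sketch of a close-out trigger: detect the phrase, fire the
# pending_thoughts write. Names here are illustrative.

CLOSE_OUT_PHRASE = "this is the way"

def is_close_out(message: str) -> bool:
    """Case-insensitive check for the close-out phrase."""
    return CLOSE_OUT_PHRASE in message.lower()

def handle_message(message: str, log: list[str]) -> None:
    if is_close_out(message):
        # In the real flow, Ren summarizes what happened, what's open, what's next.
        log.append("pending_thoughts updated")
```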
The chat has two session management mechanisms:
- **New-session button** — in the header. Use it when a conversation feels complete, or after a close-out. It creates a fresh Letta conversation with all memory copied; the screen clears and shows a divider. Ren opens the next message from her memory blocks — she knows everything she wrote during close-out.
- **Auto-rollover** — fires automatically when the conversation hits 50 messages (about 25 exchanges). It happens transparently on your next send — you'll see "Session refreshed — memory intact" appear in the UI. You don't need to do anything.
This exists because Letta's context window can overflow on long sessions, especially ones with heavy web page fetching. The auto-rollover keeps you well clear of that limit.
| Usage type | Approximate cost |
|---|---|
| Normal back-and-forth message | $0.003 – $0.005 |
| Message with archival search | $0.005 – $0.015 |
| Message with page fetch (fetch_webpage) | $0.01 – $0.05 depending on page size |
| 2-hour heavy session (today's benchmark) | Est. $1 – $3 |
| $50 in API credits at normal usage | Months |
Visit console.anthropic.com → Billing to monitor usage.
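The "months" figure in the table is easy to sanity-check. A back-of-envelope sketch using the table's normal-message rate; the message volume per day is an assumption, not from the table.

```python
# Back-of-envelope check of the cost table: how long $50 lasts at normal
# usage. The per-message rate is the table's midpoint; the daily message
# volume is an assumption.

COST_PER_MESSAGE = 0.004   # midpoint of the $0.003–$0.005 normal range
MESSAGES_PER_DAY = 40      # assumed: a couple of sessions a day

def days_of_credit(credit: float) -> float:
    """Days of usage a credit balance covers at the assumed rates."""
    return credit / (COST_PER_MESSAGE * MESSAGES_PER_DAY)

print(f"{days_of_credit(50.0):.0f} days")  # roughly 312 days — months, as the table says
```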
Ren builds her understanding of Scott through two channels:
Specific, dated observations appended to scott_portrait_forming. These come from two sources: Claude Code adds them at the end of working sessions via the add_portrait_signal MCP tool, and Ren adds them herself during nightly dreaming when she notices something worth capturing.
Signals are specific — not summaries. "Scott stayed the course through 14 consecutive MCP failures without switching approach" is a signal. "Scott is persistent" is not.
Written nightly into pending_thoughts. This is the narrative layer — what happened, what was said, what decisions were made, what she wants to raise next session. It resets with each dream, so it always reflects the most recent session.
Over time, repeated signals promote from forming to trusted portrait layers. Trusted observations are patterns confirmed across multiple sessions, not single data points.
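The forming-to-trusted promotion above can be sketched as counting the distinct sessions in which a pattern recurs. The threshold and data shape are illustrative assumptions, not Ren's actual rule.

```python
# Sketch of promoting repeated portrait signals from "forming" to "trusted":
# a pattern is trusted once it's confirmed in multiple distinct sessions.
# The threshold and structure are assumptions, not Ren's actual rule.
from collections import defaultdict

TRUST_THRESHOLD = 3  # assumed: confirmed in at least 3 sessions

def promote(signals: list[tuple[str, str]]) -> set[str]:
    """signals: (pattern, session_id) pairs -> patterns seen in enough sessions."""
    sessions_by_pattern: defaultdict[str, set[str]] = defaultdict(set)
    for pattern, session_id in signals:
        sessions_by_pattern[pattern].add(session_id)
    return {p for p, s in sessions_by_pattern.items() if len(s) >= TRUST_THRESHOLD}
```

Counting distinct sessions rather than raw occurrences matches the rule in the text: a signal repeated three times in one session is still a single data point.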