As someone who uses AI for everything, I decided to take it a step further and asked ChatGPT to do my performance appraisal. It was surprisingly good. For example, it knew my coding style (my preference for promises over async/await) and recommended documenting it for onboarding new employees. No human manager would’ve been this precise. In a few years, AI may understand our preferences and evolution better than we do ourselves. ChatGPT’s memory feature lets it learn from prior conversations, which is what enables this kind of personal modeling. And that led me to wonder: what if ChatGPT could share just this performance summary with other tools (like peer review systems or mentoring platforms) without exposing everything else it knows about me? It turns out that’s exactly what OpenAI is building towards. Sam Altman explained in his talk yesterday:
“Young people don’t really make life decisions without asking ChatGPT... it has the full context on every person in their life... the memory thing has been a real change.”
“You should be able to sign in with ChatGPT to other services. Other services should have an incredible SDK to take over the ChatGPT UI... You’ll want to be able to use that in a lot of places.”
Scoped AI Memory Access
- scope:code.read — might allow access to your coding history for developer tools.
- scope:workstyle.summary — might share a personal performance snapshot for review systems.
- scope:shopping.preferences — might let e-commerce platforms tailor recommendations to your taste.
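To make this concrete, here is a minimal TypeScript sketch of what requesting one of these scopes might look like from an app's side. Nothing here is a real OpenAI API: the endpoint, the MemoryGrant shape, and the requestWorkstyleSummary helper are all assumptions, just a way to picture an app asking for exactly one slice of memory and nothing else.

```typescript
// Hypothetical sketch: an app requesting a single, narrow slice of a user's AI memory.
// The endpoint URL, scope names, and response shape are assumptions, not a real API.

interface MemoryGrant {
  scope: string;    // e.g. "workstyle.summary"
  summary: string;  // the scoped snapshot the user agreed to share
  issuedAt: string; // ISO timestamp of when the grant was issued
}

async function requestWorkstyleSummary(userToken: string): Promise<MemoryGrant> {
  // The app asks for exactly one scope; the user would consent (or decline) in ChatGPT's UI.
  const res = await fetch("https://api.example.com/memory/grants", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${userToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ scopes: ["workstyle.summary"] }),
  });

  if (!res.ok) {
    throw new Error(`Memory grant request failed: ${res.status}`);
  }
  return res.json() as Promise<MemoryGrant>;
}
```

The key idea is the narrowness of the request: the review tool gets a performance snapshot, not my shopping history or my codebase.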
What This Means for Developers
- Your app becomes a node in the user's AI graph: Skip onboarding flows and just tap into scoped AI memory.
- Fine-grained access that users can trust: Users will expect apps to request only the scopes they actually need.
- Standardization will open doors: If OpenAI defines an OAuth-like memory scope system, it creates a platform for trusted plugins, apps, and agents (see the sketch below for what that flow could look like).
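Here is a rough TypeScript sketch of how a "Sign in with ChatGPT" request might carry memory scopes, modeled on a standard OAuth 2.0 authorization-code flow. The authorization endpoint, client ID, and scope names are illustrative assumptions; only the flow shape is standard OAuth.

```typescript
// Hypothetical "Sign in with ChatGPT" authorization request, modeled on OAuth 2.0.
// The endpoint, client ID, and scope names below are assumptions for illustration.

import { randomUUID } from "node:crypto";

const AUTHORIZE_URL = "https://chatgpt.example.com/oauth/authorize"; // assumed endpoint

function buildAuthUrl(clientId: string, redirectUri: string, scopes: string[]): string {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: scopes.join(" "), // e.g. "code.read workstyle.summary"
    state: randomUUID(),     // CSRF protection, as in any OAuth flow
  });
  return `${AUTHORIZE_URL}?${params.toString()}`;
}

// A peer-review tool would request only the performance snapshot, nothing more:
const url = buildAuthUrl("review-app", "https://review.example.com/callback", [
  "workstyle.summary",
]);
console.log(url);
```

If the scope system works this way, the consent screen becomes the product: users see exactly which slice of their AI memory an app is asking for, and apps that over-request will stand out.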