As someone who uses AI for everything, I decided to take it a step further and asked ChatGPT to do my performance appraisal. It was surprisingly good. For example, it knew my coding style (my preference for promises over async/await) and recommended documenting it for onboarding new employees. No human manager would’ve been this precise. In a few years, AI may understand all our preferences and evolution better than we do. ChatGPT’s memory feature lets it learn from prior conversations, which enables this kind of personal modeling. And that led me to wonder: what if ChatGPT could share just this performance summary with other tools (like peer review systems or mentoring platforms) without exposing everything else it knows about me? It turns out that’s exactly what OpenAI is building towards. Sam Altman explained in his talk yesterday:

“Young people don’t really make life decisions without asking ChatGPT... it has the full context on every person in their life... the memory thing has been a real change.”

He also described a future where the consumer product (ChatGPT) and developer product (API) might work seamlessly together:
“You should be able to sign in with ChatGPT to other services. Other services should have an incredible SDK to take over the ChatGPT UI... You’ll want to be able to use that in a lot of places.”




Scoped AI Memory Access

Just like you grant apps access to your email or calendar today, in the future you might grant apps access to specific parts of your AI memory.
  • scope:code.read — might allow access to your coding history for developer tools.
  • scope:workstyle.summary — might share a personal performance snapshot for review systems.
  • scope:shopping.preferences — might let e-commerce platforms tailor recommendations to your taste.
Think of it as OAuth scopes for your AI memory. Rather than exposing your entire AI history, you'd authorize scoped access: limited, precise, and purpose-driven. The backend would involve a secure memory store, fine-grained permissions, and a gateway that returns structured insights, not raw chat transcripts. This kind of scoped memory access could become a moat for platforms like OpenAI: the more memory it accumulates, and the more precise its scope controls, the harder it becomes to replicate or replace. It also enables a new level of privacy-first data sharing at scale.
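To make that concrete, here is a minimal sketch of what a scoped memory gateway call might look like from a client app. Everything in it is an assumption for illustration: the scope names, the gateway URL, and the response shape are hypothetical, not a real OpenAI API.

```typescript
// Hypothetical types for a scoped memory gateway. Nothing below is a real
// OpenAI API: the scope names, endpoint, and response shape are illustrative.

type MemoryScope = "code.read" | "workstyle.summary" | "shopping.preferences";

interface MemoryGrant {
  accessToken: string;    // short-lived token bound to the granted scopes
  scopes: MemoryScope[];
}

// A structured insight returned by the gateway: a derived summary,
// never a raw chat transcript.
interface WorkstyleSummary {
  strengths: string[];
  preferences: string[];  // e.g. "prefers promises over async/await"
  generatedAt: string;    // when the summary was derived
}

// The client checks that its grant covers the scope it needs, then asks the
// gateway for the derived insight. Out-of-scope requests fail fast.
async function fetchWorkstyleSummary(grant: MemoryGrant): Promise<WorkstyleSummary> {
  if (!grant.scopes.includes("workstyle.summary")) {
    throw new Error("Grant does not include scope:workstyle.summary");
  }
  // Placeholder gateway URL; stands in for whatever endpoint the provider exposes.
  const res = await fetch("https://memory.example.com/v1/insights/workstyle", {
    headers: { Authorization: `Bearer ${grant.accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Memory gateway returned ${res.status}`);
  }
  return (await res.json()) as WorkstyleSummary;
}
```

The key design choice is that the gateway returns derived summaries rather than transcripts: the app gets exactly the insight it was authorized for, and nothing else.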


What This Means for Developers

Today, platforms like Google and Facebook let you log in and share profile info (your name, email, calendar) through OAuth. Tomorrow, OpenAI might do the same with your cognitive context: skills, habits, preferences, even behavioral insights.
  1. Your app becomes a node in the user's AI graph: skip onboarding flows and tap directly into scoped AI memory (see the sketch after this list).
  2. Fine-grained access that users can trust: users will expect precise access, with apps reading only what they need.
  3. Standardization will open doors: If OpenAI defines an OAuth-like memory scope system, it creates a platform for trusted plugins, apps, and agents.
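As a sketch of point 1, here is what a "sign in with ChatGPT"-style authorization request might look like if it followed the standard OAuth 2.0 authorization-code flow. The endpoint, client ID, and scope string are assumptions, not a published API.

```typescript
// Hypothetical "sign in and share memory" authorization request, modeled on a
// standard OAuth 2.0 authorization-code flow. The endpoint and scope name are
// placeholders, not a published OpenAI API.

const AUTH_ENDPOINT = "https://auth.ai-provider.example/oauth/authorize";

function buildMemoryAuthUrl(clientId: string, redirectUri: string): string {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: clientId,
    redirect_uri: redirectUri,
    // Request only what the app needs: a workstyle snapshot, nothing else.
    scope: "workstyle.summary",
    state: crypto.randomUUID(), // CSRF protection, as in any OAuth flow
  });
  return `${AUTH_ENDPOINT}?${params.toString()}`;
}

// A review tool would send the user here instead of running its own onboarding
// questionnaire, then exchange the returned code for a token limited to that scope.
console.log(buildMemoryAuthUrl("review-app-123", "https://reviews.example.com/callback"));
```

From the user's side this looks like any other consent screen, except the thing being shared is a scoped slice of AI memory rather than a static profile.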
Maybe deep workflow tools like Cursor will have enough insight to stand on their own. But most developers won’t compete to build the user graph. They’ll compete to build the best experience layered on top of the AI that already knows the user.
In the past, OAuth APIs let apps access your static profile information, such as your name, email, and contacts. Chatbots may soon let apps access your evolving digital self: your knowledge, skills, memories, and preferences. That’s not just a better OAuth. It might be the new way users share their preferences with other applications and AI tools.