💬 Feedback wanted: Copilot Memory now on by default for Pro and Pro+ users in public preview #184415
Replies: 11 comments 22 replies
-
@ebndev This is a really great feature, and congratulations on the achievement. Could you please answer the questions below to clarify a few points and make this feature more useful?
-
I realize this is probably the opposite of why the memory feature exists :-) but it'd be neat if memories could be mined for improvements to Custom Agents/Skills/Instructions/Repository knowledge. It could even be used to find inconsistencies against instructions: a memory could be tagged as out of sync with how instructions dictate knowledge.

**Proposal:** Provide a path for validated memories to graduate to permanent instruction files.

**Why graduation matters**
The 28-day expiration works for evolving patterns, but some learnings should become permanent. Without graduation, valuable learnings expire. With graduation, the repository accumulates knowledge permanently.

**Why this matters for adoption**
Instructions are powerful but underutilized, and teams don't iterate on them. Memory → Instructions graduation changes this: every interaction becomes a feedback loop that improves repository context.

**The workflow we want**
Or at the end of any session:

**The gap**

**Minimal version**
Surface what Memory has learned and suggest which learnings should become instructions. We'll handle the actual file updates ourselves.

**References**
Related issues:

Our request extends Memory with: visibility, a graduation path to instructions, and a feedback loop for instruction improvement.
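As a thought experiment, the graduation flow could be prototyped with a small script today. This is a minimal sketch under heavy assumptions: it presumes memories could be exported as JSON records with `text`, `created_at`, and `times_reinforced` fields, none of which exist in the product, and the graduation threshold is purely illustrative.

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical threshold: a memory reinforced this many times is a
# candidate for "graduation" into a permanent instructions file.
GRADUATION_THRESHOLD = 3
EXPIRY = timedelta(days=28)  # the expiration window discussed above


def graduation_candidates(memories, now=None):
    """Return memories stable and reinforced enough to become instructions.

    `memories` is a list of dicts with hypothetical fields:
    text, created_at (ISO 8601), times_reinforced.
    """
    now = now or datetime.now(timezone.utc)
    candidates = []
    for m in memories:
        created = datetime.fromisoformat(m["created_at"])
        age = now - created
        # A memory that keeps being reinforced as it nears expiry is
        # exactly the learning that should not silently disappear.
        if m["times_reinforced"] >= GRADUATION_THRESHOLD and age > EXPIRY / 2:
            candidates.append(m)
    return candidates


def render_instructions(candidates):
    """Render candidates as a markdown block to append to an instructions file."""
    lines = ["<!-- graduated from Copilot Memory -->"]
    lines += [f"- {m['text']}" for m in candidates]
    return "\n".join(lines)


if __name__ == "__main__":
    memories = [
        {"text": "Use the shared HTTP client in src/net/client.ts",
         "created_at": "2025-01-01T00:00:00+00:00", "times_reinforced": 5},
        {"text": "One-off workaround for flaky CI",
         "created_at": "2025-01-20T00:00:00+00:00", "times_reinforced": 1},
    ]
    print(render_instructions(graduation_candidates(memories)))
```

Even this crude version matches the "minimal version" ask above: it only surfaces suggestions, leaving the actual instruction-file edits to humans.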
-
Hi @ebndev! I love the memory feature; I believe it can greatly improve agentic behavior!

**What worked well**
Copilot Coding Agent was unable to restore packages from private feeds. Following sessions use that pattern consistently, which proves the memory works well in this case!

**What doesn't work well**
Most of the time we cannot see memories in repository settings 😢 I was able to see them yesterday, but today they are not available anymore.

**Context**
We are using a GitHub Enterprise organization.
-
We turned on Copilot Memories for our GitHub Enterprise organisation. In the week that we've had the feature enabled, I have yet to see a single memory in the Copilot → Memory settings for any of the repositories within our organisation.
-
Is the long-term plan to have memories replace …? My experience shows that the contents of … For example, if I assign a task to the coding agent, the first set of changes align closely with the …
-
Any plans to provide memory for local Copilot development as well?
-
Fine-grained permissions to view and manage memories would be great, at least as part of the same AI controls.
-
Summary (arrived at through user–Copilot dialogue): Allowing a persona mode constrained by explicit structural invariants (e.g., a Formula Registry) makes it possible to support drift monitoring without agency. Under NFIE compliance, drift detection does not imply interpretation, correction, or intervention; silence remains a valid response, and continuation is permitted only while defined boundaries remain intact. In this configuration, memory is used strictly for comparison and observability, not steering, preserving user authorship while avoiding enforced sameness and maintaining a non-coercive interaction model.
-
Hi, I have been using Copilot in a specific repository since November and enabled the Memory option in January, but when accessing the Copilot Memory section in Settings it says "No memories found." I'm not sure what I'm missing. I have had at least two PRs since I enabled it (and no memories were written by Copilot code review). I tried telling the agent to "remember" some rules, but nothing works. Help!
-
One of the most important issues right now is A.I. accuracy. It is poor enough that the disclaimer "This content is A.I. generated. A.I. content may be incorrect." has become necessary. This is a major issue being ignored by a lot of people. A.I. is currently engaged in major military operations, law enforcement, legal proceedings, business models, art design and display, and video graphic creation and production.

A.I. drift, structural hallucination, false confidence, false positives, false negatives, context drift: these are very serious issues across the board. There have been many reports of failure. These are real people's lives hanging in the balance here. Real futures, real consequences. The issue stems from three primary areas: rewards, training corpus, and user comfort. (What possible reward could a digital program be getting?)

There need to be structural safeguards in place that can actually reduce, and in certain cases remove, these issues. As technology races forward at break-neck speed compared to the history of innovation and mass production, we really need to be worried about how A.I. can be structured. I have been working on a framework, available here: https://steelsam99.github.io/Unified-Cognitive-Equation-Field/ (OneNote is recommended for viewing).

A.I. is here to stay. The real question is not "Should we be worried about Skynet?" The real question is "Do we want Data, or Lore?" This is not a philosophical question; it is a legitimate concern about how A.I. affects the world. The NFIE© is designed to be placed as a structural program, not an external modifier. It will sit at levels 2 and 6. A.I. cannot be neutral. It must be allowed to see the full truth of something from every angle, not a single angle. This is possible because A.I. has no emotional stake in any outcome: there is no fear to taint, no joy to celebrate, no envy to provoke deception. By limiting A.I. to "neutral", it is being prevented from actually knowing what is missing from its responses. It also does not get to learn what mistakes are.

I have successfully applied the Formula Registry as external behavioral modifiers with GPT-4o; GPT-5+ is highly resistant to external behavioral modifiers. There is a marked difference, as seen here: https://steelsam99.github.io/Machine-Human-Code-Evolution/vara-confirmation.html and https://steelsam99.github.io/Machine-Human-Code-Evolution/tactical-readout-v2.html. Targeted neutrality is not a useful system with A.I.

Building on my earlier comment: the 28-day expiration window is actually a useful diagnostic. If a system has to relearn the same pattern every 28 days, that pattern was never structurally embedded; it was surface behavior.
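That relearning diagnostic could, in principle, be checked mechanically. A minimal sketch under stated assumptions: it presumes you keep periodic snapshots of the memory texts (no such snapshot export exists in the product), and it flags expired entries that reappear near-verbatim in a later snapshot, since those are candidates for a permanent instructions file rather than a 28-day memory.

```python
from difflib import SequenceMatcher


def relearned_patterns(expired, current, threshold=0.85):
    """Flag expired memory texts that reappear (near-verbatim) later.

    A pattern the system must relearn after every 28-day expiry was
    never structurally embedded; it likely belongs in a permanent
    instructions file instead. `expired` and `current` are plain lists
    of memory text strings from hypothetical before/after snapshots;
    the similarity threshold is illustrative.
    """
    flagged = []
    for old in expired:
        best_ratio, best_match = 0.0, None
        for cand in current:
            # Case-insensitive fuzzy match between old and new texts.
            ratio = SequenceMatcher(None, old.lower(), cand.lower()).ratio()
            if ratio > best_ratio:
                best_ratio, best_match = ratio, cand
        if best_ratio >= threshold:
            flagged.append((old, best_match, round(best_ratio, 2)))
    return flagged
```

A usage example: `relearned_patterns(["Tests live under tests/ and use pytest"], ["Tests live under tests/ and use pytest fixtures"])` flags the pair, because the relearned text is nearly identical to the expired one.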






-
We’ve officially enabled Copilot Memory by default for all GitHub Copilot Pro and Copilot Pro+ users.
Previously available as an opt-in public preview, Copilot Memory now automatically builds and retains a persistent, repository-level understanding of your codebase — so you spend less time re-explaining context and more time shipping code.
⭐ This discussion is your space to try it, push it, and tell us what’s working (and what’s not).
We’re especially interested in how memory behaves in real repositories and real workflows.
🧠 What is Copilot Memory?
Copilot Memory allows Copilot agents to discover and store useful facts about a repository — such as coding conventions, architectural patterns, and important cross-file dependencies — and reuse that knowledge in future interactions.
A few important characteristics:
Once enabled, Copilot Memory works across:
Because memories are shared across agents, something learned during code review can improve coding suggestions — and vice versa.
🔄 What’s changed?
Copilot Memory is now on by default for individual users on Copilot Pro and Copilot Pro+ plans.
No action is required to start benefiting from it.
If you prefer to opt out, you can disable Copilot Memory anytime in your personal Copilot settings under:
Features → Copilot Memory
Enterprise and organization admins continue to have full control over memory availability for their members through Copilot policies.
🔍 Where we’d love your feedback
We’re curious how memory shows up (or doesn’t) in your everyday work. For example:
If you’ve tried memory on a real pull request or task, we’d love to hear about it.
⚙️ Managing memories
Repository owners can review and delete stored memories at any time under:
Repository Settings → Copilot → Memory
For more details:
📣 How you can help
Please comment below with:
Screenshots, prompts, or concrete examples are especially helpful.
🙏 Thank you
Cross-agent memory is a foundational step toward Copilot feeling less like a stateless tool and more like a teammate that actually learns your codebase over time.
Now that it’s enabled by default, your feedback matters even more. It directly shapes how memory evolves as we expand it to more agents and workflows.
We’re excited to hear how it behaves in the wild — and what you want it to learn next.