Microsoft Copilot Adds AI Memory, Web Actions & Vision — Catching Up with Rivals

Image Credit: Sunrise King | Unsplash

Microsoft has introduced a wide-ranging update to its AI assistant, Copilot, coinciding with the company’s 50th anniversary. Announced on April 5, 2025, the update incorporates multiple features, including memory retention, personalization options, web-based task execution, podcast generation, visual analysis, and advanced research tools. Built on OpenAI models, Copilot now aligns more closely with competitors like ChatGPT and Claude, reflecting Microsoft’s ongoing efforts to enhance its AI offerings.

[Read More: Top 10 AI Chatbots You Need to Know in 2025]

Memory and Personalization Features Added

A central addition to Copilot is a memory function, allowing the AI to store user details such as preferences, interests, and personal milestones like birthdays. This enables tailored responses and proactive suggestions based on prior interactions. Microsoft states that users can select which information Copilot retains or disable the feature entirely, offering control over data usage.

The company also plans to introduce customization options for Copilot’s appearance, with potential designs including the return of Clippy, the animated assistant from past Microsoft software. These plans remain at an early stage, with no timeline announced. Microsoft AI CEO Mustafa Suleyman described the intent, saying, “Copilot is more than an AI, it’s yours,” emphasizing a focus on individual user experiences.

[Read More: AI Chatbot Hacks Google Chrome’s Password Manager? ChatGPT Vulnerability Exposed]

Web Actions and Shopping Capabilities Introduced

Copilot now includes an Actions feature, enabling it to perform tasks online, such as booking event tickets, making restaurant reservations, or completing purchases. This mirrors functionalities in OpenAI’s Operator agent and Amazon’s Nova Act model. A related shopping tool allows Copilot to research products, compare prices, and identify discounts, with initial integration limited to select platforms like OpenTable and Expedia. Microsoft indicates broader compatibility may develop over time, though specifics remain unclear.

[Read More: Microsoft Refines Copilot Features in Windows 11 Amidst Broader AI Innovations]

Visual Analysis Expands Across Platforms

The update extends Copilot Vision, first launched on the web in December 2024, to Windows and mobile devices. On Windows, the feature can analyze content displayed across applications and files, responding to questions or assisting with tasks. For iOS and Android users, it can interpret images from the phone camera or photo library. The rollout begins with Windows Insiders next week, followed by wider access, while mobile availability starts today. Microsoft aims to integrate this tool seamlessly across its ecosystem, though its performance in diverse scenarios is yet to be fully evaluated.

[Read More: Microsoft’s AI for Science Lab Accelerates Breakthroughs in Drug Discovery & Climate Research]

Research Tools and Audio Output Enhanced

For complex projects, Copilot’s new Deep Research feature can process large volumes of documents and online sources. Integrated with Bing, it provides AI-generated responses within the search engine, aiming to streamline information gathering. Additionally, Copilot can now convert research or user content into podcast-style audio explanations, a capability similar to tools like Google’s NotebookLM. Users can ask questions during playback, adding an interactive element. These features target productivity, with initial versions expected to evolve based on user feedback.

[Read More: ChatGPT Deep Research vs Grok 3 DeepSearch: Which AI Wins?]

New Workspace Feature: Pages

The update includes Pages, a tool that organizes notes, research, and documents into a single workspace. Copilot can assist in structuring content, offering a platform for collaboration or project planning. Designed to simplify information management, its effectiveness will depend on how well it integrates with existing user practices.

[Read More: Thriving, Not Just Surviving: 10 Essential Skills to Outpace AI in the New Era]

Deployment Details and Market Context

Microsoft began rolling out these features on April 5, 2025, in preliminary form, with refinements planned over the coming weeks and months. Availability varies by feature, platform, and region, indicating a gradual rollout. While ChatGPT has offered memory since last year and Google Gemini already includes visual analysis, Microsoft’s simultaneous launch of multiple tools signals its intent to remain competitive. The update leverages the company’s partnership with OpenAI, though it introduces no entirely new concepts to the AI field.

[Read More: Microsoft’s AI-Powered Assistant: Can Copilot Fly Without a Pilot?]

Analysis: Potential and Uncertainties

The update positions Copilot as a more versatile tool, combining memory, visual capabilities, and task automation. Its success will hinge on practical execution—such as the Actions feature’s web compatibility, Vision’s accuracy across contexts, and Deep Research’s efficiency. User control over memory retention addresses privacy considerations, though adoption may depend on trust in data handling. As rival companies advance their AI offerings, Microsoft’s ability to refine these tools and meet user expectations will shape Copilot’s role in the market.

[Read More: Voice Cloning Just in a Few Seconds! Exploring Microsoft's Controversial AI Tool]

Source: The Verge
