Mistral AI has launched cloud-based Vibe remote agents, powered by its new open-weights Mistral Medium 3.5 model, enabling developers to offload complex coding tasks for parallel, autonomous execution in the cloud. This development signals a shift toward scalable AI-driven development workflows, reducing developer bottlenecks by allowing agents to run in the background and notify users upon completion. The 128-billion-parameter dense model is designed for long-horizon tasks and strong reasoning.
What Are Mistral's New AI Agents?
Mistral AI's latest offering, the Vibe remote agents, turns coding sessions into asynchronous, cloud-based operations. These agents can be launched from the Mistral Vibe CLI or directly within Le Chat, letting developers delegate tasks without constant supervision. They run in parallel, freeing developer time for other critical work.

Powering these agents is the new Mistral Medium 3.5 model, now in public preview. The model, with a 256k-token context window, combines instruction-following, reasoning, and coding capabilities in a single set of weights. It can also be self-hosted on as few as four GPUs, making it practical for a range of deployments.

Medium 3.5 scores 77.6% on SWE-Bench Verified, outperforming models such as Qwen3.5 397B A17B on coding benchmarks, according to Mistral AI. It also achieves 91.4 on τ³-Telecom, indicating strong agentic (autonomous task execution) capabilities. This performance positions it as a robust tool for automated development.
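The 256k-token context window sets a practical ceiling on how much code and conversation an agent can hold at once. A rough pre-flight check can be sketched as follows; the ~4-characters-per-token heuristic and the helper names here are assumptions for illustration, not part of Mistral's tooling:

```python
# Rough estimate: does a repo slice plus prompt fit in a 256k-token window?
# Assumption: ~4 characters per token, a common heuristic for code and
# English prose; real tokenizers vary, so leave generous headroom.

CONTEXT_WINDOW = 256_000   # Mistral Medium 3.5's advertised context size
CHARS_PER_TOKEN = 4        # heuristic, not the model's actual tokenizer
RESERVED_OUTPUT = 16_000   # tokens held back for the model's reply

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN + 1

def fits_in_context(*texts: str, reserved: int = RESERVED_OUTPUT) -> bool:
    """True if the combined inputs leave room for `reserved` output tokens."""
    used = sum(estimate_tokens(t) for t in texts)
    return used + reserved <= CONTEXT_WINDOW

prompt = "Refactor the payments module to use the new retry helper."
source = "def charge(card, amount):\n    ...\n" * 2_000  # ~68k characters
print(fits_in_context(prompt, source))  # → True
```

A real deployment would use the model's own tokenizer for an exact count; the point is only that long-horizon tasks need an explicit token budget before dispatch.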
How Do Remote Agents Transform Development?
The Vibe remote agents move the traditionally local "vibe coding" experience (a term coined by Andrej Karpathy for the nuanced, iterative process of human-AI code collaboration) to the cloud. Developers are no longer tied to their local machines, and multiple coding tasks can run concurrently. When an agent completes its work, it can open a pull request on GitHub and notify the developer for review, rather than requiring real-time monitoring of every step. This automation is designed for high-volume, well-defined tasks such as module refactors, test generation, dependency upgrades, CI investigations, and bug fixes. The agents integrate with existing systems such as Linear and Jira for issue tracking and Sentry for incidents.

While powerful, autonomous AI agents carry risks. Incidents such as a Cursor AI agent accidentally deleting a startup's database highlight the need for robust safeguards and human oversight, as reported by Business Insider. They underscore the importance of transparent operations and explicit approval mechanisms for sensitive tasks when deploying AI agents.
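The fan-out-and-notify pattern described above can be sketched locally. This is not Mistral's API; `run_agent_task` and the task strings are hypothetical stand-ins illustrating concurrent delegation with per-task completion handling:

```python
# Hypothetical sketch of the remote-agent workflow: fan out well-defined
# tasks, run them concurrently, and act on each as soon as it completes.
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_agent_task(task: str) -> str:
    # Stand-in for a cloud agent doing real work (refactor, test
    # generation, dependency upgrade); a real agent would finish by
    # opening a pull request for human review.
    return f"PR opened for: {task}"

tasks = [
    "refactor auth module",
    "generate tests for billing",
    "upgrade lodash dependency",
]

results = []
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(run_agent_task, t): t for t in tasks}
    for fut in as_completed(futures):
        # Notification step: handle each result as it lands instead of
        # blocking on any single task.
        results.append(fut.result())

print(sorted(results))
```

The design point is the shape of the loop: the developer's session is free as soon as the tasks are submitted, and review happens per pull request rather than in real time.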
What Does Le Chat's New Work Mode Offer?
Beyond coding, Mistral's Le Chat platform now features a new "Work mode" for complex, multi-step tasks. Powered by the Medium 3.5 model and a new agent harness, Work mode acts as an execution backend, enabling the assistant to read, write, and use multiple tools in parallel. This allows cross-tool workflows such as catching up on emails, messages, and calendars, or preparing for meetings by pulling context and talking points from multiple sources. Work mode can also handle research, synthesizing information across the web and internal documents into structured reports. Agents in Work mode provide transparent action logs and require explicit approval for sensitive actions such as sending messages or modifying data.

Mistral Medium 3.5 and its associated agent capabilities are available today on Pro, Team, and Enterprise plans. The API is priced at $1.5 per million input tokens and $7.5 per million output tokens. The model's open weights are available on Hugging Face, and it is also hosted on NVIDIA GPU-accelerated endpoints and as an NVIDIA NIM containerized inference microservice.
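At $1.5 per million input tokens and $7.5 per million output tokens, the cost of an API job is straightforward to estimate. A minimal sketch using the published rates; the token counts in the example are made-up illustration numbers:

```python
# Cost estimate from Mistral's published Medium 3.5 API pricing.
INPUT_PRICE = 1.5 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 7.5 / 1_000_000   # dollars per output token

def job_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at the listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: a refactor task reading 200k tokens of code and emitting 30k.
print(round(job_cost(200_000, 30_000), 3))  # → 0.525
```

Because output tokens cost five times as much as input tokens at these rates, long agent transcripts are dominated by generation cost rather than context size.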