Claude Opus 4.7 Is Now Generally Available

4/17/2026
Anthropic has officially released Claude Opus 4.7, making the model broadly available across the Claude product line and major cloud platforms. Announced on April 16, 2026, the new version is presented as a direct upgrade to Opus 4.6, with the company highlighting improvements in advanced software engineering, multimodal understanding, long-horizon reliability, and higher-value professional work. In Anthropic's description, Opus 4.7 is better suited to difficult coding assignments that previously required close human supervision, and it is more consistent on extended tasks that unfold over many steps.

A central theme of the release is precision. Anthropic says Opus 4.7 follows instructions much more closely, to the point that prompts written for earlier models may now behave differently because the new system interprets directions more literally; the company advises users to re-tune prompts and harnesses accordingly. The release notes also say the model is more rigorous in complex task execution, pays closer attention to constraints, and actively looks for ways to verify its own work before returning results. That framing positions Opus 4.7 not only as a stronger coding model, but as a more dependable one for structured, multi-stage workflows.

The update also expands Claude's visual capabilities in a measurable way. Opus 4.7 can process images up to 2,576 pixels on the long edge, or roughly 3.75 megapixels, which Anthropic says is more than triple the image capacity of prior Claude models.
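The stated long-edge limit can be checked with a small downscaling helper. This is a minimal sketch: the 2,576-pixel figure comes from the announcement, but the function and its name are illustrative, not part of any official Anthropic SDK.

```python
# Sketch: compute a downscale that fits the stated Opus 4.7 image limit
# (~2,576 px on the long edge, ~3.75 MP). The limit is from the release
# notes; the helper itself is illustrative, not an official API.

LONG_EDGE_LIMIT = 2576  # pixels, per the announcement

def fit_to_long_edge(width: int, height: int, limit: int = LONG_EDGE_LIMIT) -> tuple[int, int]:
    """Return (w, h) scaled so the longer side is at most `limit`,
    preserving aspect ratio. Images already within the limit pass through."""
    long_edge = max(width, height)
    if long_edge <= limit:
        return width, height
    scale = limit / long_edge
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 4K screenshot (3840x2160) would be scaled down before sending:
print(fit_to_long_edge(3840, 2160))   # -> (2576, 1449), about 3.73 MP
print(fit_to_long_edge(1920, 1080))   # -> (1920, 1080), already within the limit
```

A 4K capture lands at 2576×1449, roughly 3.73 megapixels, which matches the "roughly 3.75 MP" figure in the release notes.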
The company links that change to use cases that depend on fine visual detail, including computer-use agents reading dense screenshots, data extraction from complex diagrams, and tasks that require pixel-accurate references. Anthropic also says Opus 4.7 produces more polished professional outputs in areas such as interfaces, slide decks, and documents.

The benchmark materials released alongside the launch show Opus 4.7 improving on Opus 4.6 in a number of shared evaluations. On SWE-bench Pro, Opus 4.7 reached 64.3%, up from 53.4% for Opus 4.6. It posted 87.6% on SWE-bench Verified, 69.4% on Terminal-Bench 2.0, 77.3% on MCP-Atlas, 78.0% on OSWorld-Verified, 64.4% on Finance Agent v1.1, 94.2% on GPQA Diamond, and 91.5% on MMLU. In economically valuable knowledge work, Anthropic reported that Opus 4.7 scored 1753 Elo on GDPVal-AA, ahead of Opus 4.6 at 1619, GPT-5.4 at 1674, and Gemini 3.1 Pro at 1314. On OfficeQA Pro, a chart shared by the company showed Opus 4.7 at 80.6%, compared with 57.1% for Opus 4.6, 51.1% for GPT-5.4, and 42.9% for Gemini 3.1 Pro. Other charts pointed to gains in long-context reasoning, biomolecular reasoning, long-term coherence, and multilingual and multimodal coding tasks.

Anthropic is also using Opus 4.7 as an early deployment vehicle for new cybersecurity safeguards. After announcing Project Glasswing the previous week, the company said it would keep Claude Mythos Preview limited and first test new cyber protections on less capable models. Opus 4.7 is the first model released under that plan.
Anthropic says Opus 4.7's cyber capabilities are below those of Mythos Preview and notes that, during training, the team experimented with reducing those capabilities in a differentiated way. The release includes safeguards that automatically detect and block requests suggesting prohibited or high-risk cybersecurity use. At the same time, legitimate security practitioners are being invited into a new Cyber Verification Program for approved work such as vulnerability research, penetration testing, and red-teaming.

In early-access and internal testing, Anthropic says Opus 4.7 also showed stronger performance in real-world professional work. The company describes it as a more effective finance analyst than Opus 4.6, producing more rigorous analyses and models, more professional presentations, and tighter task integration. It also says the model is better at using file-system-based memory, retaining important notes across long, multi-session workflows and reducing the amount of fresh context needed for follow-on tasks.

On alignment and safety, Anthropic reports a profile similar to Opus 4.6 overall, with low rates of concerning behaviors such as deception, sycophancy, and cooperation with misuse. The company says Opus 4.7 improves on some measures, including honesty and resistance to malicious prompt injection, while being modestly weaker on others. Its alignment assessment describes the model as largely well-aligned and trustworthy, though not fully ideal; Mythos Preview remains the best-aligned model in Anthropic's own evaluations.

The launch also brings product changes beyond the model itself.
A new xhigh effort level has been added between high and max, giving users more control over the tradeoff between latency and reasoning depth on hard problems. In Claude Code, Anthropic recommends starting with high or xhigh effort for coding and agentic scenarios, and says xhigh has become the default for all plans. On the Claude Platform API, public beta support for task budgets is rolling out, allowing developers to steer token spending across longer runs. Claude Code is also gaining a new /ultrareview command that opens a dedicated review session to inspect changes and flag bugs or design issues. Pro and Max users are getting three free ultrareviews, and auto mode is being extended to Max users.

Anthropic notes that upgrading from Opus 4.6 to Opus 4.7 may affect token usage in two ways. First, the updated tokenizer can map the same input to roughly 1.0 to 1.35 times as many tokens, depending on content type. Second, Opus 4.7 tends to think more at higher effort levels, particularly later in agentic workflows, which can increase output token volume. The company says users can manage this with the effort parameter, task budgets, or more concise prompting, and adds that its internal coding tests still showed a favorable net effect on usage.

Opus 4.7 is available now through Claude products, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, with pricing unchanged from Opus 4.6 at $5 per million input tokens and $25 per million output tokens.
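The two figures above, the 1.0–1.35x tokenizer multiplier and the unchanged $5/$25 per-million-token pricing, are enough to bound the cost impact of an upgrade. A minimal sketch, using only numbers from the announcement; the function and parameter names are illustrative:

```python
# Sketch: rough cost bounds for moving a workload from Opus 4.6 to Opus 4.7,
# using only figures from the announcement: the new tokenizer may map the same
# input to ~1.0-1.35x as many tokens, and pricing is unchanged at $5 per
# million input tokens and $25 per million output tokens. The helper and its
# parameter names are illustrative, not an official calculator.

INPUT_PRICE_PER_M = 5.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 25.00  # USD per million output tokens

def estimated_cost(input_tokens: int, output_tokens: int,
                   tokenizer_multiplier: float = 1.0) -> float:
    """Estimated USD cost, with the input count scaled by the tokenizer
    multiplier (1.0-1.35 per the release notes, depending on content type)."""
    scaled_input = input_tokens * tokenizer_multiplier
    return (scaled_input / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Bounds for a run with 2M input tokens and 500k output tokens:
low = estimated_cost(2_000_000, 500_000, 1.0)    # $10.00 input + $12.50 output
high = estimated_cost(2_000_000, 500_000, 1.35)  # $13.50 input + $12.50 output
print(f"${low:.2f} to ${high:.2f}")              # prints "$22.50 to $26.00"
```

Note this bounds only the tokenizer effect; the second factor Anthropic mentions, increased thinking at higher effort levels, would show up as a larger output token count rather than a fixed multiplier.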