Microsoft just launched a powerful AI prototype that doesn’t just detect malware; it thinks like a security analyst, reverse engineering files on its own. That sets a new bar for cybersecurity and could revolutionize how we fight cyber threats.
In a major leap forward for cybersecurity, Microsoft has unveiled Project Ire, an autonomous AI agent capable of classifying malware at scale without human intervention. The tool, still in its prototype phase, is designed to analyze and reverse engineer software, determining whether it’s malicious or benign.
This isn’t just another antivirus upgrade. Project Ire is powered by large language models (LLMs) and behaves like a digital security analyst: it reasons through a sample, summarizes its findings, validates its conclusions, and leaves behind a full chain-of-evidence log for transparency.
Key Takeaways:
- Autonomous Malware Classification: Project Ire reverse-engineers files and identifies threats without human input.
- Built with LLMs and Custom Tools: Combines Microsoft’s AI, memory forensics (Project Freta), and tools like Ghidra and angr.
- Impressive Accuracy: Correctly flagged 90% of test malware with only a 2% false positive rate.
- Scalable and Fast: Designed to detect novel threats at scale, even from unknown file sources.
- Will Power Microsoft Defender: The prototype is being deployed inside Microsoft’s Defender organization as a tool called Binary Analyzer for live threat detection.

According to Microsoft, Project Ire automates what experts consider the gold standard in malware analysis—full reverse engineering of unknown software, without needing clues about the file’s origin or intent.
The system doesn’t rely on signatures or predefined rules. Instead, it uses decompilers, memory analysis sandboxes, and various reverse engineering tools to deeply inspect a file’s code, structure, and behavior.
The process is methodical and multi-layered:
- It first identifies the file type and any structures of interest.
- It then reconstructs the control flow graph (a map of how the code runs).
- Specialized tools analyze key functions and behaviors.
- Finally, a validator double-checks the system’s conclusions and classifies the file.
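As a rough illustration, the steps above can be sketched as a staged pipeline in which every stage appends to a shared evidence log before a validator issues the final verdict. Everything in this sketch (stage names, the placeholder findings, the verdict rule) is hypothetical and greatly simplified; it is not Microsoft’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisReport:
    verdict: str = "unknown"
    evidence: list = field(default_factory=list)

    def log(self, stage: str, finding: str) -> None:
        # Every stage appends to the chain-of-evidence log for later review.
        self.evidence.append(f"{stage}: {finding}")

def identify_file_type(data: bytes, report: AnalysisReport) -> str:
    # Step 1: identify the file type (here, just a PE-header check).
    kind = "pe" if data.startswith(b"MZ") else "unknown"
    report.log("triage", f"file type = {kind}")
    return kind

def reconstruct_cfg(data: bytes, report: AnalysisReport) -> dict:
    # Step 2: stand-in for real control-flow-graph recovery via a decompiler.
    cfg = {"entry": ["block_1"], "block_1": []}
    report.log("cfg", f"recovered {len(cfg)} basic blocks")
    return cfg

def analyze_behaviors(cfg: dict, report: AnalysisReport) -> list:
    # Step 3: stand-in for specialized analysis of key functions.
    findings = ["writes_to_disk"]  # placeholder finding
    report.log("behavior", ", ".join(findings))
    return findings

def validate_and_classify(findings: list, report: AnalysisReport) -> str:
    # Step 4: a validator double-checks conclusions before the final verdict.
    verdict = "malicious" if "self_modifying_code" in findings else "benign"
    report.log("validator", f"verdict = {verdict}")
    report.verdict = verdict
    return verdict

def classify(data: bytes) -> AnalysisReport:
    report = AnalysisReport()
    identify_file_type(data, report)
    cfg = reconstruct_cfg(data, report)
    findings = analyze_behaviors(cfg, report)
    validate_and_classify(findings, report)
    return report
```

Running `classify(...)` on a file’s bytes returns both a verdict and the full evidence trail, which is the property the article highlights: each conclusion can be traced back to the stage that produced it.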
All of this happens autonomously, and fast. To ensure accountability, the system documents every step it takes, creating an evidence trail that cybersecurity teams can review if needed.
This kind of transparency is rare in AI. Microsoft’s choice to build Project Ire with traceability in mind shows how seriously they’re taking responsible AI, especially in sensitive areas like malware detection.
In initial tests on public Windows driver datasets, the system correctly flagged 90% of malicious files and had a false positive rate of just 2%. In a harder test of nearly 4,000 “hard-target” files, it still maintained strong accuracy, with only a 4% false positive rate.
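To make those figures concrete, here is how detection rate and false positive rate are computed from a confusion matrix. The counts below are made up to match the reported percentages; they are not Microsoft’s actual test sizes.

```python
def detection_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute detection (true positive) rate and false positive rate."""
    return {
        "detection_rate": tp / (tp + fn),       # share of malware correctly flagged
        "false_positive_rate": fp / (fp + tn),  # share of benign files misflagged
    }

# Illustrative counts consistent with the reported figures:
# 90 of 100 malicious files flagged, 2 of 100 benign files misflagged.
metrics = detection_metrics(tp=90, fn=10, fp=2, tn=98)
print(metrics)  # {'detection_rate': 0.9, 'false_positive_rate': 0.02}
```

The two rates move independently: a 2% false positive rate sounds small, but at the scale Defender operates, even a small rate translates into many benign files flagged per day, which is why the harder 4% figure still matters.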
Microsoft plans to deploy the Project Ire prototype inside its Defender organization, where it will operate under the name Binary Analyzer. It will help security teams classify threats faster, more accurately, and with less manual effort.
A Vision for Scalable Memory-Based Malware Detection
The ultimate goal? To detect brand-new malware directly in memory, across systems, at massive scale. This would allow for near real-time threat detection in enterprise environments—cutting down response times and potentially neutralizing attacks before they spread.
This launch coincides with Microsoft’s 2024–25 vulnerability bounty program results, where the company awarded a record-breaking $17 million to ethical hackers and researchers. One individual even earned $200,000 for a single discovery.
The clear takeaway? Microsoft is doubling down on security—combining human expertise and AI to build a smarter, faster cyber defense system for the future.