New integration with Midpage embeds trusted legal research sources within AI legal agent Lito; internal benchmark findings shared at Legalweek reveal why purpose-built legal AI remains essential for high-stakes document work
Legalweek 2026 – Litera, a global leader in legal AI technology solutions, announced an integration with Midpage, an AI-powered legal research platform trusted by 200+ law firms, to bring U.S. case law and statutes directly into Lito, Litera’s award-winning AI legal agent. The integration makes Lito the first legal AI assistant to combine advanced generative AI capabilities, deterministic rules-based engines, proprietary firm intelligence, and now Midpage’s industry-leading legal research — all within the Microsoft 365 environment where lawyers already work. In conjunction with the announcement, Litera is sharing new internal benchmark research at Legalweek examining how general-purpose large language models perform on complex legal redlining tasks compared to purpose-built legal comparison technology.
“Every legal AI tool has access to the same foundation models,” said Adam Ryan, Chief Product Officer at Litera. “The difference is what surrounds them. Lito combines the best large language models with our rules-based engines, cutting-edge firm intelligence data, and now deep legal research — all integrated where lawyers already work.”
The Midpage integration will deliver U.S. statutes and case law to Lito, adding to Lito’s legal drafting environment and further expanding Litera’s ecosystem of more than 60 integrations, including NetDocuments, iManage, Courtroom Insight, and UniCourt. By embedding trusted legal research sources within Lito, Litera continues to deepen the intelligence available directly inside everyday workflows.
Through the Lito chat experience, users can select U.S. statutes or case law as sources to query against a document or a specific legal question. Practical use cases include checking whether an agreement complies with a particular statute, uploading a document alongside relevant legal authority for contextual analysis, or generating a case summary to share with clients — all without leaving Word or Outlook. Lito users on Litera One cloud packages will have access to legal research capabilities through this integration, with options to expand usage through a Midpage subscription.
“Navigating case law has historically been so complex that it was really only done for complex litigation,” said Otto von Zastrow, CEO of Midpage. “AI agents give every attorney the power of a big legal research team. The agent reads hundreds of cases and finds on-point precedents with quotes and hyperlinks. We’re glad to bring this to tools like Lito that already have access to your documents and important context.”
Internal Research Examines AI Performance in Legal Redlining
Alongside this announcement, Litera is sharing findings from internal Quality Engineering research evaluating how different AI approaches perform on complex legal redlining tasks — data that underscores why the architecture behind a legal AI tool matters as much as its capabilities.
The research compared Litera Compare with leading general-purpose large language models — including Gemini 3, Claude Opus 4.5, and ChatGPT 5.2 — across long-form legal documents containing tables, images, embedded objects, headers and footers, and other structural elements. The results illustrate a clear distinction: while large language models excel at research and drafting assistance, generating structured, defensible legal artifacts requires technology purpose-built for legal formatting standards and professional exchange.
Key findings include:
- Structural limitations: General-purpose LLMs were unable to generate usable redlines for non-text elements such as tables, images, embedded objects, headers/footers, and footnotes.
- Accuracy declines with length: Even in short documents, general LLMs topped out at roughly 90% accuracy — a threshold that remains too low for legal work, where a single missed change can carry significant consequences. In a 200-page document test, one model’s text accuracy dropped to roughly 40%, with others declining to approximately 70%.
- Description vs. redline: General-purpose LLMs can describe what changed in a document but cannot produce an actual redline or track changes file suitable for exchange with counterparties. Describing a change and delivering the legal artifact that lawyers need are fundamentally different outcomes.
- Completeness over speed: While some models processed comparisons quickly, output reliability and coverage varied significantly across longer, more complex documents.
Litera Compare powers redlining capabilities within Lito, enabling lawyers to produce accurate, industry-standard outputs while remaining embedded in their drafting environment. Together, the Midpage integration and Compare capabilities reflect Litera’s broader approach: combining the intelligence of large language models with the precision of purpose-built legal engines, so lawyers get the best of both where it matters most.
Litera will discuss both the Midpage integration and the research findings at Legalweek, March 9–12, 2026, in New York, NY, as part of broader conversations about how legal AI is evolving beyond experimentation toward measurable, reliable performance.
