Insights from Josh Rosenzweig: How Morgan, Lewis & Bockius LLP Leverages R&D to Navigate AI Adoption
- Cosmonauts
- Oct 3, 2025
- 6 min read

Future Lawyer USA is just around the corner, and we had the pleasure of interviewing Josh Rosenzweig, Senior Director of AI and Innovation at Morgan, Lewis & Bockius LLP, who will be speaking on Private Practice Day, October 29th.
Josh drives AI strategy and innovation at Morgan Lewis, developing governance models that align technology investments with client value and advancing digital transformation across the firm. His work ensures innovation initiatives deliver measurable impact and enhance client experiences.
In his industry keynote, “From Experiment to Engine: The Rise of R&D as a Strategic Imperative in Law Firms”, Josh will share insights on how law firms can leverage research and development to turn innovation from isolated experiments into engines of sustainable growth, operational efficiency, and strategic client value.
Enjoy the interview and hope to see you at Future Lawyer USA!
AI is one of the most hyped topics in the industry right now. From your R&D work, what do you see as the most practical, near-term applications of AI inside a law firm — and which areas are still years away from real impact?
The most practical applications of AI today are in productivity and task-level support. These are the areas where lawyers can use AI tools to save time on the more repetitive but necessary parts of their work, such as drafting emails or memos, summarizing transcripts, pulling together research, or triaging a stack of documents.
These types of tasks are often handled individually, and that’s part of why adoption has been so fast—because lawyers can test and integrate AI into their personal workflows without needing to change the entire way a matter is run.
I think this adoption path reflects how many of us first encountered generative AI in our personal lives, using it to write, brainstorm, or research. It’s natural to bring that into our workday, and it’s a low-risk entry point for a profession that is rightfully cautious.
What’s still years away is embedding AI into end-to-end legal workflows. That’s a very different challenge. You can’t just map a workflow on paper and say, “Let’s drop AI into this step.” More often than not, the workflow itself has to be redesigned. In some cases, we even need to ask whether the task in question is necessary at all, or whether it can be redefined in light of what AI can do. In our R&D work, we’ve also seen firsthand how important model selection is. Not all AI models are created equal, and the differences can dramatically affect outcomes when you’re dealing with sensitive, nuanced legal tasks.
Your role blends research with actual innovation projects. What does the process look like for moving an idea from experimentation into something that’s embedded in daily legal practice?
At our firm, we place a lot of emphasis on experimentation and R&D before we scale any new idea. We don’t measure success by how many new tools we roll out each year. Instead, we measure it by how much we’ve improved the way we deliver legal services and how much impact we’ve had on our clients.
The process usually starts with a client. Sometimes it’s a direct request from a client to rethink how we deliver a service. Other times, it’s a collaborative effort to help in-house teams gain more transparency and efficiency, or to interact more effectively with the work we do for them. From there, we test ideas quickly, gather evidence, and ask a simple but tough question: Does this idea truly differentiate us as a firm, or the specific practice we’re working with? If the answer is yes, then it’s worth pushing forward into broader adoption.
We also put structure around experimentation. We treat every idea the way a venture startup would evaluate an early-stage company that has high potential but is still unproven. That mindset helps us stay disciplined. Not every idea makes it to daily practice, and that’s okay. What matters is that we learn from each experiment, and when something shows real potential, we have a clear path to operationalize it, with training, governance, and client engagement built in.
Law firms face unique responsibilities around confidentiality, privilege, and professional standards. How do you balance innovation with the need to manage ethical and regulatory risks when exploring AI?
This is one of the biggest challenges in legal AI adoption, and it’s why we’ve built a full infrastructure to support it. Like many firms, we’ve been using traditional forms of AI for years in areas like our eData practice. But generative AI is different because it creates new ethical, confidentiality, and risk considerations.
Our response has been to put governance at the center. Firm leadership established an AI operational model early, and that set the tone. We have a leadership cohort of senior partners who guide adoption, set standards, and bridge the gap between practice groups and our technology partners. They’re the ones who make sure that what we test in the AI Lab actually lines up with the realities of live client work.
We have also embedded AI Partners in every practice group. That means when we’re thinking about how AI fits into litigation, labor, or investment management, we are not doing it in the abstract. Their role is to make sure our strategy is practical, responsible, and aligned with how each practice creates client value.
All of this is supported by two major structures: our AI Lab and our AI Studio. The Lab is where we test feasibility and run controlled experiments. The Studio is where we handle governance, credentialing, and education. Together, they allow us to explore AI in a way that is innovative but still rooted in professional standards.

How do you engage both partners internally and clients externally in the innovation process, so that solutions aren’t just novel, but truly solve real problems?
For us, innovation has to be collaborative. Internally, we need engaged partners who are excited about changing how they deliver services. Externally, we need clients who are willing to test ideas with us. Without both of those elements, the chance of success drops dramatically.
We also apply discipline to how we test. One of the biggest reasons innovation projects fail is because they drag on endlessly without a clear conclusion. We have solved that by timeboxing our experiments. We typically set a three-week window. In that time, we can collect enough data and feedback to know whether the idea is worth scaling. That structure keeps us focused and prevents us from chasing novelty for novelty’s sake.
In an R&D context, not every experiment leads to adoption. How do you measure success in your innovation initiatives?
Measuring ROI on R&D is tricky for any law firm. Not every experiment will lead to a product or a long-term solution. For us, success is defined by our culture, which is rooted in client service and collaboration.
That means the purpose of our R&D is not just to build things, but to learn. In the AI Lab, our work is very hypothesis-driven: Can AI solve this problem? Is AI even the right tool here? Sometimes the answer is yes, and we build a case for scaling. Sometimes the answer is no, and that’s just as valuable.
In fact, some of the most meaningful outcomes come from sharing those “no” results with clients. Many of our clients are investing in their own legal AI programs, and when we share what worked and what didn’t in our experiments, it helps them decide where to focus their resources. So success for us is about insight, impact, and client value.
AI and data-driven tools are changing the way legal work is done. What new skill sets do you see becoming essential inside the law firm of the future, both for lawyers and allied professionals?
I think curiosity is the most important skill for the future of law. Technical expertise will always matter for the specialists who build and secure these systems, but most lawyers won’t need to know how to code. What they will need is the ability to ask new questions, experiment with new tools, and rethink how work gets done.
The lawyers and professionals who succeed in the AI-enabled law firm of the future will be the ones who approach their work with curiosity and an openness to change. They’ll be willing to test, fail, learn, and try again. They’ll also need to be fluent in responsible AI practices because those will be just as central as technical skills.
If you look ahead five years, what do you hope the impact of your AI and innovation work will be—both for your firm and for the broader legal industry?
Looking five years out, I think we should all be humble about predicting exactly what legal work will look like. But I am certain of two things: AI is real, and it will reshape the profession. The open question is how.
My hope is that the work we’re doing today, such as educating our people, building governance, and running structured experiments, positions us to adapt confidently no matter how AI evolves.
We’ve invested time in asking the hard questions: What can AI do? What can’t it do? What should it do? By doing that, we’re focusing less on chasing the newest tool of the moment and more on building a foundation that will stand the test of time.
If we get this right, the impact will be twofold. For our firm, it will mean we’re delivering services that are faster, smarter, and more client-aligned, without compromising on ethics or quality. For the broader legal industry, it will mean we’ve shown a model for how law firms can embrace AI responsibly and thoughtfully, while keeping human judgment at the center.
Don’t miss the chance to meet Josh Rosenzweig and other legal experts, hear first-hand about their experiences with AI adoption, and share your own journey.