Spellbook co-founder and CEO Scott Stevenson shares predictions for AI and legaltech in 2025, including the rise of AI colleagues, personalization, and legaltech consolidation.
One of the things I love most about business is making bets about the future. It’s fulfilling to try to see the world a little more clearly than others do. “The market” is one of the only objective scoreboards that can tell you if you’re better at predicting the future than average. And it can reward you handsomely if you’re right about something that everyone else was wrong about. This is what my personal blog Competitive Philosophy is all about.
All of this is to say, we think a lot about the future. Here are my main predictions about AI and legaltech for 2025:
The mainstream proliferation of agents is the most important thing happening in legaltech right now. There have been a lot of murmurs about agents in 2024, and even launches like our Spellbook Associate product. Now performance is crossing a threshold where agents are truly useful and reliable. We think this will drive a new paradigm where we go from having copilots to having true "artificial colleagues" in 2025. Colleagues that can break down complex projects, complete work across documents, check their own work, and correct course.
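To make the loop described above concrete, here is a minimal sketch of the plan / execute / check / correct cycle that agent frameworks run. Everything here is a hypothetical stub (the `plan`, `execute`, and `check` functions stand in for LLM calls); it illustrates the control flow, not Spellbook's implementation.

```python
# Illustrative agent loop: plan, execute, self-check, and retry on failure.
# The planner, worker, and checker are stubs standing in for LLM calls.

def plan(goal: str) -> list[str]:
    """Break a goal into steps (an LLM call in a real system)."""
    return [f"draft {goal}", f"review {goal}"]

def execute(step: str) -> str:
    """Do the work for one step (another LLM call in a real system)."""
    return f"result of {step}"

def check(result: str) -> bool:
    """Self-check: a real agent would critique its own output here."""
    return result.startswith("result of")

def run_agent(goal: str, max_retries: int = 2) -> list[str]:
    """Run every planned step, retrying any step whose output fails the check."""
    results = []
    for step in plan(goal):
        for _attempt in range(max_retries + 1):
            result = execute(step)
            if check(result):
                results.append(result)
                break  # step succeeded, move to the next one
        else:
            raise RuntimeError(f"step failed after retries: {step}")
    return results
```

The key design point is that checking and retrying happen per step, which is what separates an agent that can correct course from a copilot that produces one answer and stops.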
OpenAI’s release of the full o1 model and their $200/month pro subscription is a sign of what's to come. Rather than "buying copilots", we are going to move to a world where we are "hiring agents" to do human-like work.
The data plateau has been overblown. We’ve invented fire, but the point isn’t to make the fire bigger forever. Now we need to invent steam engines, candles, power plants and a number of other useful things that do not require a larger fire. OpenAI’s o1 model has already scored 110 on IQ tests. The biggest question is: how do we harness that intelligence and hook it up to systems that solve problems we care about?
We also don’t need AI models to ingest more and more knowledge; instead, we are teaching them how to look up the knowledge they need at compute time. This is often much more effective, and it makes citations far easier to provide.
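A toy sketch of what compute-time lookup means: rather than relying on knowledge baked into model weights, relevant passages are retrieved at query time and injected into the prompt with their sources attached, which is what enables citations. The keyword-overlap scorer below is a deliberately simple stand-in for a real vector search, and the documents are invented examples.

```python
# Minimal sketch of compute-time retrieval: look up relevant passages
# at query time and include them, with sources, in the prompt.

DOCUMENTS = [
    {"source": "msa.txt", "text": "The term of this agreement is two years."},
    {"source": "nda.txt", "text": "Confidential information must not be disclosed."},
]

def retrieve(query: str, docs: list[dict]) -> list[dict]:
    """Score documents by keyword overlap with the query (a stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(d["text"].lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, key=lambda s: -s[0]) if score > 0]

def build_prompt(query: str) -> str:
    """Assemble a prompt that pairs each retrieved passage with its source for citation."""
    hits = retrieve(query, DOCUMENTS)
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in hits)
    return f"Context:\n{context}\n\nQuestion: {query}"
```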
OpenAI co-founder Ilya Sutskever agrees that pre-training as we know it will end as data gets sparse, but that compute-time improvements will continue.
Advancements in AI are going to enable products to tune themselves to an individual or firm's preferences, automatically learning over time. The era where AI models digested everyone's information and spit out average "AI slop" will be seen as primitive. Our models will be personalized extensions of ourselves.
One of our biggest learnings at Spellbook this year is that there is no single "correct" answer in AI contract review. It's fundamentally a preference problem, which depends on each party's priorities and risk tolerance. Systems will evolve to be more like YouTube and TikTok recommendation algorithms. We have already launched "Preference Learning" in beta in our AI contract review system.
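To illustrate the idea (this is a hypothetical toy, not how Spellbook's Preference Learning works): each accepted or rejected suggestion nudges a per-issue score, so over time the system flags what this firm cares about rather than applying a one-size-fits-all default.

```python
# Toy preference learner for contract review: accept/reject feedback
# moves a per-issue score toward +1 or -1, shaping future flagging.

from collections import defaultdict

class PreferenceModel:
    def __init__(self, learning_rate: float = 0.2):
        self.scores = defaultdict(float)  # issue type -> learned preference
        self.lr = learning_rate

    def record_feedback(self, issue: str, accepted: bool) -> None:
        """Move the score toward +1 on accept, -1 on reject."""
        target = 1.0 if accepted else -1.0
        self.scores[issue] += self.lr * (target - self.scores[issue])

    def should_flag(self, issue: str, threshold: float = 0.0) -> bool:
        """Flag an issue type only if the learned preference is at or above threshold."""
        return self.scores[issue] >= threshold
```

After a reviewer rejects "auto-renewal clause" suggestions a few times, `should_flag` stops surfacing them, while unseen issue types still default to being flagged.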
In the past month, we have heard of three early-stage legal AI startups shutting down. The market has matured extremely quickly with a lot of new entrants, some of which struggled to get their foothold. It is becoming difficult for new entrants to catch up. At Spellbook we are looking at making micro-acquisitions next year.
AGI stands for “Artificial General Intelligence,” meaning AI that can master many different tasks and domains with human-like versatility. Many people see this as the holy grail of AI, but are reluctant to say we've met the bar.
I believe the seed of AGI arrived 2 years ago with ChatGPT, and that AI systems will be seen as meeting the bar for AGI in 2025. At Spellbook, we consistently see how we can produce human-like results by combining existing models with the right prompts, APIs and data structures. Planning, execution and course-correction were the last hurdles to overcome, but o1 and agent frameworks are solving these problems. Agents are the worst they’ll ever be today; we think they will improve rapidly through engineering effort. There will be new R&D breakthroughs, but we don't think they are strictly necessary to achieve AGI.
OpenAI's recently previewed o3 model has an estimated IQ of 157, and has scored 85% on the ARC-AGI benchmark.
This one’s a little more esoteric, but I think it's one of the most important things bubbling up in the background. Human-like AI will force a debate: Is AI conscious? Do we have an ethical responsibility to treat AI well? What does it mean to be conscious, anyway? Why would we be conscious and AI not be conscious?
AI will cause more people to face the reality that we’ve still not solved The Hard Problem of Consciousness, and that we may be missing something deeply fundamental in our understanding of reality.
Philosophers like Bernardo Kastrup are publishing papers which address these problems rationally and empirically, with one twist: they propose abandoning materialism altogether (the idea that reality is fundamentally made of matter) in favor of Idealism (where mind comes before matter). These ideas are garnering surprising traction in both philosophical and scientific circles as a better way to explain how consciousness fits into a world that now includes AI.
Kastrup’s Why Materialism is Baloney and The Idea of the World are the most impactful books I’ve read in the past decade.
One thing I feel like we know for sure is that AI progress will accelerate again in 2025, and it is very difficult for anyone to know exactly where that will take us. I’m looking forward to reflecting on these bets at the end of 2025.
For now, happy holidays & happy new year!