AI may close the gap on some things, but not others.
FYI, even lawyers who represent themselves generally don't do well, no matter how smart they are or how much experience they have in that area. But that's not because the judge wants them to pay into the cartel -- it's because law is hard, there are a million factors affecting performance on a particular case, and one major factor is ability to keep perspective on your case. People are uniformly terrible at this when looking at their own cases.
Cheaper lawyers can’t afford to pay for as many research librarians, paralegals, junior attorneys, writing consultants, jury consultants, etc. LLMs may level the playing field in this regard. But of course, the expensive lawyer might be able to pay for more tokens.
> ‘My niece is a lovely girl, really smart, great at school, and the other day she told me she wants to be a lawyer. And I thought, “Oh my God, my little niece wants to be a lawyer”, and I flat out told her. I said please do not destroy your life. Do not get into a lifetime of debt for a job that won’t exist in ten years. Or less.’
Bets that this won't happen in just 10 years?
DARPA Grand Challenge took 20 years, and it's still not on the interstates. Waymo is amazing, but it's still a work in progress.
I know it's coming, but solving problems that require 99.999% correctness is hard work. Mistakes multiply.
A toy can be ready tomorrow, but a precision legal tool needs to be better than humans. Not unlike driving 70 mph on the interstate highway with hands off the wheel.
Law has been a poor-earning profession (for the education needed) for a while now. There are a few expensive lawyers who make a lot, but your typical lawyer is not highly paid. This varies by state and country, of course, but in general I don't advise going into law, because for most people it isn't worth the cost of entry (though, just like art and music, there are exceptions).
A person lets their case be argued by ChatGPT, Esq. At sentencing:
"Your honor, the death penalty for a traffic ticket?"
AI will never replace humans in this capacity. Lawyers may be scummy but most people would take a slimeball lawyer over a hallucinating, sycophantic "AI" pretending to be both a human and a lawyer. This reads more like astroturfing by Sam Altman to keep the ChatGPT hype going while he cashes out.
Law has one of the strongest "unions" in the form of the Bar Association, backed by legal force. You cannot practice law without "passing the bar," as they say. The lawyers who operate the Bar can simply decide they won't be replaced, and then they won't; AI will remain a tool used by human lawyers.
>You cannot practice law without "passing the bar"
You are however entitled to represent yourself without passing the bar, and thus use the AI to help your case.
Even for the remaining lawyers, I imagine that their billable hours will crater due to competitive dynamics.
Perez Hilton tried this with some success: https://www.cjr.org/feature/perez-hilton-og-original-news-in...
Interesting. So you can represent yourself with assistance from an AI? Or maybe have someone you hired to use an AI present as an amicus curiae?
You can't create derivative works of copyrighted material either, yet here we are. I'm sure they'll find a creative loophole.
I agree, though I suspect we'll see something similar to what has happened with Doctors, where companies essentially rent the credentials.
Given that they have the power of life and death in their hands, having them licensed and accountable is peace of mind.
Sure, unions are too powerful in several industries: police, medicine, and law. But not having some association holding these people accountable is a bad idea.
Most of these industry guilds tend to be capricious but forgiving, determined to protect members. Almost every (North American) union puts its members' well-being ahead of any possible accountability, which makes sense, but means they cannot be trusted to self-regulate.
Just like AI will kill all the software developers...
Obviously AI will change the legal industry. But a lawyer will still have an advantage because they know what questions to ask and can provide the AI with the relevant context.
And they know when it’s right and wrong.
Recently I asked Claude if I should convert my LLC to an S Corp for tax savings, and it sang the praises of how much I’d save if I did this.
When I asked my accountant, he pointed out that since I live in NYC, the S corp would be taxed in such a way that it would completely wipe out the tax advantage I’d get elsewhere, and I’d likely end up paying more if I did this.
Did you tell the LLM you were in New York though?
It's a funny take, because this Reddit thread seems to suggest the opposite: pro se litigants (people representing themselves) are using LLMs to create more lawsuits, resulting in more work for lawyers.
https://www.reddit.com/r/Lawyertalk/comments/1n9cwfv/pro_se_...
It has indeed killed several careers at this point, but not because it was better than a human, but because a lazy human used it and didn't check their work.
This article talks about martinis about as much as it talks about lawyers' careers being threatened by AI. It provides no real justification for its claims beyond anecdotal opinions. The article's only value is that it prompts a discussion in the comments section that lends the claims some actual credence.
Like so many of these articles about how "AI will/won't do X" it just feels like everyone is speculating.
The only thing I feel confident about is that people are bad at predicting the future. Why can't we just wait and see without all this overconfident guessing?
Can you imagine a world without lawyers?
https://www.youtube.com/watch?v=uG3uea-Hvy4
One could only hope.
I read somewhere that it's not going to happen because the AI can't play gold with judges, senators, and congressmen over the weekend.
You're right. Gold and golf are at the center of the legal world.
Legal representation is the sibling of security.
Security itself is a journey, not a destination. To say that you are secure is to say that you have been so clever that nobody in all of history will ever again be as clever as you just were, even knowing that they can study your cleverness.
Even a superintelligent AI might not be able to replace lawyerhood unless it is also dynamically going out into the world: investigating new legal theory, researching old legal theory, socializing with the powers that be to ensure they accept its approach, and carefully curating clients who can take advantage of the results.
I very much doubt it; what you’ll see is AI killing off paralegal work.
In most jurisdictions, legal advice is a regulated and restricted activity. Qualified lawyers today get themselves into trouble, even without AI, by advising on areas they have no right to practice in.
Any pure information-processing profession will be affected - programming, legal, teaching, financial, medical diagnosis, research, writing, videos, movies, music, art, design, architecture, business consulting, marketing, gaming, dating, chat, voice, customer support, real-time monitoring, ...
Any physical-world interaction might survive for more time - cooking, goods delivery, transport, construction, medical testing, field work, lab work, classroom work, handyman jobs, factory work, farming, mining, fishing, travel & tourism, retail shops, offices, gym, sports, fashion, hardware, ...
> I mention the problem of ‘hallucinations’ – when an AI model presents false or fabricated information as factual – and the need for a human face in court. The Sandie Peggie judgment allegedly contained AI-made errors. He waves this all away. ‘Temporary bugs and sentimental preferences. The economic argument is overwhelming.’
As usual with "AI replacing humans", the key thing to consider here is accountability.
I want to get my legal advice from someone who is accountable for that advice, and is willing to stake their professional reputation on that advice being correct.
An LLM can never be accountable.
I don't want an LLM for a lawyer. I want a lawyer armed with LLMs, who's more effective than the previous generations of lawyers.
(I'd also like them to be cheaper because they take less time to solve my problems, but I would hope that means they can take on more clients and maintain a healthy income that way even as each client takes less time.)
The closing paragraph of that story:
> ‘My niece is a lovely girl, really smart, great at school, and the other day she told me she wants to be a lawyer. And I thought, “Oh my God, my little niece wants to be a lawyer”, and I flat out told her. I said please do not destroy your life. Do not get into a lifetime of debt for a job that won’t exist in ten years. Or less.’
Uh oh. Here we go again, with the "don't bother studying computer science, it's 2002, all the jobs will be outsourced to cheaper countries in the next few years!". So glad I didn't listen to that advice back then!
> I want a lawyer armed with LLMs, who's more effective than the previous generations of lawyers.
From what we've seen thus far, there's a non-zero chance the lawyer armed with LLMs will submit a brief generated by said LLM without reviewing it, which makes the judge none too happy.
Look at how people handle bringing their cell phones with them while driving. Some people won't use it at all. Some will play music (unrelated to driving but overall neutral as long as they aren't fiddling with it). Some will use it for GPS driving assistance (net positive). But, many will irresponsibly use it for texting/talking while driving, which is at least as bad as being inebriated and can lead to harming themselves and others.
Don't expect people to be any more or less responsible with LLMs.
I want my lawyer armed with LLMs to not do that.
There are some promising AI-driven tools these days that use search against archives of cases to help check that citations aren't garbage. I'm hoping lawyers start using them to help pick apart each other's laziness.
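A toy version of that kind of citation check is just pattern-matching the cited reporters and looking each one up in a trusted archive. This sketch is purely illustrative: the two-entry `KNOWN_CITATIONS` set stands in for a real case-law database such as CourtListener.

```python
import re

# Stand-in for a real case-law archive; actual tools would query a database.
KNOWN_CITATIONS = {
    "347 U.S. 483",  # Brown v. Board of Education
    "410 U.S. 113",  # Roe v. Wade
}

# Matches U.S. Reports citations like "347 U.S. 483".
CITE_PATTERN = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")

def find_suspect_citations(brief_text):
    """Return every citation in the brief that the archive can't confirm."""
    return [c for c in CITE_PATTERN.findall(brief_text)
            if c not in KNOWN_CITATIONS]

brief = "As held in 347 U.S. 483, and reaffirmed in 999 U.S. 999, ..."
suspect = find_suspect_citations(brief)  # ["999 U.S. 999"]
```

Of course, a real checker would also have to verify that the cited case actually says what the brief claims, which is the harder half of the problem.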
> I want my lawyer armed with LLMs to not do that.
The only way to guarantee that is to have a lawyer not armed with LLMs.
We've seen dozens of examples already of lawyers doing exactly that. (Some of them have then doubled down in court, to their eventual detriment.)
If you're making a habit of using LLMs to draft briefs for you, how long before you just forget to check the cited cases to replace the hallucinated ones with real ones? Or decide not to check, because surely they'll be fine this time...only they're not?
When I was thinking about law school the big panic was about e-discovery tools: we wouldn’t need many lawyers anymore since we didn’t need to rifle through boxes of physical paper anymore! What happened instead was that, with the burden of collecting documents significantly reduced, we were able to start looking for needles in much bigger haystacks.
> An LLM can never be accountable.
That's where AI businesses will make bank.
When they actually underwrite the risk of their models and sell that to clients - that's going to command an extremely high price premium.
The models aren't there yet, though.
I think theoretically, it's okay for LLMs to write legal briefs, to replace attorneys. Write the best argument you can with whatever tools you want.
What worries me is the idea of them replacing JUDGES.
Actually, that’s the high-value model. Imagine you have a bunch of LLMs tuned to different sensibilities that match great jurists, Oliver Wendell Holmes Jr., Learned Hand, maybe Aristotle to mix things up, maybe a real jurist. And your attorney tunes their arguments to be persuasive to whatever model they believe is dominant.
It’s a short leap to comparing model scores to determine a quick and dirty settlement “winner” which really isn’t that far from manual processes.
Lawyering will look different, but there definitely will be lawyers. Judging on the other hand…. Judging is the one I wonder about.
I imagine it would involve 1000s of LLMs outputting a judgement and then if there were significant disparities it would get flagged in some manner.
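As a sketch, the flagging step could be as simple as measuring agreement across the ensemble's verdicts; the function name and the 90% threshold here are invented for illustration.

```python
from collections import Counter

def flag_disparity(judgements, agreement_threshold=0.9):
    """Flag a case for human review when the ensemble's verdicts
    fall below the required level of agreement."""
    counts = Counter(judgements)
    verdict, majority = counts.most_common(1)[0]
    agreement = majority / len(judgements)
    return {
        "verdict": verdict,
        "agreement": agreement,
        # Significant disparity: the models disagree too much.
        "needs_human_review": agreement < agreement_threshold,
    }

# 1000 simulated model outputs: 940 agree, 60 dissent -> 94% agreement, no flag.
result = flag_disparity(["liable"] * 940 + ["not liable"] * 60)
```

High agreement among models trained on much the same data is still no guarantee of correctness, though.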
That's actually the plot of Minority Report, a lot of people think it is about "what if computers could predict crime" but it is really about "What do you do when your 'omniscient' machines disagree with each other".
Either way the idea of getting sent to prison and having 0 human interaction is terrifying.
That is something I hadn’t even considered. That is super scary. Part of me thinks it’s inevitable. People famously lack any sort of empathy for the falsely accused until it happens to them, so why wouldn’t they vote for a “save the children: use AI judges!” bill in 10 years?
"AI" started taking judges' jobs at least ten years ago. See tools like COMPAS.
Read Weapons of Math Destruction.
Isn't there a Chris Pratt movie about this coming out in January?
Not sure it will replace them, but a tool that allows folks to have a better understanding of the legal system and how to navigate it will certainly impact the existing power systems (of which lawyers are a part).
That sounds just like the argument used to replace programmers, and we see what kind of hell that's causing.
Which is not to say you're wrong, but maybe we should look at ways of making the transition better, easier and less stressful. Perhaps actually giving people a choice, rather than having technocrats ram it down our throats.
I think that choice is already happening in a way that is as natural as we're going to get. I found the recent legal business with Perez Hilton kind of interesting. Take this passage from this story https://www.cjr.org/feature/perez-hilton-og-original-news-in...:
> Still, there was a problem. Hilton’s insurance would not foot the bill for a lawyer to defend against a subpoena. He would have to cover his legal costs out of pocket. Instead of finding an attorney, he did two things legal experts always advise against: he decided to represent himself and to use ChatGPT to help draft his legal briefs. At first, this did not go smoothly. An early filing written by ChatGPT, which Hilton nicknamed Dad, invented several legal references. “There’s this phenomenon called ghost law,” Hilton said. “They make up citations, they make up anything.” After a set of embarrassing errors was called out on social media, Hilton started double- and triple-checking every citation, and asked ChatGPT to review its own output. The process went more smoothly from there—so much so that Hilton came to see AI as a great legal leveler. “Now that I know that I can so effectively use ChatGPT, I’m not going to be paying a lawyer unless it’s absolutely necessary,” he told me.
The (imperfect) tools gave him the ability to keep his case alive, and he was eventually taken up by the ACLU. While Hilton is far from a sympathetic underdog, the levelling effect is pretty compelling.
"The justice system works swiftly in the future now that they've abolished all lawyers."
-Doc Brown
Someone publishes this story every 3 months. One more gullible senior attorney getting on the hype train is not news. LLMs are very, very, very good at making words look pretty, which has always been a cherished talent that lawyers liked to think only they possessed. But even before LLMs, you wouldn’t pay a lawyer much if all you needed them to do was to write a brief with no investigation, discovery, or motion practice involved. Grok’s output looks like great lawyering because it’s the product of great lawyering - great lawyering which enabled this source to spoon-feed the facts and the law to Grok, which makes this more like a law school writing assignment than actual legal work.
I agree, but we aren't close to having AI, even in the slightest. At best we've built a small portion of what constitutes an AI.
We're at a funny stage where some careers are becoming "post-LLM". For example, SWE is either rapidly approaching or surpassing the point where LLMs can do most of what we traditionally viewed as day-to-day SWE work. However, this doesn't translate into "no more SWEs." I have no doubt that what it means to be a lawyer day to day will shift with LLM advancements.
> AI will kill all the lawyers
That will be the biggest criminal case ever processed without lawyers!
Law is very imprecise and subjective; I really doubt that.
Of course the lawyers go for Grok
I've used Grok for legal work (falsely accused, and also in a divorce) and it is very good. I've used ChatGPT also and it is not bad, but not as good as Grok. This is just my own personal experience but I suspect others who have decided to try Grok end up sticking with it.
I really doubt this for one reason: lawyers make the laws. We have already seen massive push-back in legal circles, where some lawyers were punished for using AI.
Once it looks like their profession is threatened, you will see many laws against AI.
I thought that decades ago people found a way to avoid lawyers in a specific instance, and I kind of remember that doing so was made against the law. Not sure if I am remembering right, but I could swear that happened.
Edit: Reading the comments, I think it was the bar exam. IIRC, there was a time you could take it without a degree; that was changed to force people to go to college and get a degree.
No, AI will not kill all the lawyers. AI is autocomplete slop, it can't think logically....
good
Allow me to explain a contrarian position. Judges favor individuals that use an expensive lawyer for representation, even if there isn't much of a legal argument to be made. Judges give such individuals a far better deal. The reason for this is that hiring an expensive lawyer shows that you've paid homage to the legal profession with your wallet, that you support the systemic judicial-attorney-penalty complex. It grants you favors.
If now you were to come forward with an AI lawyer, in practice it'll be almost as if you didn't use a lawyer at all, as if you were representing yourself, which will get you the worst possible deal, if any. Things shouldn't be this way at all, but the system is crooked, and so they are this way.
As such, I think some lawyers are going away, but not all. The ones who stand in court will have business.
> Allow me to explain a contrarian position. Judges favor individuals that use an expensive lawyer for representation, even if there isn't much of a legal argument to be made. Judges give such individuals a far better deal. The reason for this is that hiring an expensive lawyer shows that you've paid homage to the legal profession with your wallet, that you support the systemic judicial-attorney-penalty complex. It grants you favors.
This isn't why expensive lawyers tend to get better results in court, or why those who represent themselves often end up screwed. I'm against the legal monopoly system, but this is out-of-touch and silly.
> but this is out-of-touch and silly.
Expensive lawyers can get better deals in court even for run-of-the-mill cases. Why is this? Are cheaper lawyers so dumb that they can't even handle common cases?
Expensive lawyers have better relationships with opposing parties, have more and better legal research both already on hand and available to be done, with more and smarter people doing research, with more and more experienced people available to consult, and may hire outside consultants when the situation calls for it.
They can also better afford to play dirty in various ways, from burying you in discovery documents to dragging things out with various motions.
And in general, yes — they often also have at least slightly smarter lawyers (and more eyes on the case). That doesn't mean cheaper lawyers are dumb, and there are smart lawyers out there who aren't incredibly expensive, but the average intelligence goes up noticeably as you interact with more expensive firms (which tracks, because they hired people with the top performance in and possibly after law school).
AI may close the gap on some things, but not others.
FYI, even lawyers who represent themselves generally don't do well, no matter how smart they are or how much experience they have in that area. But that's not because the judge wants them to pay into the cartel -- it's because law is hard, there are a million factors affecting performance on a particular case, and one major factor is ability to keep perspective on your case. People are uniformly terrible at this when looking at their own cases.
Cheaper lawyers can’t afford to pay for as many research librarians, paralegals, junior attorneys, writing consultants, jury consultants, etc. LLMs may level the playing field in this regard. But of course, the expensive lawyer might be able to pay for more tokens.
I would argue the US Supreme Court is already doing that.
> ‘My niece is a lovely girl, really smart, great at school, and the other day she told me she wants to be a lawyer. And I thought, “Oh my God, my little niece wants to be a lawyer”, and I flat out told her. I said please do not destroy your life. Do not get into a lifetime of debt for a job that won’t exist in ten years. Or less.’
Bets that this won't happen in just 10 years?
DARPA Grand Challenge took 20 years, and it's still not on the interstates. Waymo is amazing, but it's still a work in progress.
I know it's coming, but solving problems that require 99.999% correctness is hard work. Mistakes multiply.
A toy can be ready tomorrow, but a precision legal tool needs to be better than humans. Not unlike driving 70 mph on the interstate highway with hands off the wheel.
Law has been a poorly paying profession (for the education needed) for a while now. There are a few expensive lawyers who make a lot, but your typical lawyer is not highly paid. This varies by state/country of course, but in general I don't advise going into law because it isn't worth the cost of entry for most. (Though, just like art and music, there are some exceptions.)
A person lets their case be argued by ChatGPT Esq. At sentencing:
"Your honor, the death penalty for a traffic ticket?"
AI will never replace humans in this capacity. Lawyers may be scummy but most people would take a slimeball lawyer over a hallucinating, sycophantic "AI" pretending to be both a human and a lawyer. This reads more like astroturfing by Sam Altman to keep the ChatGPT hype going while he cashes out.