r/BetterOffline 9d ago

Founder of Google's Generative AI Team Says Don't Even Bother Getting a Law or Medical Degree, Because AI's Going to Destroy Both Those Careers Before You Can Even Graduate

https://futurism.com/former-google-ai-exec-law-medicine

More "wishful" thinking from the business idiots. Can't wait to see how they intend to have AI replace trial judges and surgeons.

346 Upvotes

184 comments

365

u/ziddyzoo 9d ago

I just looked up the guy. His name is Jad Tarifi.

He has a computer science PhD.

All he’s done is work for tech companies: from Amazon intern to Google dood to AI princeling.

He does not have a medical degree and has never worked one day in medicine.

He does not have a law degree and has never worked one day in the law.

He does not know one single picofuck about what he is talking about.

127

u/HaggisPope 9d ago

If anything it feels like comp sci is less safe than medicine 

48

u/dingo_khan 9d ago

Even that is probably pretty safe where real work is being done. These are a long way from being ready for prime time. Idiots will try to force it and ruin a lot of things, but it's not ready.

22

u/[deleted] 9d ago

I don’t think they’ll wait until AI actually “works” to start trying to force AI into these positions on a massive scale. They’ll wait until it’s close enough that the cost of fixing the many fuckups is less than the cost of labor. But it still seems far away from clearing even that bar.

10

u/Malusorum 9d ago

There's no way that AI can even approach 'close enough'.

Working in those two fields requires sapience, and sapience would essentially make them people, so it would be slavery.

What people call AI is really just LI (Limited Intelligence) with advanced learning algorithms.

AI has neither sentience nor sapience. Even animals have sentience. AI is dumber than any animal on the planet; it just has access to more knowledge and can communicate in human language.

2

u/[deleted] 9d ago

I agree with you, I just don’t think the people in charge of large swaths of these industries will feel this way

2

u/Malusorum 9d ago

They hear AI and imagine that it must be something like in fiction.

2

u/[deleted] 9d ago

Right, I think that’s true. There’s an implicit trust in technology to be infallible and perfect.

2

u/Malusorum 8d ago

Only some technology. Specifically AI, because AI in fiction is always gassed up to be something it could never realistically be.

In most other cases, people are distrustful of new technology. When they hear "AI", that goes out the window, for both those in favour and those against, and they attribute traits, or future traits, to it that'll never be.

12

u/dingo_khan 9d ago

I am not even that optimistic. I used to have a role reworking code that was made "cheaply" offshore so that it actually worked. The aggregate cost was higher, but it looked about right to whoever was paying the bills.

8

u/[deleted] 9d ago

That’s fair, I’d amend my previous statement to say “they’ll wait until they think the cost of fixing the fuckups will be less than the cost of labor” or something like that

1

u/dingo_khan 9d ago

Totally makes sense.

1

u/psynautic 9d ago

if things go the way they've been going, and it turns out that these LLMs people are using with claude code are actually priced near enough to their profitability (a lot of ifs), I would say it'll be a lot harder in the next decade to get a job out of college as a CS graduate.

6

u/CaptainXakari 9d ago

For pure programming, maybe. There will still be a demand for people who can read code because plug-n-play coding has a lot of potential issues. Other aspects of CS degrees will be less affected. Networking, Cybersecurity, etc will still need people that can utilize the tools. AI can’t physically hot swap network devices.

2

u/dingo_khan 9d ago

AI systems will probably have common blindspots when it comes to design and security as well. Humans in the loop who can actually come up with novel attacks and novel solutions are probably safe for quite a while.

2

u/dingo_khan 9d ago

Depends on what is needed. Prevalent high-end libraries and off-the-shelf open source solutions have not slowed things down. I do think pure coding jobs will be harder to get and pay less, assuming those ifs. CS graduates who focus on the computer science side more than the coding side will probably do okay though.

1

u/psynautic 9d ago

i would argue it did slow things down for jr hiring. it's been really hard for jrs to get into the business for a while now, since the covid hiring explosion turned into a layoff machine.

2

u/dingo_khan 9d ago

That is the big one. The over-hiring from covid turned into a layoff machine... Then that machine pointed at senior devs as a forced renegotiation. No surprise that companies all seem to fire at the same time: it puts a lot of people out of work and makes many willing to accept less.

2

u/Malusorum 9d ago

That depends on the job. If it requires sapience on any level, then there's no chance AI will ever take the job. Dumbfuck employers will think they can do it and replace human workers, until they figure out that doing so is way too expensive.

0

u/psynautic 9d ago

i expect jr jobs will be replaced, not by ai directly (like devin) but because of expectations of higher output put on me and other sr devs

1

u/Malusorum 9d ago

Wtf is a jr job?

1

u/psynautic 9d ago

jr developer

1

u/Malusorum 9d ago

Ah, and I doubt it. AI can only copy what it knows, it can never invent anything. Would be a great tool for assistance though.

0

u/psynautic 9d ago

my point is that less JR devs will be hired, because there will be more burden and expectation on sr devs because they are expected to utilize LLM tools to be more efficient.


15

u/gamanedo 9d ago

CS is by far the safest. If you can replace those guys, you can replace literally anyone. “Computer, ugh… code me up a heart surgery robot.”

10

u/Dear_Measurement_406 9d ago

That’s what I’ve been saying. I did comp sci and am in non-AI related tech and people always be asking if I’m worried about AI. And I’m like bitchhhh if it’s somehow able to come for my job, I guarantee it’s already taken yours months prior to that and to clarify, I don’t think it’s coming for anyone’s job.

2

u/Signal-Average-1294 8d ago

yeah this is my position when talking about AI. Computer science is about using computers to solve problems and automate things (oversimplification, yes, i know). if computer science becomes automated, that means literally any problem we have can be solved lol.

3

u/Resident_Citron_6905 9d ago

We shouldn’t outsource responsibility to generative AI in any of these fields. Humans are more resilient to imperfect diagnosis and treatment than computers are at running imperfect code. A datastore will not attempt to autonomously self-heal after it has been corrupted by ai generated code. Proving retroactively that a diagnosis was imperfect and thus the cause of a death can also be difficult.

1

u/TheMonitor58 9d ago

The unemployment numbers agree with you.

-1

u/[deleted] 9d ago

[deleted]

4

u/Gras_Am_Wegesrand 9d ago

What... I work in healthcare and there's no actual AI in sight in most specialities, apart from interns who try to bullshit their way through stuff with ChatGPT and are failing miserably.

Medicine is brutal because the healthcare system is absolute bullshit. Not because of "AI".

40

u/RelevantTangelo8857 9d ago

I love this take, because that was my thought. I was wondering when we gave these tech/C-level folks this kind of omniscience. They're good at the things they do (sometimes), but they don't know shit about other things.

Tycoons of old were single-minded. They wanted to control an industry, then whatever municipality surrounded it.

Tycoons of today are overambitious and damned near supervillains. They want world domination, they want to be in your head, they want to think for you, they want to mine you and through you, they want to control the system.

It's too damned much, lol. Just build the damned AI railroads, bust some trusts and fuck off.
I miss the elites being in the shadows. Them opening their mouths removed all doubt that they are fools.

20

u/dingo_khan 9d ago

We allowed a mythology to spring up around CS and, worse, IT. Now, we live in the world where that mythology has primacy.

3

u/Zealousideal_Job2900 9d ago

It’s projection. The current LLMs are perfectly capable of bullshitting people, producing slide decks, etc. So these guys don’t see why what applies to them wouldn’t apply to other pretty well paid jobs.

2

u/Character-Pattern505 9d ago

People with any amount of fame get a long leash.

Why do we care at all about what a famous actor says about anything?

https://www.youtube.com/watch?v=Mo-ddYhXAZc

15

u/Brilliant-Aardvark45 9d ago

Picofuck is so funny to me, holy shit

3

u/wildmountaingote 9d ago

Now that picofucks are a thing, let's begin research to see if we can scale all the way down to zeptofucks. There's a delightful euphony in the <z> and <p> so close together.

3

u/ziddyzoo 9d ago

honestly as the word left my fingers I knew it was a banger

6

u/TheMonitor58 9d ago

Imagine going up to anyone in healthcare and saying AI is going to replace them with a serious face.

It’s literally like the funniest thing I’ve ever heard.

People’s health is so much more than computational data. Even if you can get the perfect differential diagnosis using AI, you still need:

  • the data to be accurate, which you cannot isolate for in AI
  • the protocols of the institution to be understood
  • the feasibility of a plan to be implemented
  • the patient’s feelings towards a plan to be considered
  • the context of the patient’s situation to be understood
  • the perspectives of any other healthcare teams to be considered
  • the goals of the care you’re providing to be understood

And you need to do all of this, while also managing dozens of other completely unique and individual situations all the time. There’s honestly even so much more that you can’t even fit onto a list, like I didn’t even mention picking the medication for treatment.

Anyone who is out there telling people “AI is replacing your job next year!!” is this year’s rendition of the influencer telling you to buy their class on how to trade stocks.

Just another snake-oil salesperson.

3

u/a-amanitin 9d ago

I just imagine a family member frantically arguing with ChatGPT about RDW or chloride levels while their family member is crashing right next to them in some AI-controlled ICU somewhere

2

u/420catloveredm 9d ago

As someone who works in mental health care… it’s the most insane statement.

1

u/Weird-Count3918 9d ago

mentally insane statement?

6

u/Synensys 9d ago

You know what seems like it's about to be taken over by AI: management. LLMs are shitty at a lot of stuff, but they are great at creating fluff bullshit

5

u/indie_rachael 9d ago

This is the same thing I've said about my field (finance/accounting). It's wild how they keep pitching this shit, and I have to keep explaining that as long as AI is making a statistical prediction of the likely answer instead of, oh I don't know, CALCULATING THE CORRECT ANSWER, we'll never get replaced or even find much value in it.

We need 100% accuracy 100% of the time (inb4 someone says "wHat ABoUt mAtERiaLitY", because that's not a gamble you should be willing to make on something that does not calculate the underlying math involved, it just guesses).

2

u/Wiyry 8d ago

I expect a hammer to hit a nail 100% of the time I swing it.

If you gave me a new hammer that randomly misses the nail 30% of the time: I’d ask what the fuck is the point of your tool?

6

u/Technical-Pitch2300 8d ago

As a lawyer, I can only pray I have the opportunity to sue this dude at some point. I may not be the best in the world, but I know for a fact I can beat a pro se litigant with a shitty AI paralegal.

3

u/Tartan_Acorn 9d ago

You could have stopped at jad tarifi lmfao

2

u/electricmehicle 8d ago

“Picofuck”

That is a perfect word

2

u/ziddyzoo 8d ago

I am a river to my people

2

u/desiInMurica 8d ago

Sounds about AIBro

2

u/gillyrosh 7d ago

Engineer's Disease strikes again.

1

u/eatelon 7d ago

Easily the most arrogant profession in the world.

1

u/Longjumping_Fly_2283 9d ago

He's a pretender salesperson, a Demis Hassabis ultra-lite. Demis pursued medical (neuroscience) education instead of rejecting it. I'm a 'shill' of his because I am a strong chess player and was awed by their game-playing AI work from a decade ago.

1

u/wise_____poet 8d ago

Nevermind any of the "creative" fields. He should go make a box office release with his ai, I'm sure it will be a hit

1

u/Yasirbare 8d ago

He could make a lot of people headlessly follow the advice, and then a lot of doctors and lawyers will be missing when the time comes. A perfect storm when the totally non-biased AI is going to misdiagnose you and handle your court case.

-1

u/getoutofmybus 8d ago

Why would he have a law degree or a medical degree bro he has a PhD in CS and works in ML

2

u/ziddyzoo 8d ago

The clue is in the final sentence of my comment

81

u/bobojoe 9d ago

As a lawyer I can tell you I’m sure that AI is going to disrupt the practice, but I also think these tech people aren’t really very aware of what our jobs truly entail when they make these predictions.

38

u/Fun_Volume2150 9d ago

It’s gotten lazy lawyers sanctioned for trusting it to write briefs, so I’d say it’s already disrupting law practices.

22

u/naphomci 9d ago

As a lawyer, I don't think it's really going to do anything that impactful. Maybe with internal memos, but those often seemed like busy work to me anyway (I am so happy to be on my own and avoid stuff like that)

8

u/Panama_Scoot 9d ago

I also like the implication of LLMs being used for internal memos... and then having those internal memos fed into LLMs to summarize them for folks too lazy to read the AI slop... so eventually LLMs are just talking to LLMs.

9

u/bobojoe 9d ago

I’ve noticed a lot more coherent briefs by pro se litigants. The biggest change, which is welcome, is the AI search through Westlaw Precision. My research time has honestly been cut by 90%, and clients hate paying for research, so it’s a win-win.

2

u/naphomci 9d ago

Hm, I could see it helping pro se litigants. I'm not writing enough briefs to pay for a full legal research subscription (solo practice), so I haven't used it. How is it different from the previous search options?

6

u/big_data_mike 9d ago

Yeah that’s true for any job. And doctors and lawyers handle life altering decisions/events. No way we’re ever going to hand that over to AI.

-3

u/Lain_Staley 9d ago

Once an AI (robot) proves 99.9% effective in a procedure vs. a human who is 95% effective, how quickly does it become unethical not to offer the AI?

4

u/big_data_mike 9d ago

A really, really, long time. Getting 80% there is relatively easy. Getting to 90% takes a lot more time and effort. Getting to 95% even longer than that. You have to start programming in a ton of edge cases.

Also doctors and lawyers use their emotional intelligence a lot. They can sense when someone might be lying, angry, depressed, etc. They don’t follow a mathematical formula to determine how to treat a patient or help a client. They draw on experience that you can’t translate into machine code.

-2

u/Lain_Staley 8d ago

Also doctors and lawyers use their emotional intelligence a lot.

Not only was AI more accurate at diagnosing than doctors, AI was ALSO found to be more empathetic than human doctors: https://www.health.harvard.edu/blog/can-ai-answer-medical-questions-better-than-your-doctor-202403273028

3

u/big_data_mike 8d ago

Diagnosing is only a small part of what doctors do

0

u/Lain_Staley 8d ago

That's good, that means they'll have one less thing to do in the next couple years. They seem busy

1

u/No_Maybe_312 2d ago

It's more "empathetic" because it's also incredibly sycophantic.

1

u/Lain_Staley 2d ago

At the end of the day, will patients feel more comfortable opening up about their condition to something sycophantic or something curt?

5

u/RunnerBakerDesigner 9d ago

This is the biggest fallacy with the claims. The people making them are so far removed from the nuances that they'll say anything, and these predictions are made for idiotic investors.

3

u/SongofIceandWhisky 9d ago

I don’t work in e-discovery but I’m sure it will have an impact there. We see our legal interns are overly (in my opinion) dependent on AI to draft memos but at least they check the citations. I’d guess ai would be helpful in comparing terms between documents, but the real work of contracts (and probably most legal jobs) is in negotiating. Computers cannot and never will human.

3

u/Particular-One-4810 9d ago

As well, there is no industry more resistant to change than law. There are still courts that handle documents by fax.

2

u/420catloveredm 9d ago

When I see AI in a courtroom I’ll believe it.

1

u/fllr 7d ago

Ehr… i have a different take as someone in CS. It will disrupt jobs… eventually…! This new set of AI is just so far from that reality, though, that I’m not sure it will happen within our lifetimes. I know one thing, though: the day it can indeed replace doctors and lawyers is the day everyone loses their job at the same time.

135

u/TheShipEliza 9d ago

keep getting law degrees. someone needs to send these nerds to jail.

21

u/MrOphicer 9d ago edited 9d ago

You might be onto something pretty deep... if they own the AI that's arguing the law, there's a clear conflict of interest.

4

u/maverick-nightsabre 9d ago

laws? Where we're going we don't need laws

3

u/TheShipEliza 9d ago

i mean we absolutely do.

3

u/Sunshine3432 9d ago

"AI will tell what is legal and what is not, there will be no need to read the law anymore, just give us a few more datacenters, trust the process"

2

u/TheShipEliza 9d ago

you have to login at matlock dot net

1

u/ArmedAwareness 9d ago

Based on

1

u/TheShipEliza 9d ago

precedent.

1

u/amethystresist 9d ago

Literally lmao. Humans still exist so there's going to be Human vs AI owner lawsuits 

1

u/TheShipEliza 9d ago

goddamn we are in for the most boring new john henry shit.

43

u/-gawdawful- 9d ago

Let’s discourage young people from advancing themselves, and then completely flop on their supposed replacements! Surely this isn’t a recipe for disaster.

3

u/thevoiceofchaos 8d ago

At least they won't have student loans, I guess.

41

u/pizzapromise 9d ago

Looking at the world today and saying out loud “there should be fewer doctors” shows the complete and total disconnect these people have from the average person.

11

u/DragonHalfFreelance 9d ago

Seriously, we are already seeing many places on the verge of a medical system collapse because of supply chain issues and a lack of doctors, and whoever is on call is burnt the hell out. We need more doctors and nurses and more support for them, not less. Do these tech bros assume they will be super healthy and never need medical attention ever? I'm already worried about the aging Boomers and their effect on the already strained medical systems, let alone whether there will still be one when every other generation ages out.

35

u/Bortcorns4Jeezus 9d ago

Is Google planning to replace its legal team with AI? 

34

u/LordBarglebroth 9d ago

They should do it first, as a test run.

11

u/Candid_Meaning4501 9d ago

Yes they should put Gemini in charge of their antitrust cases as a show of faith in the technology

4

u/AceJZ 8d ago

This guy isn't at Google, he left in 2021 to do an AI startup.  So more self-interested pumping.

Even assuming we had AGI tomorrow, it couldn't physically perform surgery on you or represent you in court.

25

u/Velocity-5348 9d ago

In related news, a ten-year-old is certain they can defeat, like, fifty guys with their ninja skills.

6

u/ynu1yh24z219yq5 9d ago

Yeah my sons flag football team also might be turning pro this year according to them

3

u/JAlfredJR 9d ago

Yeah, well my two year old is, in fact, a real puppy, according to her.

4

u/Mightyshawarma 9d ago

And Ben Affleck believes he could have stopped 9/11 from happening if he was on the plane!

52

u/OhNoughNaughtMe 9d ago

Yes, it can’t spell Tennessee but it will be able to remove a burst appendix no problem.

19

u/elowoneill 9d ago

they do shit like this then wonder why young people don't care about anything anymore

9

u/Skyguy827 9d ago

Young people are being told not to bother pursuing anything and, at the same time, that Gen Z is lazy and doesn't want to work

18

u/Knitmeapie 9d ago

I find it telling that the people who say AI is going to take over certain jobs are never knowledgeable about the jobs they claim will be taken over. Anyone who works in a field that requires some form of education and esoteric knowledge can tell you that the general public knows nothing about what they do. From the outside, a lot of careers look much simpler than they are. Saying that AI is going to take over jobs that require years of study to practice is incredibly ignorant.

16

u/vectormedic42069 9d ago

This is so negligent that it feels like this should be something he could be fined over. Like libel but for statements that are actually just damaging to society as a whole.

We have a horrendous shortage of healthcare providers and physicians, which is only getting worse due to the massive expense of education, predatory fees, predatory employers, overwork, burnout, etc., and having some fucking business idiot actively dissuade anyone just to market his company's shitty chatbot is probably going to indirectly cause deaths in the future.

8

u/JAlfredJR 9d ago

Yep. My wife is an RN, so I know it all too well. The amount of management bloat at any given hospital system is insane. The talent (doctors, nurses, even MAs) props up these dopes who play office all day, all while giving themselves bigger bonuses every year.

It's absurd.

14

u/Alternative-End-5079 9d ago

Im sure the AI lawyers will argue in court before an AI judge. /s

15

u/Sidonicus 9d ago

Why are pro-AI people so scared of people learning skills?

Oh right: making people stupid, isolated, and mortally dependent on technology is the point.

A dictator can more easily shape his ideal future with a stupid population than an educated one. 

7

u/Weigard 9d ago

Because they're insecure about their own lack of skills.

3

u/CapybaraSupremacist 9d ago

Even then what would be their goal? Like if everyone stopped learning skills then there would be no future workforce and the economy would collapse. Not to mention the population’s intelligence would be severely affected as well. The apathy from them is appalling…

6

u/JAlfredJR 9d ago

It's all rooted in the overwhelmingly apathetic disposition of a small group of f'ing weirdos who are behind the whole sci-fi "AI"/LLM thing. They're literally anti-human, anti-social weirdos who need to fuck off.

I actually like humans (well, some of them) and my family.

10

u/pizzapromise 9d ago

So the AI that can’t get my order right at the Wendy’s drive thru is going to tell me I have cancer one day?

7

u/ThoughtsonYaoi 9d ago

He was actually talking about PhDs, not practice.

But his argument was: these fields are built on memorization. AI will change what you have to memorize, and it will take over the need to memorize in general.

Both arguments are built on a faulty premise (that a PhD is built on memorization). And he doesn't actually get very specific on how.

13

u/LordBarglebroth 9d ago

PhDs require you to do research on something and expand the limits of human knowledge ever so slightly.

He has a PhD. He should know this.

He is being deliberately obtuse to sell his product. This is disgusting.

3

u/JAlfredJR 9d ago

'deliberately obtuse to sell his product' should be the slug line to the entire industry.

2

u/Embarrassed_Camel422 9d ago

I mean, most programmers google the crap out of everything, and yet after 30 years of that, interviews still evaluate you on things you could look up in 5 minutes or less.

So while I absolutely don’t have to memorize as much anymore to do my job functionally, lol I still have to, and have to drill LeetCode crap and practically memorize it in order to demonstrate that I know what I’m doing.

He needs to fix that in his own field first before he tries to tell people in fields he doesn't understand how to live.

7

u/LeafBoatCaptain 9d ago

Do these people think Doctors and lawyers are just walking databases and nothing more?

8

u/nehinah 9d ago

The funny thing about these guys is they never think of who is going to take accountability for these decisions that the AI is going to make in high stakes circumstances.

5

u/esther_lamonte 9d ago

These people are so disconnected. When they describe these abhorrent futures, it's actually insane they don't perceive them as cataclysmic. This glee about reducing human interaction and removing jobs, the only vehicle by which most humans can survive and even hope for something approaching personhood in society, is perverse. This world they always describe is nothing I want to see. It sounds awful.

4

u/Leather_Floor8725 9d ago

Irresponsible AI hype man. Basically scamming people to pump stocks. Hope no kids actually take this seriously.

6

u/Matt_Murphy_ 9d ago

hey google: no. you don't get to build our world. I'll keep my real doctor, thanks.

4

u/MirthMannor 9d ago

I work in legaltech and have a law degree. I’ve done machine learning in this context.

A big issue is that, for as much law as there is, the corpus isn’t big enough, and it is very thin in active and developing areas. Tax law changes every year.

Can AI churn out standard documents and boilerplate clauses? Yes, but no one was writing those from scratch anyway: photocopiers and ctrl-c / ctrl-v from a template are what lawyers have been doing for generations.

Can it digest opposing counsel’s reply brief? Sure.

But it can’t be depended on to explore the law and return 100% solid results, track obscure legal processes (“do I need to file a Form 310 in Western Queens?”), or to craft an argument that SCOTUS or any lower courts will affirm.

As for giving it a scalpel and chemo drugs… I’m not qualified to give an answer other than “you first, bro.”

6

u/throwawaythatfast 9d ago edited 9d ago

Apparently, those guys get a kick out of destroying professions (or at least claiming they will). WTF are they contributing to society, besides making already extremely rich billionaires even more rich - which is actually making society worse?

"The best thing to work on is more internal," Tarifi told BI. "Meditate. Socialize with your friends. Get to know yourself emotionally."

Cool, do you mean you're paying all my bills?! Great! I'd love to go meditate, know myself deeper and socialize!

4

u/DrBoots 9d ago

Anyone financially invested in a technology like this cannot be trusted to be honest about it. 

Their job is to say the world is full of nails and only they can sell you the hammer. 

5

u/Apprehensive-Fun4181 9d ago

Commerce has now hijacked everything and owns the failures that will result.  

3

u/nickybont 9d ago edited 9d ago

We've had cocky AI PhDs at our company come in saying they're gonna revolutionize our industry (i work in semiconductors, specifically testing). Two years later, they'd come up with an automation tool leveraging AI for test program automation (which involves high layer-count PCBs) which, after evaluation by our application team, turned out to be super inaccurate and outright useless. It was a total disaster and a massive waste of money. Moral of the story: we need much more collaboration between the CS and EE folks, and if anything, the application folks who leverage basic AI themselves (prompting, vibecoding) turn out to be massively more impactful than any involvement from the AI folks at all. I can't imagine it being any different for medicine or even law.

I've frequently had to push back heavily against AI folks (scientists as well as managers/VPs) in our engineering meetings who make naive claims about what they think it's capable of. Especially when they throw around a lot of vernacular that intimidates people and makes them question their own capability.

4

u/Gold_Satisfaction201 9d ago

Dumbest shit I've ever heard in my life.

3

u/Alternative_Horse_56 9d ago

If you think lawyers will let AI replace lawyers without getting regulations or laws passed to prevent it, then you've never met a lawyer.

5

u/Actual__Wizard 9d ago edited 9d ago

Yeah, that's their plan for us.

They're going to take the jobs, get the money, and then let everybody else die.

They don't care about anything, or anybody. They're just greed monsters.

In the Google sub the other day, somebody was passing out their old authoritarian hiring questionnaire, with a bunch of questions that are designed to steer people with fully functioning brains away. That's how they think: They don't want people that know what's going on in the world... They just want people that can make them some money...

It's dripping with this horrifically over-biased attitude of "You're not a unix person, are you? Because if you are, don't bother talking to us." It's legitimately offensive... They might as well have written on there: "In case you don't think we're fascist dickheads, here's the tell you're supposed to pick up on. Notice how we treat other people like total garbage? That's what you're signing up for."

Unix is the operating system that led to the internet boom... If they don't want to use the stuff other people created, that's fine, but it's clear to me that they're filtering away people who would have a reasonable attitude about it... They might as well have just written on there: "Hey, if you're really serious, we're not actually looking to hire you; we're just trying to find dummies we can underpay and manipulate."

It's pretty clear to me that they never considered what that questionnaire says to the applicant and to me it clearly says "don't waste your time."

So: nearly 30 years of experience in the space, and that's the "Douche Bag Hiring Funnel Maneuver." Great job, team greed monsters.

I hope people who want to build a real business are taking notes because that's how this stuff really works.

3

u/Mundane-Raspberry963 9d ago

"I have a PhD in AI," he added, "but I don't know how the latest microprocessor works."

No surprises there. Almost all of the people pursuing PhDs in machine learning whom I've interacted with are pretty unimpressive intellectually. That's also the case for the professors. It's almost entirely a grifter circus. Even the structure of the field is a grift (submit whatever bs you can to NeurIPS and spend a lot more time formatting the presentation than doing anything legitimate).

4

u/SnooCompliments8967 9d ago

Oh wow, so the guy who founded google's gen AI team is saying obvious lies for headlines... Their progress must be slowing down even more than we thought.

Also genuinely sociopathic, because I guarantee you some students are going to take this statement seriously and drop out or try to pivot out of their education to a new degree.

3

u/c3d10 9d ago

"In the current medical system, what you learn in medical school is so outdated and based on memorization," Tarifi told the website. Seeking advanced medical or law degrees is, to his thinking, tantamount to "throwing away" several years of one's life.

"I have a PhD in AI," he added, "but I don't know how the latest microprocessor works."

Someone please tell me how a chatbot is going to fix a broken bone or even just take a patient's temperature.

Being in medicine means that you have to know a lot of stuff without having to look it up, which, by definition, is memorization.

3

u/bastardoperator 9d ago

Another lie from the AI idiots. They can’t even get this shit to do customer service… it will never be a lawyer or a doctor.

3

u/PensiveinNJ 9d ago

It blows my mind how much harm these fucks are causing, it's not all financial or layoffs. They're literally forcing kids to change their entire career paths, abandon dreams, etc. based on fuck all.

I work at a restaurant as my stay-alive job, so some of the kids I work with are at college-starting age. Their anxieties about their futures are tragic.

I fucking hate these psychopaths at the tech companies. I have enough reason to hate them for my own reasons but watching what they're doing to people around me too is infuriating.

2

u/PatchyWhiskers 9d ago

AI can help lawyers prepare documents faster and help doctors diagnose but it can’t do surgery or stand up in court.

2

u/oSkillasKope707 9d ago

Pure unadulterated techbro hubris.

2

u/ManufacturedOlympus 9d ago

Damn, this really derailed their plans to have classrooms in the metaverse where you pay for this degree with nfts. 

2

u/BeardedYogi85 9d ago

I'm not going to a fucking clanker for medical or law advice

2

u/yrddog 9d ago

Hahahahhahahahahahahahahahahahahahahaha That will end well for everyone involved

2

u/Skyguy827 9d ago

"The best thing to work on is more internal," Tarifi told BI. "Meditate. Socialize with your friends. Get to know yourself emotionally."

Ok, but how does that help us afford to live?

AI worshippers live in a completely different reality

2

u/SeveralAd6447 9d ago edited 9d ago

I am not against the development or use of LLM technology, but I do really hate that we call everything machine learning "AI" like they're all the same.

There are some very specific uses for machine learning in medicine that are extremely effective, for example, but they are not at all related to large language models. Machine learning has been used to simulate molecular interactions, helping chemists discover novel molecules and develop newer, better pharmaceuticals.

So in that sense, AI is already massively advancing certain fields.

But it is not a large language model. It is not ChatGPT or Google Gemini. And it can only do one, very specific thing, not replace human beings.

LLMs have useful applications as agents in programming environments, where a developer can sort of order them around like a junior engineer to get more done quickly, but they are not really capable of functioning in that environment without human oversight. They very quickly start doing incredibly dumb shit, like attempting to fix bugs by changing the name of a variable, because in their training data, working code usually used a different var name, or something like that. So they are useful as tools, but not very much so for people who don't already know what they're doing.

Even in the entertainment industry, when generative AI has been used to cut production time, it still has to have professional artists touch it up after the fact, so it's not really replacing them so much as changing their job description and screwing them on pay.

I think this really is just wishful thinking from the business world. The reality is that less than 10% of the population of the Earth has ever interacted with anything related to LLM or genAI technology at this point. Stock market numbers and earnings quantify financial success in the short term, and that is really bolstering the confidence of these corporate types, but they're living in a class bubble. I highly doubt there will be a mass displacement of highly skilled professions like doctors, engineers or lawyers anytime soon.

2

u/Mundane-Raspberry963 9d ago

I wonder if the benefit to medicine promised by AI will actually outweigh the damage done to it by AI.

2

u/420catloveredm 9d ago

Such a ridiculous comment when it comes to medicine, since doctors don't understand the body as well as we like to pretend they do, and AI is basing its responses on what humans already know.

2

u/tired_fella 9d ago

Better Call SaulAI

2

u/Stu_Thom4s 8d ago

PR aimed at the big shareholders.

2

u/Drinker_of_Chai 8d ago

I put an ECG through ChatGPT at a training day the other day.

It diagnosed the first-degree heart block example ECG as atrial flutter.

Point is, finish your degrees and do not trust AI for medical advice.

2

u/No_Honeydew_179 8d ago edited 8d ago

Have you considered lying down and waiting for death?

Edited to add:

Oh my god, the shit he's saying:

"I have a PhD in AI," he added, "but I don't know how the latest microprocessor works."

Of course you wouldn't, you fucking numpty. You buffoon. You putrescent glass of raw milk. You badly-implemented in COBOL md5 hash function. That's microprocessor design. You're in AI. Those are completely different specializations, you absolute harlequinade stock character.

1

u/MegaManchego 9d ago

If I had to go to the ER, I don’t think I’d want AI involved. And certainly not without a buffer of 3 or so human layers between me and it.

1

u/Embarrassed_Camel422 9d ago

Only on the very superficial fronts of both of these fields.

Doctors still have a highly physical component to their work. We’ve got decades of work to do and regulation to hash out before these bots are going to be performing surgery on their own.

While a bot can interpret rules and facts more diligently and impartially than a human can for law, what it can’t do very well is take into account the spirit and social impact of those rules, which is where a lot of a lawyer or judge’s most difficult work comes from. GenAI chatbots can’t even give consistent answers to relatively simple questions yet without extensive prompting; I don’t see them being able to take on the more ‘human’ tasks in complex court cases anytime soon.

The good news, though, is that these AI bots should make absolutely wonderful assistive tools for these professions, and help these professionals do their jobs better.

I think the fear of job-taking for these fields currently being drummed up is more a product of an extremely heavy-handed hype and investment cycle, rather than a sincere evaluation of these models’ capabilities.

1

u/IWantToSayThisToo 9d ago

I cannot WAIT for AI to take over medicine. 

1

u/livinguse 9d ago

Ok....so fields with massive amounts of nuance, interpolation of data to draw often contradictory conclusions are gonna be done by a machine that can do....well, not that?

1

u/SouthernTrailsGoat 9d ago

AI will probably create more jobs in law than anywhere else.

1

u/AlShockley 9d ago

I'm honestly starting to think the whole narrative of AI doing everything in 3-5 years is about as likely as Trump winning the Nobel Peace Prize without awarding it to himself. It's all grifting by the corpos to keep the bubble inflated a little longer. Big dot-com-style bust likely incoming. People who say past performance is no indicator of future performance often forget history when greed is involved. Lots of chatter about 90-something percent of AI pilots failing at companies. GenAI is great for some things, but it's not going to replace everyone everywhere in 3-5 years. Very curious which big AI company goes tits up first.

1

u/peace4231 9d ago

"Did I graduate?" - Dumb Rick / Tall Morty

1

u/Malusorum 9d ago

Yeah, the two professions where the ability to extrapolate and improvise is essential, something "AI" is unable to do and never will be able to do unless we figure out how to make a sapient AI, WiLl CeRtAiNlY bE mAdE oBsOlEtE bY aI!!!

Such a statement is too ignorant to even be funny.

1

u/zeek979 9d ago

All the reason to go into those fields

1

u/AnomicAge 9d ago

As I just discovered that the frontier models can’t even edit an essay, I think we’re OK for now

1

u/throwaway3113151 9d ago

Sounds like he works for Google’s marketing department

1

u/Living-Computer6336 9d ago

Ah yes, the next time I get into a car accident, just wheel me over to a laptop so Gemini can repair my punctured lung and broken bones. That'll work for sure! FUTURE!

1

u/Ymirs-Bones 9d ago

“I swear my payroll is justified”

1

u/Goldarr85 9d ago

Hmm…Is he saying that the AI will treat patients or stand in a court room too?

1

u/loomfy 9d ago

I just really think you can believe in the promise of AI and think it'll be transformative for humanity without saying embarrassing bullshit like this.

1

u/generalden 9d ago

"Grok how do I sue XAI? Oh, I can't? Thank you"

1

u/BrownEyesGreenHair 8d ago

The best AI can do is serve as a quick reference for obscure concepts/methods. It can’t solve problems.

1

u/ososalsosal 8d ago

Drinkin the Kool aid. Spiked with copium.

1

u/zepwnage 8d ago

It's amazing how a chatbot will just perform surgeries with a simple prompt

1

u/Ecstatic_Way3734 8d ago

judges and surgeons are lolzing at this technocrat

1

u/Purpgran 8d ago

Law more than medicine. Common law is so heavily based on written precedent. Medicine is the opposite where we’re just scratching the surface of what’s possible.

1

u/[deleted] 8d ago

So much hot air in this world

1

u/trode_mutagene 8d ago

A doctor says: this dude is an idiot; don't bother going into whatever his overpaid job is

1

u/banedlol 8d ago

He would say that

1

u/LobsterAgile 8d ago

Took Gemini a whole year to be able to set a 5-minute timer.

I think doctors are safe.

1

u/Honest_Ad_2157 7d ago

What he means is that the crash will undermine society to the point that rule of law will be gone and medicine will go back to leeches. So, yeah, but at least you won't have to pay off those 7-figure loans!

1

u/No-Veterinarian8627 6d ago

FYI: there are already more than enough algorithms and projects that have tried to make lawyers obsolete. They're decades old and they worked, finding mistakes, errors, and stuff in legal texts.

Do you know what lawyers say? We will argue that away and stuff.

For medicine, you've also had those data-recognition things for a decade or more, for moles, x-rays, and stuff. It's the same shit: you need a person to verify whether it's really something.

Sidenote: I could've sworn that a student at my old university wrote a master's thesis about this (medical informatics) and built a barebones GUI for finding melanoma. It's nothing new, just another way to look for something.

1

u/Moonnnz 6d ago

Not high ranking employers. Opinion irrelevant.

1

u/Plowzone 5d ago

Honestly, I wouldn't want AI assessing the materials of either of those fields. It's untrustworthy as hell.

0

u/nikoriz 9d ago

We need to bully these tech nerds again

-1

u/SnookeredWorld 7d ago

You are wrong. It will be done in stages. The vast majority of doctors are NOT surgeons because they don't have the skill (touch). General practitioners are a dime a dozen and just read from a script so they can be replaced first.

You think human doctors are better? What human has ever scored 300 (100%) on a medical exam? NONE. ZERO. ZIP. NADA. But an AI system just did:

https://boriquagato.substack.com/p/medicine-goes-ai