r/singularity • u/4reddityo • 8d ago
AI Founder of Google's Generative AI Team Says Don't Even Bother Getting a Law or Medical Degree, Because AI's Going to Destroy Both Those Careers Before You Can Even Graduate
https://futurism.com/former-google-ai-exec-law-medicine
"Either get into something niche like AI for biology... or just don't get into anything at all."
686
u/Cryptizard 8d ago
Law degree, maybe I get his argument because the field is already pretty saturated so any pressure from AI is going to quickly eat up entry-level opportunities, but we have a severe shortage of doctors right now. The regulatory hurdles alone will stop AI from replacing human doctors for quite some time, and I think it is borderline dangerous to tell people not to become doctors given the ballooning population of elderly people.
17
u/gay_manta_ray 8d ago
i don't think the medical field is as safe as you suggest. surgeons aside, we have a shortage of doctors who can see patients, diagnose them, and form treatment plans. AI can already do all of those things, while the rest of the nursing/healthcare staff does everything else.
doctors don't do occupational therapy or physical therapy, they don't do transfers (physical ones), they don't help patients go to the bathroom or wipe asses (very important in hospitals), they don't draw blood, they don't run the hospital's lab, etc. a single doctor could probably do around four times the work they do now by overseeing diagnoses and treatment plans laid out by AI. the real bottleneck seems to be all of the other staff needed to implement those treatment plans.
u/Fantasy-512 8d ago
This is the right answer. Also true for pharmacists btw. You don't need a qualified human to fill bottles or to cross-check interactions & side effects.
2
u/Beneficial_Jacket544 4d ago
People have been saying this about pharmacists for at least a decade. Various online forums (SDN and Reddit) have been all doom and gloom about the field ever since I was an undergrad considering it (2015-2016, though I decided to go down the medical route). But despite the technology allegedly being there for so long, we haven't seen pharmacist jobs replaced yet. There is legal red tape requiring that a pharmacist oversee every prescription. Whether the powerful people in government (likely with inducements from private equity) will unwind these regulatory hurdles is unknown for now. The hope for a lot of us in the medical industry is that our job roles will evolve to include an element untouched by AI.
u/misersoze 8d ago
I think people don’t understand that giving lawyers more efficient ways to file documents doesn’t actually decrease the demand for legal work. To make it easier to understand, imagine Trump could file 50 lawsuits at the cost of filing 1. Do you think he stays at filing 1 lawsuit or increases his demand for litigation?
48
u/carnoworky 8d ago
Is cost really the limiting factor for him though? I'd expect the other side of that coin, the much cheaper defense, to make frivolous litigation have less value. They tend to go after people who can't afford good legal representation and use threats of legal action to force settlements or capitulation without going through the actual legal process.
u/DM_me_goth_tiddies 8d ago
Yes. Imagine you buy a product and it doesn’t live up to expectations. Currently you might send an email and try to get hold of customer service. Why bother? In ~2 years AI will be able to handle that email chain for you, and if the result isn’t satisfactory it can initiate a claim in small claims court for you.
How many lawsuits would you file a year if you could do it for no charge and zero hassle?
27
u/misersoze 8d ago
The other thing people don’t understand is that some people and companies are extremely litigious. They will increase their lawsuits if costs go down. That means more people dealing with more hassles from more lawsuits, not fewer. Thus making lawyers’ work easier may increase demand for attorneys.
u/gay_manta_ray 8d ago
courts only have so much time, so the backlog would be immense. they'd either start penalizing frivolous lawsuits or implement their own AI to decide cases, both of which would lead to a lot of changes in the way lawsuits are filed.
4
u/eatingdonuts 7d ago
Which leads us ultimately to just having a single networked AI that resolves all legal and other conflicts. Might as well cut out all the middlemen.
2
u/Strazdas1 Robot in disguise 3d ago
i think courts will be among the last to go because of human-human biases.
u/SmacksKiller 8d ago
Except that your cheap or free AI will be facing a corpo AI that's multiple generations ahead and trained specifically to defeat the AI you have access to.
4
u/DM_me_goth_tiddies 8d ago
That’s not how it looks atm. All companies and individuals are using the same AIs.
u/doublediggler 8d ago
It will lead to court case inflation. Eventually we will have to have AI attorneys on both sides, AI judges, and even AI juries. Think about all the Karens who scream about suing people for any minor negative interaction they have. Right now it’s almost always a bluff. 10 years from now, these people will be filing multiple suits a day.
u/ohHesRightAgain 8d ago
I'm pretty clueless about this topic, but I would assume the court bureaucracy wouldn't be much less of a limiting factor even if they get all the AI power
13
u/Federal-Guess7420 8d ago
Yes, there are more than enough lawyers currently. The limiting factor is the overloaded case dockets that the federal judiciary has.
You could add 20 times more lawyers, and if you don't have the number of judges, nothing much would change.
13
u/Delanorix 8d ago
This actually isn't correct. Large cities may have enough lawyers but everywhere else doesn't.
There are huge "judicial gaps," especially in rural areas.
It's basically like doctors: we have plenty of plastic surgeons in Miami but need basic GPs everywhere else.
17
u/Barbiegrrrrrl 8d ago
Don't count out the profit motive. When big healthcare makes the push, it will happen quickly. The AMA is powerful, but not as powerful as Finance.
13
u/Larrynative20 8d ago
The AMA is not powerful, as evidenced by physicians making less for visits and procedures in actual dollars than in 2000, before you even account for inflation.
u/hennell 8d ago
Big healthcare already made the push. When their AI agents perfect denying everyone healthcare, they won't need any doctors.
u/User1539 8d ago
This is my thinking too ... there's no way you're going to have 100% robot surgery before you have 100% robot driving, and we thought we'd have 100% driverless cars in every lot 10yrs ago.
There's a huge difference between what machines CAN do, and what we're okay just letting machines do!
u/ratehikeiscomingsoon 8d ago
I mean, the way tech leaders view medicine is kinda like how Steve Jobs viewed medicine lol.
18
u/scrubba777 8d ago
I think a lot of people here don’t understand what people with law degrees end up doing. A very large proportion don’t simply end up in law firms, or as judges, or arguing in courts over commercial disputes. People with law degrees learn the essence of how the law works in all manner of fields: how to protect the code they just wrote, how to help the homeless fill out a form, how government structures work and link together, where the legal gaps are and how to fix them, or how to best abuse them. In other words, knowledge of law is applied in all facets of our lives, for profit or to help others; it is the ultimate strategic glue that helps smart people navigate whatever they need to. For now it remains a very powerful thing to learn, even for AI enthusiasts.
u/Gears6 8d ago
So I think the point is how it's affecting other fields, but more so the medical field. That is, analysis and productivity.
AI can speed up a lot of the things that used to take much longer, like having to consult second opinions and so on. So it's not a replacement for a doctor's judgment, but rather a supplement and aid to it.
Like software engineering: the code generated by AI is nowhere near the point where we can just hand it a spec and expect great results. It still requires an engineer to review, adjust and so on. Same with doctors.
u/halafenukates 8d ago
so people should study all those years to become a doctor for the sake of a shortage that exists right now, and be doctors for a few years until ai kicks them out of their career? point being, what's the point of doing that if u won't make a lifelong career out of it. ai will surely take over there, if not in 5 years then in 10 or 15
8
u/Cryptizard 8d ago
What’s the point of doing anything by that argument? You have to live for those 15 years, and the future is not known.
2
u/Federal-Guess7420 8d ago
You are talking about taking on more than half a million dollars in debt to do something that AI is arguably already better at in most fields. That is a terrible piece of advice to just follow the vibes on.
4
u/Cryptizard 8d ago
AI is not better than doctors in most fields. Imaging and diagnostics and that’s it.
6
u/Excellent_Shirt9707 7d ago
In law, AI would mostly eat up paralegal work. Actual firms are being sanctioned for using AI slop with hallucinations. As long as a human is still reviewing everything and not just submitting it as-is, AI could be useful in most industries.
u/Tolopono 8d ago edited 8d ago
AI can do diagnoses better than doctors
https://www.nature.com/articles/s41746-024-01328-w
This meta-analysis evaluates the impact of human-AI collaboration on image interpretation workload. Four databases were searched for studies comparing reading time or quantity for image-based disease detection before and after AI integration. The Quality Assessment of Studies of Diagnostic Accuracy was modified to assess risk of bias. Workload reduction and relative diagnostic performance were pooled using random-effects model. Thirty-six studies were included. AI concurrent assistance reduced reading time by 27.20% (95% confidence interval, 18.22%–36.18%). The reading quantity decreased by 44.47% (40.68%–48.26%) and 61.72% (47.92%–75.52%) when AI served as the second reader and pre-screening, respectively. Overall relative sensitivity and specificity are 1.12 (1.09, 1.14) and 1.00 (1.00, 1.01), respectively. Despite these promising results, caution is warranted due to significant heterogeneity and uneven study quality.
A.I. Chatbots Defeated Doctors at Diagnosing Illness. "A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot.": https://archive.is/xO4Sn
“The median diagnostic accuracy for the docs using ChatGPT Plus was 76.3%, while the results for the physicians using conventional approaches was 73.7%. The ChatGPT group members reached their diagnoses slightly more quickly overall -- 519 seconds compared with 565 seconds." https://www.sciencedaily.com/releases/2024/11/241113123419.htm
- This study was done in October of 2024, and at that time the only reasoning models available were o1-mini and o1-preview. I'm not sure what model they used for the study, as they only say ChatGPT Plus, but it's safe to assume that had they done the same study today with the o3 model, we would see an even larger improvement in those metrics.
12
u/Cryptizard 8d ago
Good thing doctors do a lot more than diagnose things.
5
u/Tolopono 8d ago
AI can also do surgery and be more empathetic https://www.reddit.com/r/singularity/comments/1mx86e1/comment/na468ug/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
u/broknbottle 8d ago
Good luck with diagnosing emerging threats, e.g. coronavirus in October-November 2019. AI tends to be good at already-determined and well-documented stuff.
When it comes to new or poorly documented stuff, its assistance and abilities degrade very fast since it’s not actually critically thinking.
u/Tolopono 8d ago
As opposed to humans, who are great at identifying and treating new viruses they've never seen before.
8
u/CacheConqueror 8d ago
There is private health care in America. Do you know how expensive insurance is, and how many options are available? Ordinary medicines that seem like they should be cheap are sold at much higher prices here.
For people who can't afford insurance, or who have very poor insurance, AI will be a good lifesaver. Despite appearances, AI can sometimes give accurate advice. Besides, not every doctor knows everything, and any doctor can make a mistake. So as an aid and assistant it will certainly be a good one.
u/InitialCold7669 7d ago
I think you overestimate regulatory hurdles whenever there's a big pile of money on the table. Leaving money on the table isn't good for business, and what's good for business is good for America, according to most politicians. I have a feeling that as soon as the AI bubble pops, we are going to see the people who were supposed to control AI from the beginning getting control of it: basically rich people with connections to the intelligence agencies. All the other AI stuff will shut down and all the companies will basically use this one service. This hypothetical AI service will also probably just be a government proxy that allows them to spy on all lower-level employees at their jobs.
2
u/ChodeCookies 7d ago
Every day I read stories about our legal system failing us or being completely backed up…
u/humanitarian0531 8d ago
For those arguing that doctors will be around much longer.
I heard on a podcast about a Stanford study last December. Here is the summary.
AI performed better in diagnostics than doctors
Here is the kicker
AI performed better ALONE than a doctor using AI. Apparently human bias caused lower scores.
https://jamanetwork.com/journals/jama/article-abstract/2828679
And the age old “humans will always want humans for the shared connection and empathy”?
Another study last year found, in a blind test, that AI had better (78% vs 22%) and more empathetic (45% vs 4.6%) answers than human doctors.
The writing is on the wall my friends… to your last point. The shortage of doctors is exactly the reason AI will be implemented all the faster.
5
u/Last-Sound-9599 8d ago
This is so stupid. These tests of diagnosis are written vignettes designed to be interesting puzzles for doctors. They contain all the information necessary to reach a diagnosis, and it’s guaranteed that there is a diagnosis. In real life patients present incomplete, contradictory information, leave things out, misunderstand questions, and often have nothing much wrong with them. Nothing at all can be concluded from these studies. Radiology and pathology are a bit different because the raw info can be fed into the AI. But in reality radiology is not always a diagnosis machine, and often gives unclear results that need to be interpreted in light of the overall clinical picture. That’s why the reports recommend clinical correlation! When tech idiots do medicine you get Theranos. This is all bullshit.
u/Cryptizard 8d ago
Doctors do a lot more than diagnose.
7
u/humanitarian0531 7d ago
As someone who works in an ED and is a med student, I'm serious when I ask “what?”
308
u/Goofball-John-McGee 8d ago
Man developing new technology says new technology will change the world.
More at 9.
96
u/-LoboMau 8d ago
These idiots don't understand that if people listen to them and they're wrong, lives will be ruined. Imagine having the opportunity to go to medical school and have a great career, but because this imbecile put fear in you, you decided not to, and now you ain't got shit to do other than jobs much worse than the one you could have had if you hadn't listened to this guy.
AI gurus aren't gonna give you your life back if you get fucked by following their corrupt advice.
It's almost like they're trying to create a shortage so they can fill it.
32
u/KingRamesesII 8d ago
Better to go to Medical School than learn to code at this point. Way safer profession in the short term. ChatGPT can’t write a prescription.
u/-LoboMau 8d ago
There are people who gave up on coding right after ChatGPT. Didn't get a degree. Those people thought that by now AI would have taken most programmers' jobs. Those people could now be employed and getting a solid salary.
9
u/TonyBlairsDildo 8d ago
These people could now be employed and getting a solid salary.
Unlikely. The ass has completely fallen out of graduate/junior job positions.
3
u/Harvard_Med_USMLE267 7d ago
Entry level programming jobs have been affected, and that trend is likely to continue. Learning to be a code monkey now IS a high-risk decision.
3
u/FireNexus 8d ago
By a year from now when the big tech companies have finally stopped pretending they will replace all their engineers with AI because the bubble has already burst, at least.
u/KingRamesesII 8d ago
I said “better” I never said don’t get a degree. Doing something is going to be better than nothing, especially if you have a scholarship. Doing nothing will just make you depressed.
But I know a ton of junior software engineers that can’t find work right now, and unemployment for recent college grads is skyrocketing.
If your intent is to be employed as a junior software engineer, and you started college in August 2023, when you graduate in May 2027 you will NOT have a job. I’m sorry.
If you graduated in December 2023 or May 2024, then you were probably okay-ish, but had a harder time finding work due to high interest rates slowing hiring at tech companies.
At this point, coding is useless to junior level unless your goal is to start a business and leverage AI to 10x or 100x your output.
By next year, though, you’re straight up not gonna get hired as an entry level software engineer. But most people aren’t entrepreneurs and it’s not a realistic path to expect everyone who gets a CS or SE degree to take.
I remember a man in the 90s who explained the end goal of capitalism is 100% unemployment, as it gives the owners of capital the highest leverage.
We’re speed-running into that now. Buckle up. Money’s gonna be worthless in a few years, better hope you have a roof over your head before that happens.
u/garden_speech AGI some time between 2025 and 2100 8d ago
Yeah it’s always important to remember these people don’t suffer the consequences if their advice is wrong.
2
u/CubeFlipper 8d ago
It's a gamble either way, there are no guarantees in life. If they're right and people don't listen they could waste a lot of time and money that could have been spent elsewhere. Argument goes both ways.
2
u/gay_manta_ray 8d ago
nah i think there will still be a place for doctors overseeing the decisions of AIs for quite a long time. we are going to need doctors to be liable for those diagnoses and treatment plans for a while yet.
6
u/Agouramemnon 8d ago
He's not saying "don't go to medical school." The quote was that he would "caution" folks against law and medicine because currently the curricula are overindexed on memorization, which is an inefficient use of time. Very reasonable argument. Lots of ChatGPT-type interpretations in this thread.
u/Harvard_Med_USMLE267 7d ago
That’s a much more nuanced idea.
The job of being a doctor is not going away, at least for now.
But med schools haven’t even started to think about how AI changes WHAT we should be focusing on. SOTA AI is as good as an average doctor at clinical reasoning; soon enough it will be clearly better. So what does that mean for the cognitive side of medicine? It’s a fascinating question.
Btw, memorization shouldn’t be the issue, that’s not what AI changes. It’s reasoning that is now under threat.
4
u/yourliege 8d ago
It’s almost like they’re trying to create a shortage so they can fill it
Absolutely
u/KarmaKollectiv 8d ago
I get the point you’re trying to make, but there are tons of people who dropped out of med school or left the field only to become successful singers, athletes, writers, actors, film directors, etc and impact the world in other material ways, not to mention the countless physicians and nurses who pivoted into unrelated fields or entrepreneurship. I wouldn’t say this is ruining lives…
38
u/Princess_Actual ▪️The Eyes of the Basilisk 8d ago
They are basically saying: don't get educated, because they will take your jobs with AI and offer no alternative.
47
u/Austin1975 8d ago
Why bother having humans around anymore?
u/Auriga33 8d ago
That’s what AI will ask itself eventually.
u/JustPassinPackets 8d ago
We have utility.
8.142 billion people outputting 100 watts each linked together would generate 814,200,000,000 watts. Converted to amperage that's 67,850,000,000 amps at 12 volts.
This would be roughly the output of 800 typical 1-gigawatt nuclear reactors, enough to power a few hundred million average homes.
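The arithmetic above is easy to sanity-check; a minimal sketch, assuming 100 W of metabolic output per person, ~1 GW per typical reactor, and ~1.2 kW average draw per home (all round-number assumptions, not figures from the thread):

```python
# Back-of-envelope check of humanity's total power output.
population = 8.142e9      # people
watts_each = 100.0        # W per person, rough metabolic output

total_watts = population * watts_each    # joint output in watts
amps_at_12v = total_watts / 12.0         # I = P / V at 12 V

reactor_watts = 1.0e9     # ~1 GW electrical output per typical reactor (assumption)
home_watts = 1.2e3        # ~1.2 kW average household draw (assumption)

print(f"total:    {total_watts:.4g} W")         # 8.142e+11 W
print(f"current:  {amps_at_12v:.4g} A @ 12 V")  # 6.785e+10 A
print(f"reactors: {total_watts / reactor_watts:.0f}")
print(f"homes:    {total_watts / home_watts / 1e6:.0f} million")
```

Note that the reactor and home counts scale directly with whatever per-reactor output and per-home draw you assume.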
17
u/Lazy-Canary7398 7d ago
The original concept was that the Matrix was powered by the efficient neural processing of human brains, not by inefficient thermal energy capture.
61
u/seekfitness 8d ago
Damn these AI leaders really are huffing their own farts now. Of course AI is going to radically change the world, but the idea that it’s going to replace doctors anytime soon is laughable. Of course doctors will be using more and more AI, but hospitals are pretty risk averse and slow to adapt, so it’ll be a minute.
They really want us all to just skip college and be braindead consumers in a world where they control not just the means of production but also all intelligence.
9
u/Suspicious_Narwhal 7d ago edited 7d ago
Anyone who believes that AI will replace doctors in the near future is a complete moron.
u/Popular_Try_5075 7d ago
well two things can be true
they can be a moron and ALSO be the current Secretary of Health and Human Services
87
u/fpPolar 8d ago
I get it for something like radiology, but I would expect doctors to generally be a safer profession given the regulatory protections, hands-on care, and direct patient interaction.
20
u/cc_apt107 8d ago
Yeah, we’re a ways away from AI replacing a solid majority of medical subspecialties, if for no other reasons than the legally protected status doctors have and the manual dexterity required.
Is it possible? Sure. But if those positions are gone, everything else will be too and it’s not realistic to recommend people just stop trying to get any career started.
8
u/garden_speech AGI some time between 2025 and 2100 8d ago
I honestly don’t buy the regulation argument. First of all, regulations are basically bought and paid for at this point by whoever has the money to do it. Large companies with frontier models that can replace a general practitioner? They’ll get the regulations relaxed given how much money they could make off selling that service. But secondly even if the regulations don’t fall — if the AI tool is doing all the work and the only thing mandating a human is regulation, it seems that would depress salaries to begin with because the skill necessary to be a doctor becomes much lower.
I don’t think medical school is a bad idea right now but I don’t buy that it’s because regulation will protect you
u/OkExcitement5444 7d ago
Looking to enter medical school, and this makes me so nervous. Will I be able to pay back my loans by the time I finish residency in 8 years? Will a proto-UBI cover the 400k debt I took out to try and help people in the current doctor shortage? Seems dangerous to tell a generation of med students to give up. What if the predictions are wrong and we end up with a missing generation of doctors?
u/Tolopono 8d ago
AI can do precise surgery too.
In a historic moment for the dental profession, an AI-controlled autonomous robot has performed an entire procedure on a human patient for the first time, about eight times faster than a human dentist could do it: https://newatlas.com/health-wellbeing/robot-dentist-world-first/
Robot-operated autonomous surgery: https://www.nytimes.com/2021/04/30/technology/robot-surgery-surgeon.html
u/Tolopono 8d ago
Ironically, llms are better at patient interaction
People find AI more compassionate than mental health experts, study finds: https://www.livescience.com/technology/artificial-intelligence/people-find-ai-more-compassionate-than-mental-health-experts-study-finds-what-could-this-mean-for-future-counseling
More human than human
They can also do precise surgery too
In a historic moment for the dental profession, an AI-controlled autonomous robot has performed an entire procedure on a human patient for the first time, about eight times faster than a human dentist could do it: https://newatlas.com/health-wellbeing/robot-dentist-world-first/
Robot operated autonomous surgery: https://www.nytimes.com/2021/04/30/technology/robot-surgery-surgeon.html
30
u/emw9292 8d ago
AI has infinitely more implied empathy and conversational skills than most doctors do or choose to utilize.
11
u/ggone20 8d ago
True. They’ve also already proven many times over to be better at almost every task than human doctors.
It’ll take a minute for regulation and legislation to catch up for sure… but betting it won’t happen is probably a fool’s game.
15
u/Cryptizard 8d ago
By almost every task you mean diagnosis from medical records and imaging, end of list. Doctors do a lot more than that.
u/EndTimer 8d ago
Considering how much that other guy is missing with regard to physical and visual inspection, care planning and coordination, I'd agree.
But I will add patient education to the list of things they can ostensibly do better, with infinite time, patience, and a presentation of empathy for the patient.
u/ThenExtension9196 8d ago
Yep. Got an assessment from a doctor via Zoom and it was the worst experience. The doctor showed up late, talked down to me, and then left the call. Zero empathy, and I mean zero. Basically just seemed like someone who really didn’t even want to be on the Zoom to begin with. That profession is toast.
u/DrRob 7d ago
AI has been part of medical imaging since the '90s. Even specialized ML/DL models are a looooong way from being even half-decent at medical image interpretation. Mainly they help with highlighting findings, through processes like segmentation within an organ or using edge detection to distinguish organs. LLMs are wildly hallucinatory, which is too bad, because I see heaps of people on Reddit piling on praise like "I loaded my scan up to GPT and you won't believe what it found!" Yeah, I'd believe it. I test these things out constantly, and they really suck. It's a real drag, because I'd like to be able to at least do some minimal degree of medical imaging AI research to test out the limits. At present, it's impossible to even reliably get off the launchpad.
21
u/Substantial_Yam7305 8d ago
Telling people not to get medical degrees is Idiocracy in the making.
1
u/El_Chuuupacabra 8d ago
America is already Idiocracy. AI won't make it worse; people do that very well.
7
u/socratifyai 8d ago
Important to understand that inventing a new technology doesn't mean you fully understand the societal impacts of that technology.
Example: Geoff Hinton predicted Radiology as a profession was over about a decade ago.
2
u/shounyou 8d ago
Or that you fully understand the complexity of the jobs that “will be replaced”. Clearly Hinton thought the complexity of radiology was on par with labeling an image as dog vs cat…
2
u/socratifyai 7d ago
Yes. I think most of the AI folks deeply underestimate the illegible parts of many jobs. Even the most famous researchers and CEOs.
Even Dario talked of AI writing 90% of code in 6 months ... almost exactly 6 months ago.
6
u/jmondejar_ 8d ago
Boldness like this always makes me upset, but it also makes me think AI hype is outpacing reality a bit here. Sure, AI will change how law and medicine work, automate some tasks, and maybe replace certain entry-level roles, but entire careers disappearing before graduation feels exaggerated. Humans still bring judgment, ethics, and nuanced decision-making that AI can’t fully replicate yet. It’s more about adapting skills than throwing degrees away.
10
u/Talentagentfriend 8d ago
AI should be a tool, not a replacement for humanity. Medical teams and lawyers should be using AI; it shouldn't be governing how we function and work. It sounds like such a stupid idea for any governing body to think that this is the future.
14
u/tiger_ace 8d ago
there are a lot of pessimistic takes but people seem to forget that technology often leads to increased accessibility
most people aren't able to get the level of healthcare they should be able to get exactly because medicine requires so much education and very few people can therefore become doctors, creating a massive supply constraint
in the legacy healthcare model you often can't even just call or talk to a doctor when you have an issue, you need to book time (days, weeks, or months) and even having a chat will result in a $150 charge with insurance even though it doesn't amount to any actual treatment
over time these chats should cost nothing and you should only pay for actual treatment itself when it's a confirmed diagnosis and the treatment is vetted as well
2
u/Jokong 8d ago
I agree, there is room to improve and redefine the role doctors play in our healthcare system. What if we had a degree that was not as extensive as a doctor's but was custom-made for working alongside an AI doctor? Could a nurse with an AI doctor take on more responsibility?
I think AI is just used as a tool (in medicine at least) and never will replace anyone that isn't managerial. It will be used to expand access. I bet we see inexpensive AI insurance programs and clinics pop up and maybe even AI doctors where you can get a physical at home or in a private room from an AI doctor.
10
u/InterestingWin3627 8d ago
Yeah, just like that report from MIT the other day that has since disappeared, which showed that 90% of AI installations fail, and that the only ones making a profit are the AI companies.
AI is currently the most overhyped thing out there. It has potential, but right now all the LLM models are basic.
17
u/AbbreviationsHot4320 ▪️AGI - Q4 2026, ASI - 2027 8d ago
Regarding that MIT report
u/dachloe 8d ago
Absolutely, 200% correct. As a freelance management consultant I'm nearly continuously asked to "get some AI for my company." Clueless executives and board members have to be spoon-fed hours of video, white papers, and case studies on AI implementations.
We then go through their business and find the real mistakes and bad habits. Audits of policies and procedures usually solve most of their problems.
So far we've only found a handful of businesses that could really use AI in any productive capacity. And in those cases it's not the hot & sexy generative AI you see touted by post-modern robber barons.
15
u/PwanaZana ▪️AGI 2077 8d ago
Yes, LLMs right now are hilariously bad if they are not guided by humans. They'll make wild mistakes at all times.
1
u/erasedhead 8d ago
For fun I had ChatGPT analyze a story. It kept telling me all this hooey that was clearly scraped from reviews of the author's other books. It told me the story was elliptical and starts with a digression about Borges before the character is introduced, but the part about Borges isn't until page 6 or 8, and the previous text was all about the main character's life. It was clearly scraping reviews and presenting them as analysis. It did say a few minorly interesting things, but overall it was worthless for this.
I have done some dumb-guy coding with it, and at that it excels. It is fantastic at any problem that requires procedure to understand. Otherwise, I have never been impressed with its deep research ability, except that it does find good sources (and often cites them wrongly).
2
u/Smile_Clown 8d ago
has disappeared
lol... it did not disappear. Just because it's not news 24/7 for a decade does not mean it "disappeared".
7
u/freexe 8d ago
So right now we are at 10% replacement after less than 5 years. What's that number going to look like in 10 years?
6
u/mlYuna 8d ago
"90% of AI installations fail" doesn't mean 10% replacement. It means 10% of AI installations succeed, and that percentage has nothing to do with how much of the workforce AI can automate.
3
u/astrobuck9 8d ago
Plus, you also have to consider a lot of companies are trying to install some jankass, proprietary AI clone of ChatGPT or Gemini and for some reason their store brand HAL 9000 sucks balls.
7
u/Haplo_dk 8d ago
Ten years later he dies from a medical emergency that could've been prevented, if it weren't for a shortage of Doctors and the enshitification of MedicalAI.
3
u/visarga 8d ago
the enshitification of MedicalAI
They generally get better, not worse, over time.
5
u/wachusett-guy 7d ago
OK, I am a fan of Gen AI and use it daily.
But telling people not to study medicine is just hubris wrapped in a breach of the social contract. This is beyond dangerous to say.
3
u/Ambiwlans 8d ago edited 8d ago
Those jobs will take a long time to replace. It doesn't matter if AI does them way better. They are fields laden with legislative hurdles. I mean, some areas of some laws specify using faxes still.... an AI that knows everything isn't relevant when the challenges are structural and regulatory.
Radiology has been more effectively done by AI for over a decade. And AI has replaced 0 radiologists. Why? Because legally an AI can't do the job and politically it would be hard to change so instead people continue to be misdiagnosed by humans and die from it....
Trains have been automated for over 50 years now. Most trains have a conductor still. Train conductors literally do NOTHING on most trains, they just sit there, the train drives itself. Their existence is usually due to the efforts of unions. Same with like 75% of port workers. They don't need to exist, and don't in newly built ports. But established ports have strong and violent unions so they can't be fired.
18
u/Maxcorps2012 8d ago
This just in: founder of Google's generative AI team doesn't know what a law degree or medical degree is used for. Do you think the computer is going to argue your innocence? Do you think the judge gives a shit about what your laptop thinks? How is your computer going to set a cast, or comfort a child, or help someone process their grief of losing someone who didn't pull through surgery? Is the AI going to be responsible when the treatment fails and the patient dies? Get out of here with this shit.
9
u/blueheaven84 8d ago
How is your computer going to set a cast, - robot will be able to
or comfort a child - say what you will about 4o that shit was already comforting
or help someone process their grief of losing someone that didn't pull through surgery? - do doctors really do that??
Is the ai going to be responsible when the treatment fails and the patient dies? - when the ai surgeon has 10X the survival rate of the human doctor it won't matter. people will sign away liability.
2
u/4reddityo 8d ago
I think you make valid points. I think there will be firms which specialize in law but use AI for some things, while still having actual people represent real actual people. So fewer lawyers perhaps, but more effective lawyers. Also I would expect all areas of justice will be impacted, from evidence collection, AI expert witnesses, AI eyewitnesses (cameras and robotics), and eventually AI as primary parties.
2
u/Maxcorps2012 8d ago
This is my point. It's a tool, not a replacement. And for the other guy, AI is not a robot. And I've lived in a hospital. AI is not going to replace most of the people there. It will help with diagnostics. That's it.
4
u/_mdz 8d ago
Everyone here is missing the point.
The AMA's lobbyist group has way too much money and influence in this country. No way they are allowing doctors to be replaced by AI even if it were possible and made sense. Why do you think we pay hospitals $400 for a doctor to talk to us on the phone for 15 minutes?
5
u/Rustrans 8d ago
Another delusional idiot CEO. We are years and years away from robotics being so advanced that it could replace doctors completely. AI models are quite advanced, no doubt, but robotics is still in its infancy - I mean mass-market advanced robotics that every clinic can buy to perform anything from shoving an endoscope up your ass to open-heart surgery.
19
u/sitdowndisco 8d ago
What a fucking moron. Plenty of manual tasks that doctors do that simply won’t be done by a robot anytime soon. Or even in the next 10 years.
Can’t imagine a robot doing a heart & lung transplant autonomously, no guidance, no direction, no human to confirm diagnosis, risk profile… just fantasy at this point.
The AI world is full of morons who love to dream.
3
u/AGI2028maybe 8d ago
The biggest problem with the AI industry in this regard is that it’s so insular.
It’s almost entirely made up of upper class, 20-40 year old white/Asian men from large cities who have never had a job that wasn’t engineering/AI research.
None of them have ever done legal work, or medical work, or even general office work. They sure as hell have never done blue collar work. Most of them have probably never even met a blue collar worker before.
And, as a result, they are shockingly ignorant about this sort of work and have really childish ideas of what it entails and so they think “Get a robot that can use a plunger and we can replace plumbers!”
AI folks should be mandated to shadow people in a given industry for at least a week before they comment on replacing their jobs. That would completely change their tune.
3
u/DevilsTrigonometry 8d ago
who have never had a job that wasn’t engineering/AI research.
Specifically software engineering. They've never worked in manufacturing, or in a hardware lab, or with any tool requiring more skill than a keyboard. They've never had to design a part in 3d around material limitations and manufacturing tolerances and wear and corrosion, and they've sure as hell never needed to diagnose and troubleshoot a mechanical or electrical problem in a complex system by eye/ear/feel.
To their credit, they usually don't explicitly say they're coming for other engineering roles, but they imply it heavily, both in their hype material ('we're going to automate almost all jobs by 2050!') and in their fearmongering ('superintelligent AI will take over and kill/enslave all humans [presumably using weapons/robots it designs and produces autonomously]').
4
u/electric_onanist 8d ago
I'm a psychiatrist, and I've been interested in AI since 2022. I've found plenty of ways to use it to improve my practice, and save me time and money. I've not seen any evidence it can replace me or is close to being able to do so. It's just hype from a hype man.
2
u/LeoPelozo ▪It's just a bunch of IFs. 8d ago
This reminds me so much of the tv show Humans
https://www.youtube.com/watch?v=vfPTCOh9xqo
2
u/TaxLawKingGA 8d ago
Proof that scientists should stick to science. Of all the professions that will be impacted by AI, I am actually the least concerned with lawyers.
Doctors I am more worried about, mainly because the medical profession has made it entirely too difficult to become a doctor, which is why we have such a massive shortage. As a result, people have already become accustomed to doing their own self-diagnosis, and even when they can get appointments, it's usually with a PA or C-NP. Point is, they are used to getting medical care from non-MDs.
2
u/OnlineParacosm 8d ago
If this guy knew what he was talking about (which he doesn’t), we would be seeing a massive glut of doctors right now: too many doctors! What do we do with all these primary care physicians!
Those are the conditions you would need in healthcare for AI to come in and displace these people.
The opposite has happened: the rise of mid-levels like physician assistants and nurse practitioners has filled the gap left by a massive shortage.
Nothing would make Healthcare CEOs happier than saving $300,000 per doctor so that they can buy another yacht.
On the flipside, all this means for you is that you will have to scream at your AI PCP like you would at Comcast: "LABS! ORDER THE LABS!"
3
u/Larrynative20 8d ago
I am so sorry, but as an ethical AI MD I am not allowed to stretch your symptoms to get you qualified for your medication. It has been determined by the insurance AI that your old, out-of-date physician was in fact not being truthful with his ROXI SCORE for your condition. Therefore, the insurance AI and AI MD have determined that you do not qualify anymore. As I am an ethical construct, this ruling cannot be changed. I am so sorry and I love you deeply, but it is too important for society that everyone plays by the rules. It is not only for me to decide — but also for the insurance AI and societal standards set forth through your Medicare administrator.
2
u/lemon-gundam 7d ago
Yeah, so, I’m an attorney. In short: lol, lmao, no, dude’s high off his own farts.
2
u/Harvard_Med_USMLE267 7d ago
lol, that’s absolutely bullshit for medical school.
I’m an extreme AI enthusiast; I study its use in medicine. But the JOB is not going away. There won’t be fewer medical jobs in 5 years’ time, though the way you perform those jobs will likely be different, and the potential impact of AI on medicine is fascinating.
2
u/Exarchias Did luddites come here to discuss future technologies? 7d ago
This jerk is looking for attention.
2
u/searcher1k 7d ago
Founder of Google's Generative AI Team Says Don't Even Bother Getting a Law or Medical Degree, Because AI's Going to Destroy Both Those Careers Before You Can Even Graduate
If people follow his advice, the world gets a shortage of lawyers and doctors. Poor people die more often, and there's a bigger caseload than the legal system can handle, which leads to greater corruption and concentration of power.
2
u/UX-Edu 7d ago
I don’t think this guy understands how much work there actually is.
Doctors and lawyers are in incredible demand and I bet a lot of the time people who need their services simply don’t get them. Giving them new tools to help them be faster and more efficient is good but all it means is they get to the next thing quicker. Work doesn’t ever go away.
2
u/bigbearandy 7d ago
I was involved in a number of AI experiments in the medical industry during the second wave of investment thirty years ago. AI was supposed to take over the medical field then, and besides identifying co-morbidities better than most doctors, it hasn't made a significant dent since then.
I'm even more skeptical of this third wave, given the latest addition to the stack is the LLM. An LLM isn't creative; it might give you the correct solution for most medical problems, but that's a far cry from making patients better. Not yet. There's a wide array of things that doctors intuit and understand, being biological like their patients, that can't be trained or even expressed in a tangible form for an AI to learn from.
2
u/NobodysFavorite 6d ago
They would say that, wouldn't they.
Let's not forget that we're currently paying a tiny fraction of the cost of AI whilst it gets bankrolled by investor funds chasing market capture. Once they properly monetize this, a whole swathe of skilled human labour is gonna come back in vogue.
Also I don't see surgeons being replaced any time soon. Ignoring the regulatory part of it, it's a role so demanding that only 1% of humans can do it to a standard that makes the rest of us feel safe.
And there's a hell of a difference between AI executing a probabilistic model that looks like pattern recognition with reasoning and a doctor performing an expert professional role that requires real empathy based on hands-on experience medically and life experience generally.
I'm not a medical anything*** and I can see this.
*** Patient. I'm a patient.
2
u/spiritual_warrior420 6d ago
"Don't get a law degree because what's coming in the future is NOT going to be legal and we don't want any pushback!"
2
u/DatingYella 6d ago
The guy is stupid. He also thinks there's no point getting an ML PhD to cash in (which is correct). But he has very little domain experience to comment on medicine or law.
3
u/steak_z 8d ago
Wow, has this sub actually turned into r/technology? Blind pessimism has suddenly replaced the actual discourse. Sad.
4
u/waffles2go2 8d ago
yeah, because you know matrix math, you can predict the future of businesses?
I can't say "STFU" hard enough....
2
u/FateOfMuffins 8d ago
ITT: people who don't understand the timescale of things. Whenever a discussion on future careers pops up, none of you have the right framing to address it. No, it is not about what AI can do right now. No, I really don't care if you're a senior software engineer with 25 years of experience who says that AI will never replace your job but simultaneously says that "it can only code on the level of a junior right now". Anyone who says anything of this nature with absolute certainty can be safely ignored, because they have no idea what they're talking about.
Terence Tao, a month before the IMO, basically said they weren't setting up an AI IMO this year because the models weren't good enough. 1 month. Who are you guys to say what these models will or will not be able to do in 10-15 years?????
Get into the frame of mind of a guidance counselor who has to advise some teenagers what they should study. You want to be a doctor? Well even if you manage to get into med school, it'll be like 15 years before you become a doctor. Or lawyer. Or etc. Can you say with absolute certainty that AI can't do XXX in 15 years when ChatGPT is barely 2.5 years old? Ridiculous
Do not view these discussions from the point of view of "I'm currently a doctor with 20 years of experience and AI will never replace my job" - no one cares; that's not what this topic is about. Can you say with certainty that your children or grandchildren will have a career as a doctor? That's the question being addressed when talking about "which degree to get".
Anyways my pov is that you should just study what you want to. If AI replaces it all, then you're in the same boat as everyone else. If AI does not replace it, then you have a career doing what you love. Everything is so uncertain that you shouldn't just be chasing the bag. Because the only way you lose is if you spent 10 years studying something you hate for money, only to find out there is no money.
1
u/Feeling-Attention664 8d ago
I really wonder if the benchmarks on which generative AI exceeds humans are as relevant in actual practice.
1
u/reddfoxx5800 8d ago
I feel it will take longer than that. There will need to be laws that dictate whether AI can be used to submit court evidence or motions, and you'd still need a lawyer to explain what the AI is saying; otherwise anyone could point to its psychosis and faults as reasons it isn't fully trustworthy. It might decrease demand in both fields, but they'll still be needed.
1
u/sluuuurp 8d ago
Degrees have always been basically IQ tests combined with conformity tests. The purpose has never really been to learn things, especially considering liberal arts degrees. Degrees will still be useful for that purpose in an AI future.
1
u/cfwang1337 8d ago
Given the current pace of generative AI development, this advice is way premature. There will have to be humans with human expertise in the loop for a while, not to mention (in the case of doctors) the importance of having a physical presence to perform physical tasks.
1
u/Top_Community7261 8d ago
AI teams not realizing that they are going to be the first ones replaced by AI.
1
u/FireNexus 8d ago
lol. Lehman CEO has full confidence in the continued growth of the housing market in late 2007. 😂
1
u/mightythunderman 8d ago
What he is saying (which is, btw, a "snippet") is just bad advice. What if someone is just interested in learning? PhDs get stipends too. I honestly hate "think-for-me" advice like this; these people think the reader is an absolute idiot who has no clue how to handle themselves.
There is absolutely contradictory opinion to this as well in terms of the job market. Don't even take this comment at face value; read this stuff on your own.
1
u/StickFigureFan 8d ago
'Just give up, it's over, kids' is certainly... a position. Not a good one, mind you.
The day we don't need any human doctors or lawyers is the day everyone, including him, is out of a job. The courts aren't going to let a chatbot try a case or cross-examine a witness any time soon, nor will AI be allowed to prescribe medication or perform surgery by itself, and that's not even considering whether it could actually do any of those things correctly (it can't).
1
u/Average_sheep1411 8d ago
Still going to have lawyers; posher kids have to have jobs in something. It just means fewer positions.
1
u/BeingBalanced 8d ago
That's naive about the fact that regulatory/licensing frameworks would have to change drastically. That's not going to happen. You will still have to demonstrate competency. The curriculum will just change to include the use of AI as a new tool in the practice. A very powerful one.
The scientific calculator was invented a long time ago but math classes are still required for many degrees.
1
u/LifeguardOk3807 8d ago
Sincerely hope that young people don't take this garbage from these absolute charlatans too seriously.
1
u/CourtiCology 8d ago
Idk, medical degree seems iffy - that area has a ton of regulation - also, inside the human body, cameras are not always able to see what's happening - not every surgery can be done with a da Vinci robot.
1
u/Defiant-Lettuce-9156 8d ago
I don’t think he knows how much physical labor is involved in the majority of medicine. So robotics as well as AI would be needed to replace it, which is still (in my opinion) a while off.
1
u/zombiesingularity 8d ago
Highly specialized fields with many sub-specialties will always be dominated by humans. AI will be an assistant or pick up the low-hanging fruit. But humans will always be in the loop.
1
u/Other_Cap2954 8d ago
I think this is nonsense. It may be futile to practice, but having that knowledge will always come in handy. We cannot allow ourselves to be dependent on systems, because what do we turn to when there's an outage or failure? Besides, these degrees will still be held in high regard, so if you wanna pivot into another type of job you could, because it takes a lot of intellect to excel in these lines of study.
1
u/OrneryBug9550 8d ago
Great advice. Let's all just stop eating, because the sun is going to swallow the earth anyway at some point. So why even bother.
AI-Nihilism.
1
u/coinboi2012 8d ago
Idk man, my lawyer's quality has dropped significantly since AI. He used to understand the stuff he sent me, but now it's basically regurgitated directly from ChatGPT. When we go over it, it feels like he is reading it in depth for the first time himself.
He’s 100% faster tho
1
u/sbenfsonwFFiF 8d ago
He’s definitely not the founder of Google’s Gen AI team. He wasn’t even an exec
Crazy that his title/status keeps getting inflated and tied to Google
He doesn’t even work there, he has his own company now
1
u/surfer-bro 8d ago
Humans will be indispensable in these areas. We have our shared humanity, something that needs to be guarded in times like these
1
u/teddybear082 8d ago
They forget that lawyers make the laws (at least in the US where the vast majority of politicians are lawyers). As soon as the legal industry starts being cannibalized laws will pop up outright prohibiting the use of AI or making it unlawful to use AI to practice law without a lawyer's sign off. This person really thinks lawyers will stand idly by and NOT make laws protecting their own profession?
1
u/utilitycoder 8d ago
Any profession with licensing and boards is going to be very safe for a long time due to legal roadblocks and good ol' boy network effect. Now, programmers... because we never had certifications or licensing boards, we're screwed.
1
u/A1-Delta 8d ago
I am a physician-scientist with my feet in both medicine and biomedical informatics. I’m nowhere near the AI powerhouse this guy is, but when I see takes like this I generally attribute it to a very clever engineer who lacks the domain expertise to understand why medicine is going to be harder to automate away than they expect.
1
u/Agouramemnon 8d ago
The title is a misleading characterization of an article that is clearly (and poorly) written with a slant.
To me, the premise was that you should focus on what's holistically fulfilling rather than the dry pursuit of knowledge. Whatever your opinion is on the pace of AI development, this will be good advice for future generations.
1
u/Icy-Independence5737 8d ago
Never mind the reports of companies seeing zero or negative returns on their AI “investment”.
1
u/Agouramemnon 8d ago
The irony of so many here mocking AI based on a ragebait headline without actually reading what the quoted individual said.
1
u/DolphinBall 8d ago
I disagree. Abandoning law and medical degrees and leaving morality to something that doesn't have it is a terrible idea.
1
u/Showmethepathplease 8d ago
he has literally no understanding of law or medicine if he believes this to be true
1
u/beardfordshire 8d ago edited 8d ago
Yeah, I believe in the promise of AI — but I’m pretty sure we’re still gonna need human lawyers and doctors for another 100 years. Having the technology ≠ societal adoption. It also creates a tense situation where private AI companies would litigate prosecution & defense — or, alternatively, wield compute to disproportionately overpower open-source or less capable “AI lawyers”. Even if they’re run by technical-grade operators, by ceding legal knowledge and humans’ ability to creatively navigate it, we expose ourselves to incredible risk. I believe the same might be true for doctors, especially family practice… but I see more paths for solutions in medicine.
1
u/mister_hoot 8d ago
nooooooo don't aspire to have a high-paying professional career, you can't do that, AI will take it from you, no one will pay you, don't go to school, nooooooooo-
legal may be in danger at some point, but these guys are far too close to this thing to truly understand just how much the general population leans towards being luddites. no one who is currently over the age of 45 is going to trust a robot doctor at any point in their lives, that experience is too far removed from what they feel is normal. no one above the age of 55 today will hire an AI lawyer.
models reaching proficiency in these disciplines is one thing. marketing and saturation of these models as products and services is an entirely different animal, and i'm very sorry, but guys who have spent their whole careers helping to design and train LLMs have zero professional experience to comment on such things.
364
u/_bold_and_brash 8d ago
Should we just die