r/technology • u/nordineen • 22h ago
Artificial Intelligence 'It's almost tragic': Bubble or not, the AI backlash is validating one critic's warnings
https://fortune.com/2025/08/24/is-ai-a-bubble-market-crash-gary-marcus-openai-gpt5/
952
u/nappycatt 21h ago
So much stuff is gonna get clawed back by billionaires when this bubble pops.
607
u/null-character 20h ago
Well, billionaires got it right. None of them are using their own money; they're using their companies and the US government to invest. That way, if/when it shits the bed, they can just fire a bunch of people and stop giving raises "due to economic factors," so it doesn't really affect them much, since their stocks will eventually rebound.
139
u/MoffJerjerrod 20h ago
And the billionaires get a wealth tax.
1
76
u/Rebal771 19h ago
Quick question - if all of the low-level people were fired/replaced by AI, who are they going to fire at the time of the pop? 🤔
Just thinking out loud…
14
u/OnionFirm8520 18h ago
There is no evidence that AI is replacing human labor in significant numbers.
"[I]mplementation of generative AI in the workplace was still tentative [in mid-2023]. Only 3.7% of firms reported using AI in September 2023, according to the initial Business Trends and Outlook Survey from the Census Bureau. ChatGPT only hit the public in November 2022.
Adoption has jumped since, but only 9.4% of U.S. businesses nationwide used AI as of July, including machine learning, natural language processing, virtual agents, and voice recognition tasks, according to the census survey. The information sector—which includes technology firms and broadly employs about 2% of U.S. workers—has the highest uptake.
That signals AI could be playing a role in hiring decisions at companies leading the charge in implementing this technological advance, but it accounts for only a small portion of the labor force." Megan Leonhardt for Barron's, August 2025: https://www.proquest.com/docview/3237960389/fulltext/5E32D2F7F56D4F91PQ/1?accountid=14968&sourcetype=Wire%20Feeds (I accessed it through my school; not sure if it's available to others to view.)
10
u/Salamok 10h ago
There is no evidence that AI is replacing human labor in significant numbers.
I actually agree with this, BUT there does seem to be an awful lot of mass layoffs by CEOs who evangelize AI. They are using it as an excuse to stoke their stock prices while they gut their companies in the hope of getting lean enough to weather the coming economic storm. The work isn't actually being done by AI; they are trimming down to skeleton crews and doing very little work at all so they can stockpile cash and ask for large bonuses.
2
u/Sageblue32 4h ago
This. A lot of the work is just being rolled into other employees as companies cut down their workforce and pad their bottom line. AI is a great tool that helps a lot in many industries, but at least in its current form it's nowhere near reliable enough to replace entry-level positions.
49
u/Rebal771 17h ago
Your link is locked behind a paywall, so I can neither review nor confirm what Megan has claimed.
The timing of your statistics is out of sync, and the "minimization" technique employed in your statistical review turns a blind eye to the number of layoffs in the tech sector as a whole. (i.e., 9.4% of businesses is still a large part of the workforce when Amazon, Nvidia, Dell, and Intel each only count as "one business.")
As you note, AI adoption has grown as the tools have become more relevant to the jobs… however, they have not necessarily delivered any sort of major improvement. So jobs are being lost with no provable benefit or efficiency gain.
I know there is job loss due to AI because I, and many of my colleagues, were some of them. I've also read a number of comments in different forums and discussions about different job sectors claiming the same… so I do not believe these statistics can be fully accounted for until the current generation of the human-to-AI transition has completed. I think by April of next year we will see a much more accurate picture… but IMO, information from 2023 is essentially antiquated in terms of AI development in the workforce.
37
u/20000RadsUnderTheSea 17h ago
I think a combined view of the other person’s “AI adoption rates are low” and your “I and others were fired with a stated or implied reason being we were replaced with AI” is that companies are firing workers and either not replacing them or offshoring their jobs, but claiming AI is replacing them because that plays better with investors and the general public.
Consider: you are in charge of your company’s workforce. You realize you have too many employees for whatever reason, or a project is cancelled, whatever. If you fire people and give an honest reason, it looks like the company made a poor decision, stock and reputation drop. Or you lie and say it’s to replace them with AI. Investors swoon and the general public rolls with it because they’ve been primed to accept this as inevitable.
Or, you’re in charge of workforce and want to offshore for cheaper labor. Same deal, investors might go one way or another, but the general public would hate you for just admitting to offshoring. So you lie, and front that it’s about AI.
My understanding is that the data support this view. We’ve seen increasing offshoring, especially in tech, as well as low adoption rates, and layoffs. I think LLMs being called AI is just an aligned interest where investors want hype and big corps are enjoying using it as a fall guy for unpopular workforce shaping.
2
u/y4udothistome 10h ago
The real change will come when the robots start taking the jobs, but I'd figure that's around 2040. In Tesla's case, 2050.
18
u/MoonMaenad 18h ago
I swear what you just said is the reason Trump signed that EO to allow for 401ks to invest in private equity. To further that, I have concerns about shell companies being invested in. I am truly considering pulling my 401k. Billionaires steal my money enough.
5
2
0
47
u/AbleInfluence302 20h ago
In the meantime, we can count on more layoffs when the bubble bursts, even though the whole point of this AI bubble was to replace employees.
167
u/Lucas_OnTop 19h ago
Don't get it twisted: wealth inequality gets worse AFTER the bubble pops, because they still have the capital to scoop up cheap assets. A recession isn't an equalizer. This is a call to action.
25
u/stompinstinker 15h ago
Yup. The market will proceed to dump well managed, strong, value stocks too. They are going to pick those up on sale and still be better off.
3
u/AssassinAragorn 7h ago
A lot of the time their capital isn't liquid, though; it's caught up in the very stocks that are going to crash.
182
u/LurkingTamilian 21h ago
From the article:
“The market can stay solvent longer than you can stay rational,”
Is this a mistake or an intentional rephrasing?
76
u/KontoOficjalneMR 19h ago
Seems like an intentional rephrasing. Basically it's saying that the people funding the madness can outspend you, and if you're a small investor you must join in on the rally even though you know it's madness.
28
u/aedes 16h ago
This is intentional - think about what it’s saying.
These large companies have tonnes of spare money and capital to burn on supporting AI, even if it ends up being a complete waste. And they can afford to keep burning this money for longer than you can afford to pay attention to reality and bet against them.
38
u/g_smiley 20h ago
I feel it's misused from the original Keynes quote.
8
u/LurkingTamilian 20h ago
That's what I thought
15
u/g_smiley 19h ago
It's "the market can stay irrational longer than you can stay solvent." I learned it the hard way early in my career, shorting this one stock, can't even remember which. It was a real stinker but just kept going up.
12
u/wswordsmen 20h ago
That doesn't make sense. Staying rational is free and can't be directly affected by the market. The original quote's "stay solvent" explains that even if someone finds where the market is being stupid, they can't be guaranteed to earn a return from that, because the market will eventually put enough stress on their financial position to make them insolvent.
3
u/Saneless 16h ago
Must have been a mistake. It is now:
The market can stay irrational longer than you can stay solvent
2
u/WeakTransportation37 3h ago edited 3h ago
Wait- are you quoting the article or someone’s comment quoting the article? Bc this is what the article says:
“The market can stay irrational longer than you can stay solvent,”
The article quotes Keynes correctly- where did you get the misquote?
EDIT: sorry. apparently it was initially misquoted, and the article has been edited with no explanatory footnotes. They’re cowards.
3
u/MQ2000 16h ago
I think they edited it; it quotes it properly now: "The market can stay irrational longer than you can stay solvent"
4
u/LurkingTamilian 15h ago
Now I feel bad for the people in this thread trying to give it the benefit of the doubt.
1
u/WeakTransportation37 3h ago
Yeah- I just read the article for the first time 12hrs later and thought there was a collective misreading or something. Aren’t they supposed to footnote or preface the article with any edits?
1.2k
u/eleven-fu 21h ago edited 21h ago
LLMs are only capable of solving 'needle in the haystack' type problems, and they can only do this reliably if we already have clear, immutable definitions of what the hay and the needles are and only if the haystack contains nothing but hay and needles.
259
u/Buckeye_Monkey 20h ago
Thank you for this. I'm stealing this to explain data theory to people when they ask for vague system reporting. LOL.
110
u/Sptsjunkie 20h ago
Two things can both be true. Unlike crypto or especially NFTs there are far more use cases for AI and it is probably going to be significantly more relevant in the long-term.
And much like the 2001 dot-com bubble crash, there's been a ton of money thrown at bad AI investments and at crackpots who attach the term AI to very poor technology; that is going to burst and cost people a lot of money.
94
u/eleven-fu 20h ago
I'm actually arguing that LLMs are very powerful, in limited use-cases.
32
u/Amazing-Treat-8706 15h ago
Part of the issue currently is that many, many people conflate LLMs with AI. Meanwhile LLMs are just one iteration and type of “AI”. I’ve been implementing various AI/ML solutions for about 10 years now professionally. LLMs are interesting and useful in a lot of ways but they are just one of many tools in the AI toolbox. And there’s no reason to think LLMs are the peak / end of AI either. They are clearly very limited in many ways.
31
u/drizzes 15h ago
Doesn't help that these guys are selling LLMs as essentially fully autonomous AI that will solve all your problems.
-5
u/scopa0304 11h ago
Well, “Agentic” applications of LLMs are fully autonomous as far as most consumers are concerned. It’s a thing that can go out and do tasks for you and then report back using natural language.
2
u/Sptsjunkie 9h ago
I agree, and to be fair I think those use cases will expand over time. Like I was saying, I think there are real use cases for AI where it can create value and help people.
Which is something I have not seen with other trends. Crypto has some very narrow use cases, like the black market and countries with very unstable currencies. And NFTs have had virtually no use cases. I think AI has economic use cases, but like a lot of hot trends, a lot of money got thrown at it before the value it can eventually create caught up.
8
u/IAMA_Plumber-AMA 18h ago
Like protein folding.
34
u/eleven-fu 18h ago
Yeah, Cosmology, Astrophysics...
Stuff we have huge, high quality datasets on, where the main obstacle is not enough eyes.
9
u/YondaimeHokage4 16h ago
Lots of medical applications for exactly this. Some really cool stuff is already happening with AI in medical research and care.
3
u/urbansasquatchNC 7h ago
I think the main issue is that a lay person hears "AI" can help identify cancer, and they think that it's the LLM kind and not a specific image recognition ML program that was trained to identify pastries but is somehow also good at IDing cancer.
12
u/20000RadsUnderTheSea 17h ago
For what it’s worth, Alphafold 2 was decent at single protein folding models, but dropped to ~50% accuracy when trying to model the interactions between two proteins. Alphafold 3 might have improved a bit, but it’s still not really reliable AFAIK.
The 50% figure comes from a validation study a lab at my university did, I’m not sure if they published the work yet because it only wrapped up a few months ago. They were comparing Alphafold predictions to x-ray crystallography data for proteins.
1
u/-LsDmThC- 15h ago
Probably has a lot to do with a lack of multi-protein complexes in the training data.
2
u/Junior-Ad2207 13h ago
Sure, but what is important is whether LLMs can be used to reduce workforce expenses in order to increase profits. That's the only thing that matters.
-27
u/red75prime 19h ago edited 18h ago
What is the basis of your argument? An LLM is a general learning machine. Autoregressive training regime seems to have its limitations, but there are other training regimes. Finding needles is just one kind of functionality LLMs can learn.
ETA: -18 points and not a word in response. /r/technology at its finest, bwahaha.
9
u/Sosolidclaws 17h ago
Can you give an example of an LLM coming up with a truly novel solution / R&D, other than hyper-specific cases where we have massive amounts of data and are indeed looking for a "needle in a haystack"?
5
2
u/red75prime 2h ago
An addition. We are at "it's debatable" stage. For example: https://x.com/SebastienBubeck/status/1958198661139009862
Is it a truly novel solution, or rehashing of known techniques, or OAI employee making stuff up, or "it's not a real proof, it's a generated text that accidentally happened to be a proof!!!111"?
2
u/red75prime 15h ago
"Truly novel" is not that easy to define. But, no, I don't think that the current generation of LLMs (and large multimodal models) are there.
But it wasn't the brunt of my argument. I was arguing against "finding a needle in a haystack is all LLMs can do".
There's no established theoretical reasons for that.
0
u/Sosolidclaws 13h ago
Yeah, I guess theoretically not. But we’re also seeing their limitations (content generation + basic automation / finding needles), and that doesn’t seem to be solved just by scaling model size.
2
u/red75prime 12h ago
Probably yes, but there are many things going on besides scaling. Variations on mixture of experts. Self-reflection enhancing reinforcement learning. Attempts to introduce episodic memory.
Things are evolving. But, I guess, it will take time for promising approaches to percolate into user-facing models. Scaling of existing models guarantees gains (even if the gains aren't as big as expected). Bringing something new to industrial scale is more risky.
7
u/ABadLocalCommercial 17h ago
An LLM is a general learning machine.
No it's not. Full stop.
They don't just keep learning new stuff without retraining the model. They're just giant pattern predictors trained to guess the next token. That's not the same as, or even in the same realm of conversation as, general-purpose learning or AGI.
Autoregressive training regime seems to have its limitations, but there are other training regimes.
Autoregression is the whole reason these models work. There are many ways to improve the "limitations" like stacking RLHF, retrieval, or fine-tuning on top, but those are tweaks on the same foundation, not totally new training regimes.
Finding needles is just one kind of functionality LLMs can learn.
Framing it as learning is misleading at best. They don't "learn" anything after training, insofar as you consider model training "learning". Any new functionality or tools (file retrieval, web search, etc.) have to be developed and added in so the LLM actually knows how and when to use them.
So yeah, they're powerful, no one's denying that, but they're still very constrained. The way you're talking about them is how you end up with hype-train takes that make people think GPT is two papers away from curing cancer.
Enjoy them for what they are: good text generators that sometimes spot patterns better than people.
-5
u/red75prime 15h ago edited 14h ago
No it's not. Full stop.
The. Universal. Approximation. Theorem. (I have more full stops, hehe)
Read it.
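For reference, the classic statement (Cybenko 1989 / Hornik 1991, extended by Leshno et al. to any non-polynomial activation) says roughly: for any continuous f on a compact K ⊂ ℝⁿ and any ε > 0, there exist N, αᵢ, wᵢ, bᵢ such that

$$\sup_{x \in K}\left|\,f(x) - \sum_{i=1}^{N} \alpha_i\,\sigma\!\left(w_i^\top x + b_i\right)\right| < \varepsilon$$

To be fair to the other side: the theorem only guarantees such weights exist; it says nothing about whether training on finite data actually finds them.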
not totally new training regimes
You forgot reinforcement learning from verifiable rewards. It shifts the learned probability distribution from mimicking training data to getting the results.
that get added have to be developed and added in so the LLM actually knows how and when to use them.
The current generation of LLMs needs external training. OK, but prove that learning to learn can't be learned (using the appropriate tools of course).
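For concreteness, "verifiable rewards" just means something like this (a schematic sketch, not any lab's actual setup; all names here are made up):

```python
def verifiable_reward(problem: dict, model_answer: str) -> float:
    """Score an answer with a programmatic checker (exact value,
    unit test, proof verifier, ...) instead of similarity to
    human-written text. Mimicking the corpus earns nothing here;
    only being verifiably right earns reward."""
    return 1.0 if model_answer.strip() == problem["expected"] else 0.0

# Hypothetical training-loop step: sample an answer from the model,
# score it, and reinforce whatever actually passed the check.
problem = {"prompt": "What is 17 * 24?", "expected": "408"}
print(verifiable_reward(problem, "408"))  # 1.0
print(verifiable_reward(problem, "418"))  # 0.0
```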
6
3
u/PuckSenior 9h ago
No. People don’t really understand the .com bubble. They think it was caused by pets.com or something going bankrupt. But it wasn’t. Nearly all of the money invested in IPO websites was fine as it was highly speculative and people appropriately understood the risk.
What actually caused the crash was infrastructure, specifically fiber. Several companies started spending massive amounts of cash to build out fiber, expecting somewhat linear growth in the fiber market. But fiber doesn't work that way. Its bandwidth is limited by the transmitter/receiver more than anything else. There were several technology upgrades that increased capacity. Additionally, too many companies were laying too much fiber because they weren't properly looking at the market as a whole. That is why we STILL have dark fiber: all of the extra fiber laid ended up unused.
The same thing is happening with AI. Companies you've never heard of are building out a bunch of data centers to co-locate LLM processing. But if someone optimizes the LLM, or the market crashes, those companies are going to have a lot of server space and no customers. They will go bankrupt, and I don't know that people have properly analyzed this risk. The exposure is also diverse: there are property companies, maintenance firms, etc., all supporting these huge facilities, that will go bankrupt if they lose their customers.
2
u/Huge-Possibility1065 5h ago
Indeed, and this is exactly the bubble that Nvidia is riding right now.
These people are ignoring grid capacity to do this.
2
u/PuckSenior 5h ago
When we start signing deals for new generation plants that collapse, it's gonna be bad.
1
u/Huge-Possibility1065 3h ago
It's a shame that the idea of planning and modelling needs vs. capacity doesn't get much of a look-in vs. treating everything as a form of gambling.
-22
u/hopelesslysarcastic 19h ago
The fact anyone even thinks this form of "AI" is remotely comparable to crypto or NFTs in terms of functional utility… just shows how tech-illiterate this technology sub is.
It’s not God, but it’s easily the most transformative piece of technology created in a longgggg time.
People genuinely don’t understand that “AI” is an umbrella term of different technologies.
Traditional machine learning, and even before then, symbolic learning, are all forms of “AI”.
But they were narrow applications of it.
There was no such thing as a “general purpose model”.
That was not a thing before LLMs; there's never been anything remotely close to a general-purpose model.
0
u/wrosecrans 11h ago
I think "AI in the long term" is uncontroversial to say it will be useful. It's "Generative AI systems as they exist in 2025" that has been horrifically overhyped and desperately need to be reigned it.
-5
u/Guinness 8h ago
I think LLMs are huge. They’re just not AI. I can’t think of another tool I got so excited for. A tool that I am saving up to buy a bunch of video cards to use. Linux maybe? I built out a bunch of computers for my Linux projects. And now I’m building out GPU computers for LLM projects.
Crypto was ok, I enjoyed it from the perspective that I could make good money and it again involved Linux. But crypto didn’t captivate me like LLMs do. I have so many ideas I want to use them for and not enough time.
That frustration of wanting to tinker with them all day long is to me, a sign that there is something huge there. I haven’t felt this since I first dove into Linux.
And Linux ended up eating damn near everything.
74
18
u/mach8mc 19h ago
What if there's more than hay and needles? Can we prevent hallucination?
52
u/eleven-fu 19h ago
I'm not sure. Programming them to not return an answer at any cost would probably be a good starting point, though.
5
u/Susan-stoHelit 11h ago
But they aren’t built like that. You can reduce it some, but hallucination is built into the algorithm. It can’t tell the difference between truth and a hallucination.
10
u/SpicaGenovese 19h ago
Depends on the context.
I have a use case where I can easily validate for hallucinations, so I do. (I'm asking the model to choose a set of words from a text and return them as a comma separated list.)
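The check is trivial to automate, by the way. A minimal sketch of the kind of validation I mean (hypothetical names, not my actual pipeline):

```python
import re

def validate_keywords(model_output: str, source_text: str) -> list[str]:
    """Keep only words from the model's comma-separated list that
    actually appear in the source text; anything else is, by
    definition, a hallucination and gets dropped."""
    candidates = [w.strip().lower() for w in model_output.split(",") if w.strip()]
    # Tokenize the source once so checks are exact-word, not substring.
    source_words = set(re.findall(r"[a-z']+", source_text.lower()))
    return [w for w in candidates if w in source_words]

text = "The market can stay irrational longer than you can stay solvent."
output = "market, irrational, solvent, quantum"  # model invented "quantum"
print(validate_keywords(output, text))  # ['market', 'irrational', 'solvent']
```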
3
u/Susan-stoHelit 11h ago
Seems that could be done by a dozen tools that don’t hallucinate and are faster.
1
-20
u/moustacheption 18h ago
Hallucination is a made-up word for software bugs. They're bugs. AI is software. AI is a buggy mess.
11
u/ceciltech 17h ago
But it isn’t a bug, that simply is not true. It is the nature of the way they work.
-10
u/moustacheption 16h ago
i need to try that one next time a bug ticket gets opened on a feature i write. "That isn't a bug, that simply is not true. it is the nature of how it works."
6
u/ceciltech 15h ago
LLMs hallucinate because they are designed to predict the next most probable word, filling in gaps with plausible but often incorrect information, rather than accessing or verifying facts. This behavior is less of a bug and more of an inherent "feature" of their probabilistic nature, making them creative but also prone to generating false or fabricated content with high confidence. Causes include limited real-time knowledge, gaps in training data, ambiguous prompts, and a lack of internal uncertainty monitoring.
This explanation was supplied by Google AI. AI, know thyself.
2
u/Susan-stoHelit 11h ago
They’re right, you’re wrong. This is how LLMs work. It’s not a bug, it’s the core algorithm.
0
u/moustacheption 10h ago
i mean they're not, they are indeed bugs... and you can re-word it as much as you like, but they're still fundamentally software bugs.
4
u/Neat_Issue8569 15h ago
They aren't bugs at all, granted the term "hallucinate" implies a level of anthropomorphism that shouldn't be here, but putting aside the semantics, a "hallucination" isn't a bug. LLMs are autoregressive statistical models for token prediction, static in design and probabilistically weighted according to the abstracted syntactic relationships of the training dataset.
What this means is the LLM doesn't have a concept of truth or a concept of anything at all. It's just pushing out the most likely word to follow another string of words based upon the statistical probability observed in the training dataset. The result is a stochastic parrot that can say literally anything with the appearance of confidence, and because humans are lazy and like to anthropomorphise these bloated parrots, we use faulty terms like "hallucinate" when in reality there's no actual measurable difference to the LLM between what we consider a correct answer and an incorrect answer. Sure, WE can verify a claim made by an LLM by applying logic, reasoning, critical thinking skills, but the LLM can't, so in terms of what could be a measured variable tracking the "truth" as the LLM puts out obviously false statements, the answer is nothing.
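To make that concrete, here is the entire generation "algorithm" in miniature (a toy sketch: a hand-built probability table stands in for billions of learned weights, but the loop is the same, and notice that no variable anywhere tracks truth):

```python
import random

# Toy stand-in for a trained LLM: next-token probabilities derived
# purely from co-occurrence in a corpus. There is no table of facts.
NEXT_TOKEN = {
    "the":    {"market": 0.5, "moon": 0.5},
    "market": {"is": 1.0},
    "moon":   {"is": 1.0},
    "is":     {"irrational": 0.6, "made": 0.4},
    "made":   {"of": 1.0},
    "of":     {"cheese": 1.0},
}

def generate(token: str, max_len: int = 8) -> str:
    out = [token]
    while token in NEXT_TOKEN and len(out) < max_len:
        dist = NEXT_TOKEN[token]
        # Sample by probability alone: "the moon is made of cheese"
        # comes out exactly as confidently as anything true.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))
```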
18
u/Shachar2like 20h ago
I use it more as a 'free-text search engine' when I don't know how to phrase the question, but I don't consider its answers trustworthy.
What would it do in this case? Simply say what most people would say?
7
u/Cheapskate-DM 18h ago
Using vague conversational inputs for discrete commands or searches for verified information would be great. Unfortunately, it's gonna take a long time to filter out the digital asbestos created by these clumsy generative models.
0
u/Shachar2like 14h ago
Here's something I've heard others doing: for example, parents asking the AI for advice on managing their child in a specific situation.
They say the results were good. Again, I'm assuming it picks the "strongest signal" based on the internet/its training data, so it wouldn't be groundbreaking.
What about that?
3
u/Cheapskate-DM 14h ago
I'm thinking more professional settings.
Say you have a 1000-page technical manual for, I dunno, CNC five-axis machining. This is a device that can and will kill itself if you tell it to - so instead of being able to tell it to, the only AI function is a text parser for any good ol' boy to ask it where to look in the manual for information on this specific problem.
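i.e., retrieval only, no generated prose at all. Something like this sketch (the manual sections are made up):

```python
def find_sections(query: str, manual: dict[str, str], top_k: int = 3) -> list[str]:
    """Point the operator at the most relevant manual sections by
    plain keyword overlap. The tool only cites the manual, so it
    cannot hallucinate a feed rate or a reset procedure."""
    terms = set(query.lower().split())
    scored = []
    for section, body in manual.items():
        overlap = len(terms & set(body.lower().split()))
        if overlap:
            scored.append((overlap, section))
    return [section for _, section in sorted(scored, reverse=True)[:top_k]]

manual = {
    "7.2 Spindle alarms": "spindle overload alarm reset procedure ...",
    "3.1 Axis limits": "soft limit overtravel alarm on any axis ...",
}
print(find_sections("overload alarm on the spindle", manual))
# ['7.2 Spindle alarms', '3.1 Axis limits']
```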
0
u/Shachar2like 14h ago
The US government used it for that and it still made mistakes. The tool is untrustworthy.
2
u/ciprian1564 13h ago
I use it as an aggregator. For example, to help diagnose issues with my wife's health, we put everything into Gemini and took its advice in the short term until we could see our family doctor, then presented him with everything we saw and what Gemini spat out. It did help, because it helped us identify that the issue was something we otherwise wouldn't have expected from a Google search. I find AI a good starting point so long as it's not treated as the be-all and end-all.
3
u/sonofchocula 15h ago
There are a ton of non-sensational and reasonably efficient uses for LLMs. The same people making blanket statements like this seem to just be using commercial chat platforms and not building or solving anything of consequence.
4
1
1
u/Eggonioni 14h ago
You should also point out that the haystack is full of partial hay and partial needles too.
1
-5
u/thegnome54 16h ago
This is absolutely not my experience. LLMs are incredible for helping you move into new spaces of inquiry and learn skills. They can give you a personalized overview of how to approach a problem, suggest the kind of language to use in traditional searches, and are excellent at completeness checks (“anything else I should be considering?”)
I use LLMs daily and they have supercharged my creative processes.
-69
u/eras 21h ago
What do you mean by this?
LLMs also seem quite able to apply the needle they find to your particular use case: in software development, for example, the programming language, the data structures being used, variable names, general coding conventions, etc.
Which is great, because in software development we don't solve novel problems every day. Instead, we solve tiny already-solved problems many times over, and sometimes this, as a whole, might add up to a solution to a novel problem. LLMs are pretty effective at finding solutions to those tiny already-solved subproblems.
Quite likely similar situations can be found in other domains as well.
48
u/SparkyPantsMcGee 21h ago
You’re quite literally illustrating his point.
-8
20h ago
[deleted]
20
u/I_Think_It_Would_Be 20h ago edited 20h ago
He's not saying LLMs have problems finding solutions for programming problems. He's saying that finding a solution (the needle) to a programming problem (a clearly defined space in the haystack) is what LLMs are capable of.
You're not refuting anything. You're not adding anything. The exchange you started basically looks like this to us:
Him: "Fishing rods are good at catching fish if there's fish in the water."
You: "What do you mean? When I use my fishing rod to catch fish in waters with plenty of fish, it works really well!"
Yeah, no shit.
ps.:
What's the last problem you've run into in programming, that an LLM couldn't handle?
Any problem that requires a very large context window, because the ability to find the proper solution degrades with it. Large code bases with dependencies, a lot of accounting for edge cases, multi-step processes, issues that show up in problem space X but are actually created in problem space Y etc.
There are lots of programming tasks LLMs don't excel at.
3
u/MrPoon 19h ago
I am an active researcher in reinforcement learning, and LLMs can't do shit. Worse actually, they produce functional code that does the wrong thing, which could easily fool a non-expert.
1
u/I_Think_It_Would_Be 19h ago
I am an active researcher in reinforcement learning, and LLMs can't do shit.
I mean, that's too hard in the other direction. I've seen LLMs do things, useful things, but they're a tool that's easily misused and because it always produces an output it can seem competent even when it's not (like you said). It takes somebody with real knowledge to use it properly.
25
-38
u/generalright 20h ago
What a lazy and boring definition. You can do way more than ask it to solve problems.
26
u/eleven-fu 20h ago
I think that your definition of 'problems' is too narrow.
-32
u/generalright 20h ago
Take, for example, creating charts, graphs, or newsletters; asking it to do a math problem; or, in the next few years, having it produce a movement action. It's not just about LLMs. People are so quick to act like "they told us so" about new technology they barely understand.
14
u/Gingingin100 20h ago
Three of those are literally writing code what are you talking about
-18
u/generalright 20h ago
Not everyone writes code buddy, regular people can use AI to do that
8
u/Gingingin100 20h ago
Okay, to repeat so you can understand
Charts ->LLM is writing code
Graphs ->LLM is writing code
Maths ->LLM is writing code
That cleared up for you?
3/4 of the things you mentioned are, in fact, the same problem
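Ask a chatbot for a bar chart and what it actually produces is ordinary plotting code, something like this (illustrative; numbers made up):

```python
# Typical shape of an LLM's answer to "make me a bar chart of
# quarterly sales": plain matplotlib code the user then runs.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
sales = [120, 95, 140, 160]  # values taken from the prompt

plt.bar(quarters, sales)
plt.title("Sales by quarter")
plt.ylabel("Units")
plt.show()
```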
-4
u/generalright 20h ago
Oh yeah? And everything I do in life is just neurons and synapses firing. See how easy it is to not prove a point by reducing actions to their fundamental building blocks. Sybau.
8
u/Gingingin100 20h ago
You literally responded to someone by saying that those are unique problems when they're not.
You quite literally just chose the worst possible things to use as examples. They're examples of the bot writing code then passing them to a graphics library
Why not choose more things similar to actual word composition? Why did you just choose 3 examples of the same thing?
Sybau
Ooh we got a spicy one who can't swear at me in full words🥹
0
u/generalright 20h ago
Because it's not the code that is important, it's the fact that it is solving my HUMAN PROBLEM by saving me time and effort. I couldn't care less if it's solving your coder's definition of a problem. AI saves me time. That's why it's useful.
-4
-2
15
49
u/Sorry_End3401 19h ago
They already won by selling venture capitalists on AI theory using what is basically a playbook from Musk: over-promise, under-deliver. Just hype, hype, hype, with bad results a few years later. The money is gone and the public is the bag holder.
44
u/BroForceOne 17h ago
It’s almost tragic
Is it though? It makes me optimistic for the future that humanity is pushing back against the current tech billionaires' manipulative wealth-transfer plot, considering how badly we fell for their last one with social media.
17
u/Noblesseux 13h ago
I think it's less that humanity is "pushing back" and more that these people are stupid and don't know how to run businesses. The public just kind of watched them do all this nonsense and in some cases straight up participated by shouting down the people who said that a lot of these promises made no sense and didn't reflect reality.
The entire tech industry for the last decade or so has been a constant cycle of booms and busts based on products that barely make any sense. Uber's business plan made no sense. OpenAI's business plan makes no sense. The whole stated promise of NFTs and cryptocurrencies as anything other than gambling makes no sense. Hyperloop made 0 sense. Tesla's valuation still makes no sense. I'd go as far even as saying that self driving cars as a mass product make no sense.
But we've been in this era where these people never have to actually justify WHY people should be giving them billions when they have no long term sustainable plans other than vague promises that everything will work out somehow based on some idea they ripped off a movie. Like we collectively will ignore actual engineers and people with logistics backgrounds to listen to what a drop out who just happened to become a CEO has to say.
9
u/designthrowaway7429 16h ago
I like this take. I’ve been feeling similarly lately, the growing backlash is giving me some hope.
8
u/BeneficialNatural610 12h ago
Perhaps the CEOs shouldn't have laid everyone off and barked on about how disposable we are.
13
u/Informal-Armadillo 21h ago
I believe there's a distinction between knowing the solution to a known problem and applying it correctly in various situations. Solving all the core problems is one thing; attempting to apply them in complex existing codebases without refactoring the entire code base is where LLMs are lacking. This is not an insurmountable problem, but it is big enough to be a large obstacle to their overall use. It does not make LLMs/ML useless; it means we need to find ways to improve our developer(user)-to-LLM workflows.
3
u/GoochLord2217 6h ago
I am all for AI to a certain extent. What I really would like to see gone is the AI imagery industry. A lot of harm is coming out of it even right now. Back when you could easily tell it was fake, the shit was kinda funny, but people are falling for it now, especially the elderly, who are more susceptible to things like scams.
7
u/SheetzoosOfficial 9h ago
Want a free and easy way to farm karma?
Just post an article to r/technology that says: AI BAD!1!
7
u/ShadowbanRevival 17h ago
Lmfao, this guy said that LLMs would never be able to get a silver in the Math Olympiad, and literally THE NEXT DAY Google and OpenAI got gold. This dude has been wrong so many times; he has to be contrarian or he has nothing else.
4
u/ghoztfrog 13h ago
If you can take the same test concurrently a million times is your best result even valuable?
1
u/HertzaHaeon 1h ago
It seems to me he's been more right than wrong.
Do you judge Sam Altman and other AI shamans by the same standards? They've been plenty wrong too.
Only one party is asking for trillions and ruining society and the planet to get it.
5
u/sobe86 18h ago edited 17h ago
So this article is singing the praises of Gary Marcus. As someone who used to be a fan of his, let me give an alternative perspective.
Gary Marcus strongly believes in "symbolic" approaches to AI, and LLMs are in some ways the antithesis of this. Gary (along with Noam Chomsky) has been one of the most vocal skeptics of the LLM / scaling approach for the last decade or so. The problem is, basically all of their predictions along the lines of "LLMs will never be able to do xyz, because you need symbolic AI for that" have been proven wrong. He has never admitted this, and instead of doing what a good scientist would do, he has (IMO) absolutely doubled and tripled down on the idea that symbolic AI is what should be pursued, and never adjusts his confidence even an iota toward the possibility that he could be wrong. I reckon if all possible signs were pointing at AGI being 6 months away, Gary Marcus would be writing articles saying that AGI is still 50 years away. For this reason I think he's not a person worth listening to; he's basically a stopped watch on this topic. He will be nay-saying all aspects of current AI approaches regardless of what is happening in reality.
1
u/HertzaHaeon 1h ago
From what I've seen, little of Marcus' criticism of LLMs is based on symbolic AI being better. Most of his criticism is, from what I can see, independent of whatever will bring us the ~~second~~ ~~third~~ fourth AI rapture.
Marcus isn't trying to sell me trillions of dollars of overhyped LLM farms ruining the planet and society. Not yet, anyway. Some of his criticism is dubious or wrong, sure, but considering the other side's "AGI soon" hype, I can deal with Marcus' misses while reading his hits, because it's sorely needed criticism and skepticism that few others seem to be engaging in.
-6
u/creaturefeature16 16h ago
What a weird way of saying he's been right all along and continues to be.
6
u/sobe86 16h ago
He has been demonstrably wrong many times now about "limitations" of what LLMs would be able to do. He did not adjust his stance at all based on that. He's a useless commentator in my opinion because he is purely ideological on it. No matter what happens he has already made his mind up and will not reassess.
2
2
2
1
u/Southern_Wall1103 12h ago
All the companies that lost all their know-how and wisdom to gen-AI exuberance…
What are they teaching in CEO school anyway? Didn't they learn from 2008-09, when GM "saved" lots of money by laying everyone off… and when the market turned they had no one to design and build cars?
1
u/barf_the_mog 7h ago
I’ll believe in AI when I get good movie and music suggestions… as of right now it’s pretty useless other than boilerplate.
1
u/GabeDef 6h ago
Not sure I understand how this bubble bursts. If the goal is to automate everything, that will take years, and years will require hardware upgrades as they go. Seems more like a giant endless cycle.
1
u/DanielPhermous 6h ago
Bubbles have nothing to do with the technology or its application. It's about the level of investment and return. Right now, LLMs are not profitable, and vast amounts of investment money are being piled on to cover the shortfall. That's unsustainable.
1
u/NearsightedNomad 3h ago
By early August, Axios had identified the slang “clunker” being applied widely to AI mishaps
Now that’s just lazy reporting right there…
0
u/BrowniesWithAlmonds 1h ago
What backlash? AI is everywhere, and it's getting easier and easier to connect to it. There's no backlash; just like the internet, it is here to stay and will continue to evolve. 20 yrs from now it's going to be as normal as breathing.
1
u/MightB2rue 13h ago
This guy has been saying the same thing since 2012. Maybe he's right in 2025, but if you made any portfolio decisions based on his "warnings" in the last 13 years, then your portfolio missed out on some major returns.
From the article:
"So if Marcus is correct, why haven’t people been listening to him for years? He said he’s been warning people about this for years, too, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn’t reach the moon."
1
0
-14
163
u/TheMatt561 19h ago edited 8h ago
Even if the bubble bursts in terms of large companies using it, the cat's out of the bag on scammers