r/technology 21h ago

[Artificial Intelligence] Why AI Experts Say We Need a Radical Rethink of the Technology

https://www.inc.com/rebecca-hinds/why-ai-experts-say-we-need-a-radical-rethink-of-the-technology/91229674
38 Upvotes

72 comments

83

u/WCland 20h ago

In an NYT op-ed, the authors pointed out how US companies are betting everything on the magic bullet of AGI, while in China they are deploying very application-specific AI. The result seems to be that we're getting low ROI, while Chinese companies are seeing real productivity gains.

59

u/Elongatingpolymerase 19h ago

Which seems pretty fucking obvious. AGI would essentially be a technological brain capable of learning and thinking for itself. That's a massive technological leap compared to application-specific models. But tech bros are raising billions off of their grift.

18

u/ohnofluffy 17h ago

Am I wrong, or is Altman's entire pitch that this is solely a power problem? What first-day-in-CompSci nonsense are people investing in? To the point where it's fucked the market…

12

u/Elongatingpolymerase 13h ago edited 3h ago

Sure, I guess Altman may be correct in that infinite power would let a computer brute-force anything, but that is just an insanely dumb point and a sign that they don't know how to make an efficient AGI system.

2

u/MrPloppyHead 4h ago

Although, they have stockpiled an awful lot of data under its guise.

They have managed to install what would normally be described as malware on millions of devices, scraping data.

2

u/blipblapblopblam 3h ago

I've got an AGI that runs on 20 watts of power. Takes a bit of time to train, but then all good. Needs 8 hrs approx to rebalance weights each night. Anyone interested? I'll take offers over a billion.

2

u/The_Value_Hound 2h ago

Slavery was outlawed dude.

8

u/Lews_There_In 14h ago

To the point where it's propping up the US economy, and becoming a bubble on par with the .com bubble.

11

u/QuickQuirk 14h ago

And Nvidia is giving itself a market cap of trillions by selling the GPUs required to train these massive, expensive models rather than the small application-specific ones, which can often be trained on your laptop (see the sketch below).

It's a deliberate, venture-pushed grift. It should be no surprise to discover exactly how many billions in shares Nvidia senior leadership has sold in the past year.

3

u/el_muchacho 6h ago edited 6h ago

More exactly, instead of spending trillions on hardware trying to brute-force the problem with no hope of solving it, they should fund hundreds of millions in actual research involving researchers and mathematicians. AKA brainpower instead of hardware. This would be a far more effective use of money. The only problem with that is research doesn't make your stock value rise, which is ultimately the goal of this grift.

2

u/QuickQuirk 6h ago

Absolutely. A frustration of mine: so many brilliant ideas by small-scale researchers are not being pursued or funded, because everyone is all in on LLMs. More specifically, LLMs cosplaying as AGI.

4

u/Throwaway1285837 9h ago

I hate how we have a mythological reverence for AGI

1

u/Elongatingpolymerase 8h ago

Intelligent people don't; greedy tech bros grifting off dumb-fuck CEOs love it, because they are selling an idea to people too dumb to understand how absurd an idea it truly is in the near term. Frankly, what would AGI gain us as a species? If it's smarter than us and self-replicating, it would quickly realize a lot of us are a fucking cancer on the world to be eradicated. If dumber than us, then it doesn't really help, and a specific task-oriented AI model would be easier to make and more efficient.

6

u/Vellanne_ 16h ago

The technology can't even 'learn' to count how many R's are in strawberry. They have to manually patch the system so it doesn't appear to be a complete failure.

4

u/Elongatingpolymerase 15h ago

Yes, the worst part is the models will quite confidently tell you total bullshit and a lot of people don't care to fact check.

-11

u/two_hyun 19h ago

??? We are seeing application-specific models in the US. But AGI is the holy grail - the way I see it, the US is leading the world in research toward AGI, not "betting everything on the magic bullet".

3

u/radioactivecat 16h ago

Except no, we're not. We're leading the market in BS about LLMs, which will certainly not lead to AGI.

3

u/socoolandawesome 20h ago edited 19h ago

Do you have any examples of this? Because they have generalized models in China. They also don't have access to the best NVIDIA chips, making it harder to scale intelligence.

15

u/WCland 19h ago

7

u/socoolandawesome 19h ago

I appreciate the gifted link.

I'd argue that the author of that piece gives a pretty weak argument on China vs. US priorities/integration and each country's people's respective views on AI. Plenty of US companies are pushing current AI and trying to integrate it into our daily lives; they are not ignoring current models' applications in favor of AGI, they are doing both.

China is hindered in its scaling due to export controls but still prioritizes this, and I can recall their leaders speaking of the importance of AGI; for instance, DeepSeek's CEO has said their primary goal is AGI and that it'll happen in 2-10 years. They actively try to smuggle more NVIDIA chips into the country to scale toward better intelligence.

The author brings up the medical diagnostic ability of Alibaba's app, but the medical capabilities of GPT-5 were one of its main selling points. AI is also being integrated directly into medical practices in the US, which the article said China is doing.

The author also talks about AI competitions with rural farmers being a big part of why the Chinese like AI. That seems like a weak point on its own, and I can't imagine that US farmers would have the same enthusiasm for participating in tech competitions. AI is, however, also being integrated into farming here too.

I think there is a huge difference in mentality between Americans and Chinese towards tech/AI, but I'm not sure it has much to do with what the author mentioned in terms of how it's being pushed in each respective country. They seem to embrace tech and technological progress over there more than we do, and a lot of Americans seem to have a dislike/distrust of tech and Silicon Valley. Also, if the government is pushing something over there, I'm not sure it's wise to have dissenting opinions or push back on that.

They also have the advantage of central planning, which we don't, for pushing AI initiatives. But I definitely don't agree that we are ignoring current models' applications in favor of AGI.

-7

u/TheGoldenCompany_ 15h ago

Lmao what are you smoking. Companies all over are embracing AI. People may not like it but it’s happening.

5

u/socoolandawesome 15h ago

You sure you actually read my comment?

7

u/kainzilla 11h ago

"Generalized models" are not AGI. AGI means "actually thinks," not "fancy autocomplete."

0

u/socoolandawesome 11h ago

I said generalized because he said application-specific AI, aka specific-use models; nothing to do with AGI.

Separately, AGI does not mean "actually thinks." AGI means artificial general intelligence. A lot of people's definition of this, including mine, is being able to perform all intellectual and computer-based tasks an expert-level human can. How they do it, including whether they "actually think" (whatever that means), is irrelevant.

1

u/Moist-Operation1592 13h ago

many small models and systems as specialists vs a bunch of generalists hmmmm

1

u/SemanticSynapse 16h ago

I'm of the mindset that for many applications we can scaffold the general AI to have it perform as a specialist, but we do need to start approaching these as systems that work together, rather than this all-around, context-shared interface.

61

u/BobbaBlep 21h ago

Why? Because everyone is catching on that it's useless hype being crammed ferociously down our throats?

25

u/DarthBuzzard 20h ago

Generative AI is very overhyped, but it's not useless. It has genuine use cases.

9

u/Optimoprimo 19h ago

Yeah, the problem is capitalism pressures new innovations to be rushed and overpromised for quick profit.

3

u/Fuddle 14h ago

This is pets.com all over again

https://en.m.wikipedia.org/wiki/Pets.com

The internet was predicted to be the future back in the '90s, but a TON of companies just slapped .com onto shitty business ideas, got millions in VC money, and even had IPOs. Pets.com was just the poster child for how crazy this all was.

8

u/Zookeeper187 20h ago

Sure. Even delivering 20% of what it promises would be a miracle at this point tho.

3

u/subcide 14h ago

The problem is if it delivers 20% of what it promises, there's no way investors will recoup their investment capital.

I mean it's not a problem to me, but it is to them.

2

u/krileon 20h ago

I agree, but those use cases you can count on probably one hand. Certainly not so many that it can replace giant swaths of the workforce, lol.

6

u/Orionite 19h ago

You need to look at this whole "replacing workforce" thing differently. It's not so much that an entire position will now be filled by AI, but that the professionals who are already there can now be a lot more efficient and productive. So you need fewer of them.

12

u/treemeizer 18h ago

My experience is that it has the opposite effect with a majority of those using it.

This relates to the world of research and problem solving. Over the course of a year, colleagues who would once do some legwork in solving a problem now simply ask ChatGPT and copy the output to me, or to our departmental group, and the result is a lot of wasted time - I must now break focus to read and refute bullshit, finding myself arguing against a unique form of confident ignorance. "Have you tried this...", followed by twelve pages of emoji-bulleted lists rife with fundamentally wrong or contextually irrelevant fantasies.

It has been extremely demoralizing, to say the least. Especially since these same habits make the offenders immune to actually learning the core principles necessary to solve said problems. Then when I actually solve it, they come away with more false confidence.

My boss just last week said, "It's getting better, I rarely see hallucinations anymore," to which I had to remind him that the day before, ChatGPT made him think port 80 was used for HTTPS... something he wouldn't let go of until I fucking found reference material refuting it.

If ChatGPT told these people screwdrivers were the preferred tool for hammering nails, they'd ask me where one finds threadless Phillips-head nails.

-2

u/Orionite 18h ago

Sad haha. In fairness, that's as much a people problem as it is one of technology.

2

u/Ok_Value5495 17h ago

Except who are often considered the less efficient and productive employees, through no real fault of their own? Junior employees who need to replace the ranks of senior ones down the line. Of course, the C-suite only sees a drop in labor costs and gets rewarded for this short-term thinking.

5

u/socoolandawesome 19h ago

Maybe Reddit believes this, but these companies' user bases and revenue continue to explode.

-6

u/two_hyun 19h ago

The owner of Labubu made hundreds of millions.

3

u/nicuramar 20h ago

It’s definitely not useless at all. 

1

u/btoned 17h ago

YOU SHUT YOUR MOUTH AND KEEP BUYING QQQ

5

u/Southern_Wall1103 18h ago

Because unrealistic expectations were allowed to flow down from CEOs?

The first mistake was marketing this as ANY kind of "intelligence"… They should have stuck with calling it a Large Language Model, because that's all it is… It does not know context or spatial reasoning… C-suites didn't bother learning the use-case-specific caveats… Not bad for coding or summarizing, but lousy when it really, really matters.

Good luck to all highly invested in AI; even MIT has a study showing that gen AI flops in corporate America. And now Meta is pausing hiring after blowing so much on recruiting talent.

5

u/NorthernCobraChicken 17h ago

Everyone jumped on the bandwagon way too fast.

This is a clear example of putting the cart before the horse.

There are some interesting things being done with it, for sure, but in the age of "everything right now," too many people wanted it to be the golden egg, when really OpenAI was a pigeon in a goose disguise.

Especially in the coding space.

There was such a huge influx of vibe-coded applications that they flooded the very training data the code-generating tools learn from. So now, because of incompetent idiots who don't actually know how to develop an app properly, it's garbage in, garbage out: the AI is recycling its own insecure and often deprecated code.

3

u/NanditoPapa 11h ago

I kinda feel we should start writing off any article that uses the term "godfather of AI" as unserious.

26

u/Prior_Coyote_4376 21h ago edited 21h ago

I've been screaming this for years, and there are very angry Reddit threads to that effect here.

I feel like I contributed to helping build nuclear weapons right before the Nazis took over. Tech gave the illusion that it was going to be a diverse type of capitalism that would steadily fix itself over time.

Turns out, it’s always been bros who never got laid. Grok is just automated 4chan posting. They’re selling snake oil and the few real ones get weeded out so that Elon and Mark rise to the top.

No one even needs these YC-types to make money. They celebrate their own failures while firing you without notice. If you’ve seen Gavin Belson, you’ve seen them all.

Hinton reached for a striking analogy: the only case of a less-intelligent being controlling a more-intelligent one is a baby controlling its mother, thanks to deep-rooted maternal instincts.

This isn't some radical breakthrough. This is just weird. Even framing this as control opens a complex discussion about intelligence that we are not resolving in the next few decades just because we can do math, like, faster.

6

u/CondiMesmer 14h ago

I feel like I contributed to helping build nuclear weapons right before the Nazis took over.

Bro, have you not seen the colossal failure of GPT-5's launch? AI has already stalled. It's not getting much better right now. It was never intelligent, and all the braindead science-fiction predictions relied on it being conscious, which it's not. LLMs are auto-completes; that's how text generation works. AI agents have a 95% failure rate and are crazy expensive to run.

I cannot reiterate this enough: AI has no intelligence. That's just a fact; it's not how LLMs work. We do not even have a hypothetical path toward AGI. We've already hit a wall. AGI requires intelligence, and we don't even have any concept of how to create actual artificial "intelligence." It's like warning about flying cars when we haven't even invented the wheel yet.

1

u/Moth_LovesLamp 20h ago

I feel like I contributed to helping build nuclear weapons right before the Nazis took over. Tech gave the illusion that it was going to be a diverse type of capitalism that would steadily fix itself over time.

We are putting all the fuel, resources, and engines into the SpaceX rocket and aiming for Alpha Centauri, not considering it could blow up in our faces.

I feel dumb for being pro-AI for so long.

5

u/Prior_Coyote_4376 20h ago

I really do wish I hadn't been as harsh on the people who were just telling me I was the Farnsworth meme, explaining how the Death Ray is a neutral tool because it depends on how you use it, and science doesn't like value-based/normative claims.

Fuck, yeah that wasn’t a good one. My bad guys.

3

u/Ragnagord 20h ago edited 20h ago

Eh, as a principle I agree with that. 

It's just that putting a death ray in the hands of profit-maximizing venture capitalists is a bad idea, actually.

-1

u/xynix_ie 20h ago

That's ok. I sell AI infrastructure. I knew this was mostly bullshit. I was saying it a year ago on here and chuckled while getting downvoted to oblivion.

It does make search engines a lot better, especially with customers' own datasets. Ya know, good data.

The funniest thing to come out of all of this, though, is the word "hallucinations." For bad data 😆 I get a chuckle every time I hear that one. Oh, it's hallucinating hahaha

0

u/socoolandawesome 19h ago

Plenty of non-Grok/Meta AI is very useful. Namely Google, Anthropic, and OpenAI.

2

u/Prior_Coyote_4376 19h ago

I’m more concerned with the costs than the profits.

0

u/Run_Rabbit5 20h ago

The sun doesn't even have a brain, but it still controls us, waking us up every morning.

2

u/krileon 20h ago

I wake up just fine with my brain not shutting up about random bullshit an hour before my alarm is supposed to go off. :(

0

u/Prior_Coyote_4376 20h ago

Bold of you to assume the sun wakes me up instead of the need to work

2

u/Festering-Fecal 16h ago

We need a lot of things, but unfortunately, if it's not profitable, it's not happening.

2

u/Mysterious_Luck_1365 14h ago

One good thing (among others) that might come out of this crash is excess energy. Now, I know that energy generation for AI servers isn't all clean, but some of it has to be. If many of the companies that invested in energy infrastructure go bankrupt, maybe we can buy that infrastructure for pennies on the dollar and put it to good use?

2

u/TDP_Wikii 7h ago

AI should be replacing monotonous/tedious jobs, not creative jobs.

There are blue-collar unions like the ILA and Teamsters who are blocking technology from automating dangerous, menial, soulless jobs that should be automated, leading tech bros to rob creatives blind.

Humanity is so fucked; humans are fighting for the right to do soul-crushing labor while advocating for AI to replace the arts just so they can generate their big titty waifu.

4

u/Berova 19h ago

The hype is real, the FOMO by businesses big and small, tech and non-tech, that's led to all the craziness is real, and the parallels with the internet boom and bust are real. When big tech and company are throwing hundreds of billions, collectively trillions, of dollars into AI out of an emotional response (panic and fear), and emotion has overtaken reason, you know there is a problem. Given the scale involved, the consequences could be catastrophic for the companies involved, the economy, and society.

2

u/blofly 16h ago

Lemme guess... because it sucks and doesn't perform as advertised?

2

u/jebuz_take_the_wheel 21h ago

Doesn’t need an expert to figure that out.

1

u/Niceromancer 20h ago

Cause the bubble is about to pop.

1

u/Smithy2232 21h ago

What does AI say?

13

u/ErinDotEngineer 21h ago

"Sounds like you are really hitting your stride, would you like me to start putting together a business plan to help you get started?" /s

1

u/y4udothistome 20h ago

This is one race where I think coming in last will pay off.

2

u/Zookeeper187 20h ago

Apple?

2

u/y4udothistome 20h ago

Exactly. Let everybody else spend $1 trillion, then come in and pick up the pieces for a fraction of the cost.

1

u/CondiMesmer 14h ago

Godmother of AI

Jesus Christ, can we stop with this? It's getting so weird.

We don't need to rethink anything; we need AI products that actually deliver on what they're advertised to do.

If you're going to doom and gloom saying AGI is coming, then show us a product that can correctly count the number of R's in strawberry.