r/TrueReddit Official Publication 7d ago

Technology: GPT-5’s Rocky Launch Underscores Broader AI Disappointments
Is AI headed toward the trough of disillusionment?

https://spectrum.ieee.org/gpt-5-trough-of-disillusionment
73 Upvotes

28 comments

u/AutoModerator 7d ago

Remember that TrueReddit is a place to engage in high-quality and civil discussion. Posts must meet certain content and title requirements. Additionally, all posts must contain a submission statement. See the rules here or in the sidebar for details. To the OP: your post has not been deleted, but is being held in the queue and will be approved once a submission statement is posted.

Comments or posts that don't follow the rules may be removed without warning. Reddit's content policy will be strictly enforced, especially regarding hate speech and calls for / celebrations of violence, and may result in a restriction in your participation. In addition, due to rampant rulebreaking, we are currently under a moratorium regarding topics related to the 10/7 terrorist attack in Israel and in regards to the assassination of the UnitedHealthcare CEO.

If an article is paywalled, please do not request or post its contents. Use archive.ph or similar and link to that in your submission statement.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

23

u/theclansman22 6d ago

AI as an industry is discovering the law of diminishing returns, after years of telling us that models would improve “exponentially” indefinitely.

11

u/turningsteel 6d ago

Anyone who works for an AI company knows that everything said by the CEO and marketing shills is bullshit. They’re just riding the hype because they know consumer sentiment is driven by what they say. In short: they’re laughing all the way to the bank.

2

u/Coalnaryinthecarmine 5d ago

Anyone with a passing familiarity with physics ought to have known exponential growth via increasing inputs was never going to work.

27

u/plungingphylum 7d ago edited 7d ago

Yes.

Apparently I need to add more text for the automod. So: this is not leading to amazing new efficiency or cash streams. AI is mostly used to create slop, let students cheat, and generally increase confusion and stupidity. The good use cases (“fix this email”) are just not worth it, especially given the actual expense of these models.

5

u/BeeWeird7940 6d ago

If you need to turn your qPCR Ct values into fully analyzed graphs with statistics, you can do it with Excel and GraphPad Prism in about three hours. Or you could have ChatGPT do it in a few minutes. I could go through every single assay in a molecular biology lab and get accelerated results. It doesn’t matter if you want image analysis, experimental design with controls, functional genomics, RNA-seq, or lit review. Assay after assay after assay, it’s the same story. Things that used to take hours to days now take minutes to hours.
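For readers outside the field: the analysis being described usually boils down to the standard 2^-ΔΔCt fold-change calculation (the Livak method). A minimal sketch in plain Python, with made-up triplicate Ct values for illustration only:

```python
# Minimal sketch of the 2^-ΔΔCt (Livak) fold-change calculation that
# this kind of qPCR analysis typically involves. All Ct values below
# are hypothetical illustration data.

from statistics import mean

def fold_change(target_ct_treated, ref_ct_treated,
                target_ct_control, ref_ct_control):
    """Return relative expression (fold change) via 2^-ΔΔCt."""
    # ΔCt = Ct(target gene) - Ct(reference/housekeeping gene)
    d_ct_treated = mean(target_ct_treated) - mean(ref_ct_treated)
    d_ct_control = mean(target_ct_control) - mean(ref_ct_control)
    # ΔΔCt = ΔCt(treated) - ΔCt(control)
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** -dd_ct

# Triplicate Ct values (hypothetical)
fc = fold_change([24.1, 24.3, 24.2], [18.0, 18.1, 17.9],
                 [26.5, 26.4, 26.6], [18.1, 18.0, 18.2])
print(f"fold change: {fc:.2f}")
```

The point of the comment stands either way: the math itself is trivial, and the time sink has always been the surrounding data wrangling, plotting, and statistics.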

Literally, everything I do at work is accelerated with chatGPT. And we are barely scratching the surface of what the current models are capable of. There are times when I just ask the models if they can do something that I don’t think they can do, and these things give step-by-step instructions for how to pull it off.

You just can’t use it as a calculator.

18

u/plungingphylum 6d ago

You also can't fully trust the results, and neither will your colleagues. If you want to publish, you know you're going to need to rerun it yourself. If it doesn't come out the same, you'll always second-guess yourself. That's a huge flaw.

6

u/BeeWeird7940 6d ago

You have to recheck results you produce by hand, too. No first draft of data or text gets published; that’s never been how science works. Our lab has used these models for all kinds of data analysis. What’s really cool is that a lot of data analysis is just the same algorithm run over and over. So you have the AI build the algorithm, check it for accuracy, and then reuse it across a bunch of assays.

Our institution has started putting together workshops for researchers to share their uses of AI tools, and now we have monthly seminars for visiting faculty to share what they’ve developed with these models. I’ve been in research for years. I haven’t seen any technology adopted this quickly or this widely. Maybe CRISPR?

The models still can’t pick up a pipette. Luckily, I’m still useful for something.

1

u/vineyardmike 6d ago

Why can't they add a simple calculator module to the overall tool? It's so annoying when a computer gets basic math wrong.
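This is essentially what "tool calling" does: the model delegates arithmetic to an exact evaluator instead of predicting digits token by token. A minimal sketch of such a calculator module, using Python's `ast` module to allow only safe operations; this is illustrative, not how any particular product is implemented:

```python
# Sketch of the "calculator tool" idea: route arithmetic to a small,
# exact evaluator rather than having the model guess digits. Uses the
# ast module to permit only basic arithmetic (no arbitrary code).

import ast
import operator

OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Exactly evaluate a basic arithmetic expression like '3 * (17 + 4)'."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval").body)

print(calc("3 * (17 + 4)"))   # prints 63, exactly
```

Several chat products do wire in tools like this; the catch is the model still has to decide to use the tool rather than answer from its weights.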

0

u/roastedoolong 5d ago

... AI is not "mostly" used to do the things you've listed.

LLMs -- a very specific type of neural net architecture (itself one subfield of AI research and development) -- are probably what you're referring to, but LLMs are to AI as squares are to rectangles.

AI is used in everything from route prediction, product recommendations, traffic light system design, warehouse management systems, corpus organization... pretty much every single domain has some use case. not to mention, each of these fields has benefited greatly from the various technologies that have been developed.

by all means, shit on LLMs all you want, but don't conflate LLMs and AI.

-3

u/flirtmcdudes 6d ago

oh stop, AI already does amazing things. Is it worth trillions that all these companies are investing in or anywhere near AGI? Of course not, but acting like it’s a big flop is silly.

-8

u/Far-Fennel-3032 6d ago

AI in its current form is, I think, good enough to deliver on much of its promise; the problem is more the implementation and pipelines on the user end, since people haven't yet developed the skills and tools to use AI properly.

Even small efficiency gains like “fix this email” or “summarize this document and let me ask questions about it” are pretty significant productivity gains, even if each one only saves a few minutes here and there. It's not going to automate whole jobs, but the time savings add up.

6

u/reganomics 6d ago

your use cases are kinda weird from my perspective. what do you mean, "fix this email"? that there's a subject line and a couple of sentences to convey a main point? grammar/spelling? is a slight time save really a benefit if it comes with a definite skill drop in the human? and is it really faster to summarize an article or doc, which takes time to generate, rather than just read it?

0

u/Far-Fennel-3032 6d ago

A lot of people use tools like Grammarly, which mostly seamlessly suggest modifications to documents, particularly emails. LLMs just push the floor up to complete, grammatically correct sentences, which is a significant improvement, in the same way the introduction of spell check as a standard was significant. The quality of writing has always been terrible; proofreaders and editors are common roles across a range of industries for a reason.

On the point of summaries, I'm talking about sifting through either many documents or very large ones. Say you get a document that is 100 pages long and outside your core competencies. You might want to know whether it covers some topic, but you don't know the keywords to run a traditional keyword search. So you can shove the entire document into an LLM and ask whether it covers the topic you care about, and if so, which pages. If it doesn't, you just move on to the next one.

If this screening finds what I need, I'll read the actual document; the whole purpose of screening is to remove documents from the pile of search results rather than reading everything. You can also get it to translate jargon when needed, which is often useful for topics outside your core expertise, as many topics are extremely jargon-dense.
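The screening workflow described here is simple to sketch. In the snippet below, `ask` is a placeholder for whatever LLM API you actually use; nothing here assumes a specific provider, and the prompt wording is illustrative:

```python
# Sketch of the screening workflow: for each long document, ask a model
# whether it covers a topic, and keep only the hits for human reading.
# `ask` stands in for a real LLM call (any provider).

from typing import Callable, Iterable

def screen(docs: Iterable[tuple[str, str]],
           topic: str,
           ask: Callable[[str], str]) -> list[str]:
    """Return names of documents the model says cover `topic`."""
    hits = []
    for name, text in docs:
        prompt = (f"Does the following document cover {topic}? "
                  f"Answer YES or NO.\n\n{text}")
        if ask(prompt).strip().upper().startswith("YES"):
            hits.append(name)   # read this one yourself
    return hits

# Usage with a stand-in for a real model call:
docs = [("a.pdf", "discusses qPCR normalization at length"),
        ("b.pdf", "quarterly budget figures only")]
fake_ask = lambda p: "YES" if "normalization" in p else "NO"
print(screen(docs, "qPCR methods", fake_ask))   # ['a.pdf']
```

The key design point matches the comment: a false "NO" costs you a document you'd have missed with keyword search anyway, while every "YES" still gets a human read, so hallucination risk is contained.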

Unless the documents are extremely short, only a page or two; but that's already a summary, so why would I need to summarize a summary?

Also, I'm assuming the articles you're talking about are news articles; those are absolutely summaries of a topic on every level, as they're both very short and have the jargon filtered out for the general public's consumption.

I'm talking about proper documents: white papers, technical documents, documentation, and journal and review articles, where it's uncommon for anything to be shorter than dozens of pages and 50 pages is a short document.

-1

u/Intendant 6d ago

We're still in the early stages of building systems around AI. Even at its current level, what it can do is going to massively change the economic landscape. The reality is that the software market has to upskill and figure out how to build with it, which takes time. The major companies are already there, but that's about it.

3

u/Apprehensive-Fun4181 7d ago

This should have been done by universities, in conjunction with an existing body of knowledge. I know that fixed approach is not new, but nothing else makes sense. At best, the "wild" discussions among professionals over electronic communication would make a valid LLM corpus. Which is going to be better for medicine: the chatter in the courtyard of the hospital, or outside it?

2

u/Far-Fennel-3032 7d ago

The current cost of LLMs is simply way beyond anything universities would be able to spend. It would have to be a large-scale government-funded project, much like the LHC. But it's been extremely difficult lately to get the government to fund any projects like that.

8

u/IEEESpectrum Official Publication 7d ago

More and more research is finding that AI just isn't living up to its hype. Maybe AI isn't going to take your job after all.

10

u/pwnersaurus 7d ago

We were promised fully autonomous vehicles years ago, and are still waiting. It’s just classic Pareto principle, getting 80% of the way there is easy but closing the last gap is vastly harder

2

u/nostrademons 6d ago

FWIW I take a Waymo almost every time I go up to SF. They're just more convenient than every other form of last-mile transportation, and I feel safer in them than with a human driver. Self-driving cars are here, just unevenly distributed.

-1

u/DualityEnigma 7d ago

AI is still in the R&D phase, and commercialization is still being worked out. The "Nike swoosh" of disillusionment is normal for every exciting technology development in my lifetime. Right now, devs are figuring out how to use it reliably and cheaply.

It will replace repetitive and simple tasks and maybe some jobs, but overall this is a new and powerful tool for humanity. I expect change to continue to accelerate.

5

u/Bortcorns4Jeezus 6d ago

None of these companies can make money and they have no idea how to be profitable. Their computing and training costs are way too high. I once heard that every query you submit to OpenAI costs them between 2 and 4 dollars. They LOSE MONEY on their $200/month subscriptions

At some point investors will stop throwing money at this 

-1

u/MagicWishMonkey 6d ago

Training costs are high, but the cost of a query is basically nothing (comparable to watching a YouTube video or running a search engine query). Think of a model like a cake: all the energy goes into baking it (training). It'll be interesting to see whether these big companies think the best path forward is to keep sinking billions (or even trillions) of dollars into training new models for marginal returns, or to take the existing models and find new and novel ways to make them work better.

0

u/MagicWishMonkey 6d ago

I expect the way people work 5-10 years from now will be significantly different than today. I think whatever jobs are lost will be made up for in other areas; hopefully the people left behind will have an easier time upskilling than in previous disruptive eras.

I would say the government should step in to assist but we all know that will never happen.

3

u/AwfulishGoose 6d ago

I hope so. Looking forward to the bubble popping and these people losing their jobs.

1

u/siraliases 6d ago

People really don't like that for every high there's a low

-1

u/blueskiess 6d ago

Overvalued in the short term, undervalued in the long term