- cross-posted to:
- technology@lemmy.world
Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
Wait, we have AI flying planes now?
what do you think an autopilot is?
A finely refined model based on an actual understanding of physics and not a glorified Markov chain.
To be fair, that also falls under the blanket of AI. It’s just not an LLM.
No, it does not.
A deterministic, narrow algorithm that solves exactly one problem is not AI. Otherwise the Pythagorean theorem would count as AI, or any other mathematical formula for that matter.
Intelligence, even in terms of AI, means being able to solve new problems. An autopilot can't do anything other than pilot a specific aircraft - and that's a good thing.
Not sure why you’re getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is “AI”.
Someone will be around to say “not real AI”, and I think that’s the wrong way to look at it.
It’s more “real AI” than the LLM slop companies are desperately trying to make the future
Mild altitude and heading corrections.
I know you’re joking, but for those who don’t, the headline means “startups” and they just wanted to avoid the overused term.
Also, yeah, actually it’s far easier to have an AI fly a plane than a car. No obstacles, no sudden changes, no little kids running out from behind a cloud bank, no traffic except during takeoff and landing, and those phases can also be automated more and more.
In fact, we don’t need “AI”; we’ve had autopilots that handle almost all aspects of flight for decades now. The F/A-18 Hornet famously has hand-grips by the seat that the pilot is supposed to hold onto during takeoff so they don’t accidentally touch a control.
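For anyone curious, the guts of something like an altitude hold are closer to a plain control loop than to anything “AI”. Here’s a toy proportional-derivative sketch in Python; the gains and the one-line “aircraft” model are invented for illustration and look nothing like a real flight computer:

```python
# Toy altitude-hold loop -- a deterministic, narrow control algorithm,
# not "AI". Gains and plant model are made up for illustration.

KP, KD = 0.5, 1.0   # hand-tuned proportional/derivative gains (invented)
DT = 0.1            # control-loop timestep, seconds

def altitude_hold(current_ft: float, target_ft: float, steps: int = 200) -> float:
    """Nudge a crude 'aircraft' toward the target altitude with PD control."""
    climb_rate = 0.0
    prev_error = target_ft - current_ft
    for _ in range(steps):
        error = target_ft - current_ft
        derivative = (error - prev_error) / DT
        command = KP * error + KD * derivative   # commanded vertical acceleration
        climb_rate += command * DT               # toy plant: command changes climb rate
        current_ft += climb_rate * DT
        prev_error = error
    return current_ft

print(f"settled near {altitude_hold(10000.0, 10500.0):,.0f} ft")
```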
Conversely, AI running ATC would be a very good thing. To a point.
It’s been technically feasible for a while to handle 99% of what an ATC does automatically. The problem is that you really want a human to step in on that 1% of situations where things get complicated and really dangerous. Except the human won’t have their skills sharpened through constant use unless they’re handling at least some of the regular traffic.
The trick has been to have the AI do, say, 70% of the job while still having a human step in sometimes. Deciding when to have the human step in is the hard problem.
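As a sketch of what that handoff decision could look like (both thresholds here are completely made up, just to show the shape of the policy):

```python
import random

# Sketch of the handoff policy described above: automation takes routine
# traffic, hard cases always go to the human, and a slice of routine
# traffic does too, so the controller's skills stay sharp.
# Both numbers below are invented for illustration.

COMPLEXITY_CUTOFF = 0.8    # above this, always escalate to the human
PRACTICE_FRACTION = 0.3    # share of routine traffic the human still handles

def route_to_human(complexity: float) -> bool:
    """Decide whether this situation goes to the human controller."""
    if complexity >= COMPLEXITY_CUTOFF:
        return True                              # genuinely hard case
    return random.random() < PRACTICE_FRACTION   # routine case, kept for practice

# Simulate ten situations with random complexity scores.
for i in range(10):
    c = random.random()
    handler = "human" if route_to_human(c) else "automation"
    print(f"situation {i}: complexity {c:.2f} -> {handler}")
```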
That’s terrifying, but I don’t see why my regional train can’t run on AI in the middle of the night.
It’s a bubble. This article is by someone who’s realizing that but has yet to move their investments.
Shocked that LLM wrapper slop that isn’t deterministic only has limited use cases. Sam Altman is the biggest con artist of our time
He’s the second coming of Joseph Smith.
JS was a charismatic grifter by nature and upbringing who sold folks on the existence of a magic gold book that had extra-special info about American Jesus. He told them he found it after G*d told him where to dig.
This was just a few years after he had been hauled into court to face charges of running a ‘treasure hunting’ scheme on local farmers.
Now that I think about it more, the parallels are many.
In conclusion, shysters gonna shyst.
A few years ago we had these stupid mandatory AI classes all about how AI could help you do your job better. It was supposed to be multiple parts, but we never got past the first one. I think they realized it wouldn’t help most of the company, but they did leave our bespoke chatbot up for our customers/salespeople. It’s pretty good at helping with our products, but I assume a lot of tuning has been done. I assume if we fed a local AI our data we could make it helpful, but none of them have more than a basic knowledge of anything I do on a day-to-day basis.
Usually for those chatbots you take a pretrained model and use RAG: essentially turning the question into a traditional search and asking the LLM to summarize the contents of the results. So it’s frequently a convenient front end to a search engine, which is how it avoids having to be trained to produce relevant responses. It’s generally prohibitively difficult, in various ways, to fine-tune an LLM through training and get the desired behavior; but it can act like it “knows” about the stuff you do, despite zero training, if other methods are stuffing the prompts with the right answers.
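A toy sketch of that pattern, with a made-up document store and naive keyword matching standing in for a real search index, and a print standing in for the actual LLM call:

```python
# Toy sketch of the RAG pattern described above: run an ordinary search,
# then stuff the hits into the prompt. The document store, the naive
# keyword scoring, and the final print are all stand-ins for a real
# search index and a real LLM call.

DOCS = [
    "Products can be returned within 30 days with a receipt.",
    "All widgets carry a two-year limited warranty.",
    "Orders over $50 ship free within the continental US.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank docs by keyword overlap -- a crude stand-in for real search."""
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Stuff retrieved passages into the prompt, so the model can act
    like it 'knows' the answer without any fine-tuning."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# This prompt would go to whatever LLM sits behind the chatbot;
# printing it is enough to show the pattern.
print(build_prompt("how long is the warranty on a widget?"))
```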
Decent article with a BS agenda.
It’s aimed at age; younger is better, according to the article. So instead of focusing on what the issues with fucking AI are, they get to bring in ageism.
As soon as they start that shit, you know it’s to distract from the real issues.
If that’s what you actually intended to type, you might have a stroke.
And here’s another bigot.
Why’s it the most intolerant who are biased against age?
MAGA has nothing on you guys when it comes to ageism.
You’re both wrong
And the other 5% are bullshitting.
I have an extremely small company where I am the only employee, and AI has let me build and do stuff that I would have needed a small team for. The jump from the quality of what I was producing before to what I’m able to do now is really great, and it’s thanks to AI.
I have no formal training or work experience in coding, but I taught myself Python years ago. Additionally, I don’t work in IT, so I think using AI to code has been extremely beneficial.
So you’re saying you have no professional coding experience, yet you know that a team of professionals couldn’t produce code at the quality you want?
Also, saying “extremely small company” when you mean self-employed is weird. It’s fine to have your own company for business/contracting.
I just hope you actually understand the code that has been produced and no customer or company data is at risk.
Financially, I earn a really low amount. I’ve been freelance for a while, but am trying to grow the business, so it’s extremely small.
All the stuff I’m using AI for is just for presentation of internal materials. Nothing critical.
Yup. The only genuinely useful task I’ve found for it is code documentation, which is as far as it’s allowed to go in my sphere.
That’s great, but you’re not what this article is about. There are tens of thousands of companies popping up left and right with far less ambition to succeed who just want to launch the next “AI powered toaster” and are hoping to make a fast buck and get bought out by a larger company like Google or OpenAI or Meta.
Combine that with growing public skepticism of AI and a general attitude that it’s being overused - the same attitude that makes you knee-jerk defensive about your business. That attitude is spreading, and people are losing interest in AI as a feature because it’s been overplayed, over-hyped, and hasn’t delivered on its promises. This makes for a growing bubble with nothing inside, one that becomes more fragile every day. Not everyone is a successful vibe-coder, nor can they be.
I think you have blatant security holes that threaten your bottom line and your customers.
Good. How do we fix the surviving 5%?
5% to me sounds excellent. Companies fail all the time with well established technologies and nobody should expect great results with something still so new. It’s a bet with high risks and high rewards: most people will simply fail.
If you cared about AI development, you would care a lot about the entire industry getting wrecked and set back decades because of a bursting bubble and a lack of independent funding.
This isn’t just about AI either; when an industry valued at nearly half a trillion dollars crashes, it takes the ENTIRE FUCKING ECONOMY with it. I have lived through these bubbles before; this one is bigger and worse than any of them.
You won’t get your AI waifus if you have no job and nobody is hiring developers for AI waifus.
Like any technology before it, we’re now past the hype, in the phase where lots of clueless people expect miracles and complain about the number of Rs in “strawberry”.
In a year or two it will be a regular tool like any other.
Okay but we’re talking about economics here, not the “tool” specifically. I think some people are so hung up on knee-jerk defensiveness of AI that they lose sight of everything but promoting it.
About 90% of tech startups fail. It happens all the time because it’s in the nature of innovation.
Here anything about AI is received negatively, so a 5% success rate is the “demonstration” that it’s a bubble. I’m sorry if you hope so, but it’s not. A 95% failure rate is not far from the normal failure rate of new companies, and here we’re talking about early adopters buying a lottery ticket, trying to be the first to make it work.
Feel free to believe the contrary. I don’t need to convince an anonymous guy on the internet.
I’m sorry if you hope so,
I argued that there’s an economic scheme threatening AI development, and you translated it as “hope” that there is going to be an economy-destroying bubble burst; that tells me I won’t get far in this conversation. Maybe figure out if there’s a less emotional/defensive path for looking at all this.
Maybe figure out if there’s a less emotional/defensive path for looking at all this
If you start with
I think some people are so hung up on knee-jerk defensiveness of AI that they lose sight of everything but promoting it.
you should expect to be seen as one of those people with an irrational hatred for a technology.
Back on topic: could this be a bubble like the dot-com one? Of course. Is it? Probably not.
A 95% failure rate should be a warning for all those fools who expect something this technology cannot do. Nothing more than that.
So you’re one of the “some people”. Got it.