This is because they're trying to reduce the wrong headcount. The largest inefficiencies in corpo orgs lie in the ways they organize their knowledge and information stores, and in how they manage decision making.
The rank and file generally have a really good grasp on their subset of the domain -- they have expertise and experience, as well as local context. Small teams, their managers -- those are the ones who actually perform, and deliver value.
As you move up the hierarchy, access to information does not scale. People in the middle are generally mediocre performers, buried in process, ritual and politics. In addition to these burdens, the information systems do their best to obscure knowledge, with the usual excuses of Safe and Secure (tm) -- things are siloed, search does not work, archives are sunsetted, etc.
In some orgs tribalism also plays an outsized role, with teams acting competitive, which largely results in wasted resources and seven versions of the same failed attempt at New Shiny Thing.
Then as we look higher yet in the hierarchy, the so-called decision makers don't really do anything that cannot be described as "maximize profit" or "cut costs", all while fighting not to get pulled down by the Lord of the Flies shenanigans of their underlings. They are the most replaceable.
A successful "AI Transformation" would come in top-down, going after the most expensive headcount first. Only truly valuable contributors would remain at that level. Organizational knowledge bases would allow to search, analyze and reason about the institutional knowledge accrued in corporate archives over the years, enabling much more effective decision making. Meanwhile, the ICs would benefit from the AI boost, outsourcing some menial tasks to the machine, with the dual benefit of levelling up their roles, and feeding the machine more context about the lower-level work done across the org.
I think another barrier is that end users don't trust IT to not pull the rug out from under us. It's quite a bit of effort to learn and figure out workflows for actually getting work done and IT doesn't tend to give a shit about that. Particularly enterprise IT's attitude about trials can kiss my ass. Enterprise IT has their timeline, and I have my deadlines. I'll get to it when I have time.
But particularly we're always dealing with IT security "experts" running dumb checklists, taking things away and breaking everything, and never bothering to figure out how we're supposed to use computers to actually get any work done ("hmmm. we didn't think about that... we'll get back to you" is a common response from these certified goons). Apparently the security gods have decided we can't have department file servers anymore because backups are too difficult to protect against ransomware or something, so we're all distracted with that pronouncement from the mountain trying to figure out how to get anything done at the moment.
I've wondered sometimes what the root of this dynamic is, and why corporations are as inefficient as they are. I've come to the conclusion that it's deliberate.
When I look at top-level decision-makers at my Mag-7 employer, they are smart people. Many of them were go-getters in their earlier career, responsible for driving some very successful initiatives, and that's why they're at the top of the company. And they're very intentional about team structure: being close enough to senior directors and VPs to see some of their thinking, I can tell that they understand exactly who the competent people are, who gets things done, who likes to work on what, and then they put those people at the bottom of the hierarchy with incompetent risk-averse people above them. Then they'll pull them out and have them report directly to a senior person when there's a strategic initiative that needs doing, complete it, and then re-org them back under a middle-manager that ensures nothing gets done.
I think the reason for this is that if you have a wildly successful company, the last thing you want to do is screw it up. You're on top of the world, money is rolling in from your monopoly - and you're in zugzwang. Your best move is not to play, because any substantive shift in your product or marketplace risks moving you to a position where you aren't so advantaged. So CEOs of successful companies have a job to do, and that job is to ensure that nothing happens. But people's natural inclination is to do things, and if they aren't doing things inside your company they will probably be doing things outside your company that risk toppling it. So you put one section of the company to work digging holes, and put the other section to work filling them in, and now everybody is happy and productive and yet there's no net external change to your company's position.
Why even have employees then? Why not just milk your monopoly, keep the team lean, and let everybody involved have a big share of the profits? Some companies do actually function like this, e.g. Nintendo and Valve famously run with fairly small employee counts and just milk their profits, and some HFT shops like RenTech just give huge employee dividends and milk their position.
But the problem is largely politics. For one, owning a monopoly invites scrutiny; there are a lot of things that are illegal, and if you're not very careful, you can end up on the wrong side of them. Two, owning an incredibly lucrative business makes you a target for competition, and for rule-changes or political action that affect your incredibly lucrative business. Perhaps that's why examples of highly-profitable businesses that stay small often involve staying secret (eg. HFT) or being in an industry that everybody else dismisses as inconsequential (eg. gaming or dating).
By having the huge org that does nothing, the CEO can say "Look, I provide jobs. We're not a monopoly because we have an unfair advantage, we compete fairly and just have a lot of people working very hard." And they can devote a bunch of people to that legal compliance and PR to make sure they stay on the right side of the government, and it also gives them the optionality to pull all those talented people out and unmuzzle them when there actually is a competitive threat.
> "Why even have employees then? Why not just milk your monopoly, keep the team lean, and let everybody involved have a big share of the profits?"
So we're seeing this play out. There are two factors that exist in tension here:
- The valuation of many of these companies depends on the perception that they are The Future. Part of that is heavy R&D spending and the reputation that they hire The Best. Even if the company mostly just wants to sit and milk its market position, keeping the stock price afloat requires looking like they're also innovative and forging the future.
- Some companies are embracing the milk-it-for-all-its-worth life stage of their company. You see this in some of the Mag-7 where compensation targets are scaling down, explicit and implicit layoffs, etc. This gear-shifting takes time but IMO is in fact happening.
The tightrope they're all trying to walk is how to do the latter without risking their reputation as the former, because the mythos that they are the engines of future growth is what keeps the stock price ticking.
> So CEOs of successful companies have a job to do, and that job is to ensure that nothing happens. But people's natural inclination is to do things, and if they aren't doing things inside your company they will probably be doing things outside your company that risk toppling it. So you put one section of the company to work digging holes, and put the other section to work filling them in, and now everybody is happy and productive and yet there's no net external change to your company's position.
I work at a large music streamer and this perfectly describes my workplace. When I was on the outside, I never understood why that company needed thousands and thousands of people to run what looks like a stagnant product that hasn't changed much in years.
Yeah I have. Was a major influence on my thinking, though like all models, it's incomplete. A lot of my comment is filling in the holes in that series - what are the mechanisms by which sociopaths make the organization function? Why does the system as a whole function like this?
These articles used to be so popular on HN back in the day. Now it's back to "let's pretend we're a meritocracy; politics is just learning how to work with other people".
> This is because they're trying to reduce the wrong headcount.
> A successful "AI Transformation" would come in top-down, going after the most expensive headcount first.
This isn't a mistake. McKinsey consultants and their executives at their clients are a part of the same clique. You don't get into either without going to the right schools, being in the right fraternities, and knowing the right people. "Maximize profit" and "cut costs" are to be read as "keep the most money for ourselves in the form of earnings per share and dividends" and "pay fewer people". And since you can convert shares to money by gutting companies, there's no real incentive to remain competitive in the greater marketplace.
McKinsey has pitched my company on projects where their compensation is entirely outcome-based — for example, if a project generates $20 million in incremental revenue, they would earn 10% of that amount.
I have to admit, the results they demonstrated — which we validated using our own data — were impressive.
The challenge, however, is that outcome-based contracts are hard for companies to manage, since they still need to plan and budget for potential costs upfront.
So even when you have measurable benefits - it's still not so easy either.
EDIT:
To clarify the issue — companies are used to budgeting for initiatives with fixed costs. But in an outcome-based contract, the cost is variable.
As a result, finance teams struggle to plan or allocate budgets because the final amount could range widely — for example, $200K, $2M, or even $20M — depending on the results achieved.
Additionally, you almost then need a partial FTE just to manage these contracts to ensure you don't overpay because the results are wrongly measured, etc.
None of these challenges are insurmountable, but it's also not easy for companies either.
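For what it's worth, the budgeting headache itself is easy to sketch. Below is a minimal Python sensitivity table, assuming the hypothetical 10%-of-incremental-revenue fee from the pitch above, with made-up revenue scenarios picked so the fee lands in the $200K / $2M / $20M range mentioned; every name and number here is illustrative, not from any real contract:

    # Hypothetical outcome-based fee: the consultant takes a fixed share of
    # incremental revenue, so the cost you must budget for is a range, not a number.
    FEE_RATE = 0.10  # illustrative 10% of incremental revenue

    scenarios = {
        "downside":   2_000_000,   # project barely moves the needle
        "pitched":   20_000_000,   # the $20M figure used in the pitch
        "blowout":  200_000_000,   # everything goes right
    }

    for name, incremental_revenue in scenarios.items():
        fee = incremental_revenue * FEE_RATE
        net = incremental_revenue - fee
        print(f"{name:>8}: revenue +${incremental_revenue:>13,.0f}  "
              f"fee ${fee:>12,.0f}  net ${net:>13,.0f}")

The finance problem is visible in the output: the fee line is a distribution rather than a fixed cost, which is exactly what annual budgeting processes are not built to hold.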
Perhaps their advice needs expenditure up front - for example if they suggested using blue photocopiers and you only have pink ones. You would have to spend the money on blue photocopiers before you see the return, and before they see their services fee paid?
I'd imagine the opportunity cost and manpower. Even though McKinsey would do the work, they will still need access to people and information to accomplish it.
> they still need to plan and budget for potential costs upfront
Same reason they ask for "estimates" which they then later try to hold accountable as "quotes" when it suits them. Same reason I 3x my initial estimates.
How is that hard? They put 90% of their estimated revenue as net revenue (post-McK tax) in the budget? Seems about as hard as the underlying problem, which is guessing ("forecasting") the revenue.
> while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
I'm surprised McKinsey convinced someone to say the quiet part out loud
The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. The more people you manage, the more power you have within the organization.
No one wants to say on their resume, "I manage 5 people, but trust me, with AI, it's like managing 20 people!"
Managers also don't pay people's salaries. The Tech Tools budget is a different budget than People salaries.
Also keep in mind, for any problem space, there is an unlimited number of things to do. 20 people working 20% more efficiently won't reach infinity any faster than 10 people.
Maybe 40 years ago or in some cultures, but I've always focused on $ / person. If we have a smaller team that can generate $2M in ARR per developer, that's far superior to $200K. The problem is once you have 20 people doing the job, nobody thinks it's possible to do it with 10. You're right that "there is an unlimited number of things to do", and there are really obvious things that must be done and must not be done, but the majority IME are should or could be done, and in every org I've experienced it's a challenge to constrain the # of parallel initiatives, which is the necessary first step to reducing active headcount.
I don't mean to be dismissive and crappy right out of the gate with that question; I'm merely drawing on my experience with AI and the broader trends I see emerging: AI is leveraged when you need knowledge products for the sake of having products, not when they're for anything in particular. I've noticed a very strange phenomenon where middle managers will generate long, meandering report emails to communicate what is, frankly, not complicated or terribly deep information, and send them to other people, who then paradoxically use AI to summarize those emails, likely into something quite similar to what was prompted to be generated in the first place.
I've also noticed it being leveraged heavily in spaces where a product existing, like a news release, article, social media post, etc. is in itself the point, and the quality of it is a highly secondary notion.
This has led me to conclude that AI is best leveraged in cases where nobody, including the creator of a given thing, really... cares much what the thing is, whether it's good, or whether it does its job well. It exists because it should exist, and its existence performs the function far more than anything to do with the actual thing that exists.
And in my organization at least, our "cultural opinion" on such things would be... well, if nobody cares what it says, and nobody is actually reading it... then why the hell are we generating it and then summarizing it? Just skip the whole damn thing, send a short list email of what needs communicating, and be done.
The anthropologist David Graeber wrote a book called "Bullshit Jobs" that explored the subject. It shouldn't be surprising that a prodigious bullshit generator could find a use in those roles.
He's either lying or hard-selling. The company in his profile "neofactory.ai" says they "will build our first production line in Dallas, TX in Q3." well, we just entered Q4, so not that. Despite that it has no mentions online and the website is just a "contact us" form.
> The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. More people you managed, the more power you have within the organization
Ding ding ding!
AI can absolutely reduce headcount. It already could 2 years ago, when we were just getting started. At the time I worked at a company that did just that, successfully automating away thousands of jobs that couldn't be automated pre-LLMs. The reason it ""worked"" was because it was outsourced headcount, so there was very limited political incentive to keep them if they were replaceable.
The bigger and older the company, the more ossified the structures that want to keep headcount equal, and ideally grow it. This is by far the biggest cause of all these "failed" AI projects. It's super obvious when you start noticing that jobs that were being outsourced, or done by temp/contracted workers, are being replaced much more rapidly. As well as the fact that tech startups are hiring much less than before. Not talking about YC-and-co startups here, those are global exceptions indeed affected a lot by ZIRP and what not. I'm talking about the 99.9% of startups that don't get big VC funds.
A lot of the narrative on HN that it isn't happening and AI is all a scam is IMO out of reasonable fear.
If you're still not convinced, think about it this way. Before LLMs were a thing, if I asked you what the success rate of software projects at non-tech companies was, what would you have said? 90% failure rate? To my knowledge, the numbers are indeed close. And what's the biggest reason? Almost never "this problem cannot be technically solved". You'd probably name other, more common reasons.
Why would this be any different for AI? Why would those same reasons suddenly disappear? They don't. All the politics, all the enterprise salesmen, the lack of understanding of actual needs, the personal KPIs to hit - they're all still there. And the politics are even worse than with trad. enterprise software now that the premise of headcount reduction looms larger than ever.
Yes, and it’s instructive to see how automation has reduced head count in oil and gas majors. The reduction comes when there’s a shock financially or economically and layoffs are needed for survival. Until then, head count will be stable.
Trucks in the oil sands can already operate autonomously in controlled mining sites, but wide adoption is happening slowly, waiting for driver turnover and equipment replacement cycles.
> The bigger and older the company, the more ossified the structures are that have a want to keep headcount equal, and ideally grow it.
I don't know, most of the companies doing regular layoffs whenever they can get away with it are pretty big and old. Be it in tech - IBM/Meta/Google/Microsoft, or in physical things - car manufacturers, shipyards, etc.
Through top-down, hard mandates directly by the exec level, absolutely! They're an unstoppable force, beating those incentives.
The execs aren't the ones directly choosing, overseeing and implementing these AI efforts - or in the preceding decades, the software efforts. 9 out of 10 times, they know very little about the details. They may ""spearhead"" it insofar as that's possible, but there are tons of layers in between, each with their own incentives, which are required to cooperate to actually make it work.
If the execs say "Whole office full-time RTO from next month 5 days a week", they really don't depend on those layers at all, as it's suicide for anyone to just ignore it or even fake it.
I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites, and deliver approximately same value. It would provide additional benefits, in that AIs refuse to do unethical things and generally reason acceptably well. The cost cutting would be immense!
I am not kidding. In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki. The current AIs are perfect for this.
> In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki.
It's hard to take this sentiment seriously from a source that doesn't have direct experience with the c-suite. The average person only gets to see the "public relations" view of the c-suite (mostly the CEO) so I can certainly see why a "LLM based mouthpiece" might be better.
The c-suite is involved in thousands of decisions that 90% of the rest of the world is not privy to.
FWIW - As a consumer, I'm highly critical of the robotic-like external personas the c-suite take on so I can appreciate the sentiment, but it's simply not rooted in any real experience.
> I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites
That is exactly what it can't do. We need someone to hold liable in key decisions.
It's not the top IME, but the big fat middle of the org chart (company age seems to mirror physical age maybe?) where middle to senior managers can hide out, deliver little demonstrable value and ride with the tides. Some of these people are far better at surfing the waves than they are at performing the tasks of their job title, and they will outlast you, both your political skills and your tolerance for BS.
Can it turn simple yes-or-no questions, or "hey who's the person I need to ask about X?" into scheduled phone calls that inexplicably invite two or three other people as an excuse to fill up its calendar so it looks very busy?
> AI in its current state will likely not replace any workers.
This is a puzzling assertion to me. Hasn’t even the cheapest Copilot subscription arguably replaced most of the headcount that we used to have of junior new-grad developers? And the Zendesks of the world have been selling AI products for years now that reduce L1 support headcount, and quite effectively too since the main job of L1 support is/was shooting people links to FAQs or KB articles or asking them to try restarting their computer.
> Pretty soon we will have articles like "That time that CEO's thought that AI could replace workers".
Yup, it's just the latest management fad. Remember Six Sigma? Or Agile (in its full-blown cultish form; some aspects can be mildly useful)? Or matrix management? Business leaders, as a class, seem almost uniquely susceptible to fads. There is always _some_ magic which is going to radically increase productivity, if everyone just believes hard enough.
I mean, nah, we've seen enough of these cycles to know exactly how this will end... with a sigh and a whimper and the Next Big Thing taking the spotlight. After all, where are all the articles about "that time that CEOs thought blockchain could replace databases" etc?
I think they can. IME LLMs have me working somewhat less and doing somewhat more. It's not a tidal wave but I'm stuck a little bit less on bugs and some things like regex or sql I'm much faster. It's something like 5-10% more productive. That level of slack is easy to take up by doing more but theoretically it means being able to lose 1 out of every 10-20 devs.
Both make a lot of sense, but the biggest mistake they make is to see people as capacity, or as a counter.
Each human can be a bit more productive; I fully believe 10-15% is possible with today's tools if we do it right. But each human has their own unique set of experience and knowledge. If I do my job a bit faster and you do yours a bit faster, and we are a team of 10 all doing our jobs 10% faster, that doesn't mean you can let one of us go. It just means we all do our jobs 10% faster, which we probably waste by drinking more coffee or taking longer lunch breaks.
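To make the arithmetic behind both of these comments explicit, here's a quick back-of-the-envelope sketch; the 10% speedup and team of 10 are just the numbers used above, not a claim about real productivity:

    # Back-of-the-envelope: does a 10% per-person speedup let you cut headcount?
    team_size = 10
    speedup = 0.10  # assumed 10% more output per person

    output_after = team_size * (1 + speedup)            # 11.0 person-equivalents
    people_for_same_output = team_size / (1 + speedup)  # ~9.09 people

    print(f"Same team now produces {output_after:.1f} person-equivalents of work")
    print(f"Holding output constant would need ~{people_for_same_output:.2f} people")

On paper that frees up roughly one person per eleven, but only if the extra capacity is actually harvested rather than absorbed by the backlog (or the longer lunch breaks), which is precisely where the two comments above disagree.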
Organizations that successfully adapt are those that use new technology to empower their existing workers to become more productive. Organizations looking to replace humans with robots are run by idiots and they will fail.
How does it make sense to trade one group of labor (human) who are generally loosely connected, having little collective power for another (AI)? What you're really doing isn't making work more "efficient", you're just outsourcing work to another party -- one who you have very little control over. A party that is very well capitalized, who is probably interested in taking more and more of your margin once they figure out how your business works (and that's going to be really easy because you help them train AI models to do your business).
That's not required. All that is required is becoming a sole source of labor, or a source that is the only realistic choice economically.
If you ask me, that's the real long game on AI. That is exactly why all these billionaires keep pouring money in. They know the only way to continue growth is to start taking over large sections of the economy.
At my company people always understate the headcount savings. Because the invariable question is - "You are spending x million and for y FTEs you save only 1 FTE of HC? How does that make sense?".
Or worse yet - "You estimated 40 FTE savings, why don't we pick and choose 40 FTEs to let go". That sends shivers down managers' spines as it reduces their area of influence.
They found a hack and that is loading up the intangible column. In that list reputation/brand risk always makes an appearance. "If you don't do this project this terrible thing might happen and we might suffer reputational risk. We estimate 2x millions of loss due to reputation being harmed". And presto! there is a case for the project.
It's like a cloud migration project. Total run costs are part of the pitch, but you add on extra things like added security, automatic updates, etc., and it becomes an easier sell.
With AI tools being so hyper-focused on "productivity gains", it is going to be a tough sell. Especially because users will resist it, and the productivity boosts, if any, will remain low.
I recently talked to someone who works at a company that builds fairly complicated machinery (induction heating for a certain material processing). He works in management, and they did a week-long workshop with a bunch of the managers to figure out where AI will make their company more efficient. What they came up with was that they could feed a spec from a customer into an AI and the AI will create the CAD drawings, wiring diagrams, software etc. by itself. And they wrote a report on it.

And I just had to break it to him: the thing that AI is actually best at is replacing these week-long workshops where managers are bs-ing around to write reports. Also, it shouldn't be the managers taking a top-down approach to where to deploy AI. Get the engineers, technicians, programmers etc. together and have them run a workshop to make plans for where to use AI, because they probably already are experimenting with it and understand where it works well and where it doesn't quite cut it yet.
I hadn't ever tried Notion before but I sort of vaguely understood it was a nice way to make some documentation and wiki type content. I had a need for something like a table that I could filter that I would normally just do in Google Sheets. So I go check out Notion and their entire site is focused on AI. Look at what this agent can do, or that. I signed up and the entire signup flow is also focused on AI. Finally I was able to locate what I thought was their core offering - the wikis etc. And ended up pretty impressed with the features they have for all of that.
Now maybe Notion customers love all these AI features but it was super weird to see that stuff so prominently given my understanding of what the company was all about.
Redis had (may still have?) a billboard on the 101 saying something along the lines of, "my boss really wants you to know we're an AI company", which I thought was pretty funny. Hope this bubble pops soon and we can go back to making products that solve problems for people.
Approximately 95% of my experience using "AI" so far is as something I accidentally activate then waste a few seconds figuring out how to make it stop. What little I've seen of other people's experiences with it on e.g. screen sharing calls mirrors my own. I saw someone the other day wrestling with Microsoft's AI stuff while editing a document and it was comically similar to Clippy trying to help but just fucking things up, except kinda worse because it was a lot less polite about it.
(And I develop "AI" tools at my day job right now...)
The sad part is that it wasn't entirely nonsensical to use AI to improve Notion's use as a knowledge base, but the way they actually used it was in the most ham-fisted ways possible.
The startup I work at is doing the same strategy pivot; we're integrating AI into every feature of the platform. Every textbox or input field has the option to generate the value from AI. Features that no one used when they were a simple form with a button can now be done through our chatbot. We have two key product metrics for the entire company, and one of them is how many AI tokens our users are generating.
I'm a heavy Notion user and haven't once used the AI features. I use AI on a near-daily basis outside Notion, but it just isn't something I need from Notion. On the other hand at least it isn't that intrusive in Notion unlike in some other apps.
Notion customer here and their AI crap keeps interrupting my workflow. Pretty stupid move on their part because they have motivated me to ditch the subscription.
I was an IC consultant for a big 4 group at one point.
Very successful in my domain on a very successful project.
I wrote an insane amount of code but more importantly I wrote libraries across multiple languages that prevented an insane amount of code from being written.
We would have literally 1k people during quarterly planning and did distributed agile and all this org stuff
(It was interesting anthropologically to me because I operated outside the game, I was just waiting for a ~non-compete I had signed for a profitable technical co-founder exit to end to jump back into starting a new company in the same space.)
And the whole thing worked. I was very high profile on the project as probably the highest paid IC, and the company hired me away from the agency, and I worked there until starting my company.
There are 3 layers: the deal makers, the coordinators and the implementers.
You cannot easily automate out the deal makers because they are the loci of trust, legal/contracting and power (they allocate the resources of the firm: people, budget, etc.). Someone has to hang if stuff goes wrong, and someone has to deal with executive petulance and fragile egos.
Now let's look at the middle layer and implementers.
Let's assume for a minute we are looking at a big project where the existing company has hamstrung itself with silos and infighting and low productivity teams, this is just framing to understand the next part, it can cut either way.
The middle layer in consulting is big because other companies have big middle layers as well, and basically what is happening is tribal warfare: you need bodies and voices and change management teams to propagate what is happening, otherwise the existing group will slow-play the leadership and the project never gets done. If the middle layer is 1 to 1, every native sees they can be replaced. Many big 4 firms allow poaching for this very reason: the threat of non-compliance, plus an easy, congenial out for people who are ready to exit the consultant lifestyle.
The implementation layer's work can then get done, and it's done mostly by juniors, because juniors don't have to be politically savvy; they can work to task.
Just a small slice of things I realized while consulting.
> Many software firms trumpet potential use cases for AI, but only 30 percent have published quantifiable return on investment from real customer deployments.
This kind of "data driven" corporate stuff is, IME, so bullshitty and hand-wavy that I'd assume if only 30% are able to claim to have found quantifiable ROI (most of them with laughably bad methodology a slightly-clever 10th grader who half paid attention in their science and/or stats classes could spot) it means that only 5% or fewer actually found ROI.
This means that only 30% are even _claiming_ to have shown anything quantifiable. Given that such claims tend to be essentially puffery, the _real_ rate is presumably far lower.
> The firm’s earlier research suggested that 2027 would be the first year when AI technology would be able to match the typical human’s performance in tasks that involve “natural-language understanding.” Now, McKinsey reckons it will happen this year.
> "Generative AI will give humans a new “superpower”, and the economy a much-needed productivity injection," said Lareina Yee, a senior partner at the firm and chair of McKinsey Technology, in the report.
I am curious if the timing has impacted the inability to measure a benefit. AI is rolling out at the same time as widespread return to office campaigns. Remote work was widely studied and touted as improving efficiency, but no one is showing the drop for RTO. Is AI in part just balancing it out? There's also an ongoing massive brain drain. Many companies are either laying off their most tenured and competent employees, or they are making life miserable for them in the hopes that they quit.
All of this said, using AI in your back end takes a huge amount of time from your users and employees. You have to vary multiple prompts, you have to make the output sane, touch it up, etc. The most useful part of AI for me has been using it to learn something new, or push through a task that I otherwise couldn't do. I was able to partially rewrite a logging window to reduce CPU use significantly. It took me over two weeks of back and forth with AI to figure out a workable solution and implement it into the software. A competent programmer probably could have done it better than I did, in less than an hour. There's no business benefit to a help desk person being able to spend 2 weeks writing code that an engineer would be much better suited to handling. But maybe that engineer could write it in 10 minutes instead of an hour if they used AI to understand the software first.
Likely, no. In my industry, I see a fraction of ICs using it well, a fraction of leadership using it for absolute dog shit idea generation, and the remainder using it to make their jobs easier in the short run, while incurring debt in the long run since nobody is "learning" from AI summaries and most people don't seem to be reading the generated "AI notes" sent in emails.
By and large, I think AI is going to hurt my workplace based on the current trajectory, but it won't be realized until we are in a hard hole to dig out of.
Occurs to me that AI is a fundamental threat to the likes of McKinsey. You bring in the consultants when you want to make a decision but don't want any of the responsibility for making it. In the future they'll just give that task to an anonymous AI. "Nothing we can do!"
There are many aspects of managing a product that are difficult besides the existence of general demand. For example, how to compete with other businesses selling the same thing.
We have to accept that sometimes technology that was envisioned to change the future one way may be beneficial in other ways instead - and that's okay. We are very clearly still in the phase of "throw AI at everything and see where it is useful." For example, just yesterday I was sent a contract to sign via DigiSign. There was a "Summarize contract with AI" button. Having read the contract in full, I was curious how good the summary would be. The summary was very low fidelity and did not go into the weeds of the contract; I would essentially be signing the contract blind. Although AI is pretty good at summarizing key points of things like articles and conversations, this was a very poor use case imho. But hey, they tried it and hopefully see it is a waste. Nothing wrong with iterating; we just have to converge on acceptable use cases.
Now you’ve got me wondering which is worse: signing with the briefest 15-second skim, or signing based on a cheap AI summary. Setting aside tinfoil hat ideas (Prompt: “Downplay any possible causes for concern…”) I’m not actually sure the AI option is worse. I mean, I’ll fully admit I don’t read everything I sign. When you buy a house it’s like 1,000 pages of stuff. Apple or Google’s mandatory TOS is what, 100 pages single-spaced?
What I’d be more interested in is an AI paralegal that works for me, not for the signing tool or the counterparty, where I control its prompt so I can try to focus it on possible ways I can be screwed with this contract, and what recourses I will or won’t have.
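That kind of signer-side paralegal is easy enough to prototype today. Here's a minimal sketch assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and review_contract helper are placeholders of mine, and none of this substitutes for actually reading what you sign:

    # Sketch of a contract reviewer whose prompt the *signer* controls,
    # not the signing tool or the counterparty.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    MY_PROMPT = (
        "You work for the person signing, not the drafter. List every clause "
        "that could be used against the signer: indemnities, auto-renewals, "
        "unilateral changes, arbitration, fee escalators, termination traps. "
        "Quote each clause and state what recourse, if any, the signer has."
    )

    def review_contract(contract_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": MY_PROMPT},
                {"role": "user", "content": contract_text},
            ],
        )
        return response.choices[0].message.content

    # print(review_contract(open("contract.txt").read()))

The interesting design question is the one raised above: who gets to set that system prompt, the signer or the vendor embedding the "Summarize with AI" button.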
Easy to imagine that many organizations using it don't necessarily want the signees to really read the document in full anyway, much less get an informative summary with Reasons To Be Cautious of Signing as one of the summary categories.
What most companies and CEOs fail to grasp is that, with all the talk of headcount cuts from AI, customers are expecting that AI will LOWER pricing and costs, not raise them. The challenge is that the cost cutting story is mostly vaporware (as many other studies have shown), so CEOs are in a tough spot. They can’t both boast to shareholders about how much cost savings they got from rolling out AI and then charge customers more.
All this is pretty textbook setup for how this bubble finally implodes as companies fail to deliver on their AI investments and come under fire from shareholders for spending a ton with little return to show for it.
I can't wait until the AI vendors start charging according to the true costs of their tools (+ profit margin). Let's see what the cost savings are then.
Additionally once the AI vendors have locked these companies to their ecosystem, the enshittification will start and the companies who reduced their headcount to bare minimum will start to see why that was a really, really bad idea.
- The moment AI is actually good enough to replace us, it will also be incredibly easy to create new software/apps/whatever. There could/would be a billion solo dev SAAS companies eating the lunch of every traditional tech org.
- People (Executives) seem to underestimate just how much of the work is iterating and refining a product over a long time. Getting an LLM good enough to complete a Jira task is missing the point.
- IMO LLMs are also completely draining the motivation of workers. A lot of software devs are intrinsically motivated by solving the problem. If your role is being watered down to "prompt the chat bot and babysit what comes out", the motivation disappears. This also absolutely destroys any of the creativity/discovery that comes out of solving the task hands-on.
Your perspective on your last point is interesting. I actually feel the opposite, its become a motivator for me.
I used to love coding, and did it a ton. Then it became less and less part of my job, and I started hating coding. It was so frustrating when I knew exactly what needed to be done in the code, but had to spend the time doing low value stuff like typing syntax, tracing through the code to find the right file to edit, etc when I'm already strapped for time.
LLMs and agentic coding tools have allowed me to not spend time on the low-value tasks of typing, but instead on the high-value tasks of solving problems like you mentioned. Just interesting the different perspectives we have.
That's a fair point and I actually agree with you. A large part of writing code is doing something menial as you said.
I think both of the viewpoints are valid depending on where you're at in your career.
We can imagine a junior developer who isn't quite bored with those low-value tasks just yet.
As you grow more senior/experienced, the novel problems become harder to find - and those are the ones you want to work on. AI can certainly help you cut through the chaff so you have more time to focus on those.
But trends are trends and AI is increasingly getting better at solving the novel/interesting problems that I think you're referring to.
Everyone's different and I know there are folks who are excited to not have to write a single line of code. I'd wager that's not actually most engineers/developers though.
People still garden by hand because it's innately satisfying.
> For every $1 spent on model development, firms should expect to have to spend $3 on change management, which means user training and performance monitoring
I think the general point here is true, but it's also brilliant framing from a company selling consulting services.
> Price levels: How should vendors set price levels when the cost of inferencing is dropping rapidly? How should they balance value capture with scaling adoption?
This is written for B2B target clients as if it's pulling back the veil on pricing strategy and negotiating. Hire McKinsey to get you the BEST™ deal in town.
The hype surrounding AI is exaggerated, and a good deal of the tools are not providing real value. Customers are observing that their costs are increasing without the expected productivity gains or layoffs. The absence of quantifiable ROI and obscure pricing structures are major deterrents. It will be difficult to convince people of the real potential of AI until the sellers establish unmistakable advantages and fix the pricing.
AI in its present form is probably the strangest and the most paradoxical tech ever invented.
These things are clearly useful once you know where they excel and where they will likely complicate things for you. And even then, there's a lot of trial and error involved and that's due to the non-deterministic nature of these systems.
On the one hand it's impressive that I can spawn a task in Claude's app "what are my options for a flight from X to Y [+ a bunch of additional requirements]" while doing groceries, then receive a pretty good answer.
Isn't it magic? (if you forget about the necessity of adding "keep it short" all the time). Pretty much a personal assistant without the ability to perform actions on my behalf, like booking tickets - a bit too early for that.
Then there's coding. My Copilot has helped me dive into a gigantic pre-existing project in an unfamiliar programming language pretty fast and yet I have to correct and babysit it all the time by intuition. Did it save me time? Probably, but I'm not 100% sure!
The paradox is that there's probably no going back from AI where it already kind of works for us, individually or at the org level, and yet most of us don't seem to be fully satisfied with it.
The article here pretty much confirms the paradox of AI: yes, orgs implement it, can't go back from it and yet can't reduce the headcount either.
My prediction at the moment is that AI is indeed a bubble but we will probably go through a series of micro-bursts instead of one gigantic burst. AI is here to stay almost like a drug that we will be willing to pay for without seeing clear quantifiable benefits.
It’s a result of the lack of rigor in how it’s being used. Machine learning has been useful for years despite less than 100% accuracy, and the way you trust it is through measurement. Most people using or developing with AI today have punted on that because it’s hard or time consuming. Even people who hold titles of machine learning engineer seem to have forgotten.
We will eventually reach a point where people are teaching each other how to perform evaluation. And then we’ll probably realize that it was being avoided because it’s expensive to even get to the point where you can take a measurement, and perhaps you didn’t want to know the answer.
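And "measurement" here doesn't have to be fancy. Even a small labeled eval set and an accuracy number beats vibes; a minimal sketch of what that might look like, where run_system and the tiny eval_set are stand-ins of mine for whatever AI feature is actually being assessed:

    # Minimal evaluation harness: run the system over labeled cases, score, report.
    from typing import Callable

    eval_set = [
        {"input": "Customer asks how to reset their password", "expected": "password_reset"},
        {"input": "Customer wants a refund for a duplicate charge", "expected": "billing_refund"},
        {"input": "Customer reports the mobile app crashing on launch", "expected": "bug_report"},
    ]

    def evaluate(run_system: Callable[[str], str]) -> float:
        correct = sum(1 for case in eval_set if run_system(case["input"]) == case["expected"])
        accuracy = correct / len(eval_set)
        print(f"{correct}/{len(eval_set)} correct ({accuracy:.0%})")
        return accuracy

    # A deliberately dumb baseline, just to show the harness does its job:
    evaluate(lambda text: "password_reset")

The expensive part isn't this loop; it's building an eval set that actually reflects the work, which is usually where the avoidance happens.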
A hammer doesn't always work as desired, it depends on your skills plus some random failures. When it works however, you can see the result and are satisfied with it - congratulations, you saved some time by not using a rock for the same task.
> Software vendors keen to monetize AI should tread cautiously, since they risk inflating costs for their customers without delivering any promised benefits such as reducing employee head count.
... Wait, why would the _vendor_ care about that? It's the customers who should be cautious; unscrupulous vendors will absolutely sell them useless snake oil with no qualms, if they're willing to buy it.
> These leaders are increasingly making budget trade-offs between head count investment and AI deployment, and expect vendors to engage them on value and outcomes, not just features.
The cheek of them! Actually demanding that the product be useful!
Our ability to measure management productivity in general is basically nonexistent. It's an area of academic study and AFAIK the state-of-the-art remains not much better than [shrug emoji].
Remember that when they're wrecking your productivity by trying to twist your job into something they can measure in a spreadsheet.
>Consultant says software vendors risk hiking prices without cutting costs or boosting productivity
From what I know of the firm, it looks like clients have come to the right place if they want a consultant with great experience at hiking prices without cutting costs or boosting productivity.
They're probably salty that the only jobs of theirs they can figure out how to automate away with AI are those of the too-cheap-to-bother-automating Indian workers who author their PowerPoint decks.
> Software vendors keen to monetize AI should tread cautiously, since they risk inflating costs for their customers without delivering any promised benefits such as reducing employee head count.
That's easy. Reduce the headcount first, and then let the remaining team of poor and desperate, I mean, elite engineers and support teams <buzzword for use> AI for <more buzzwords for make dollars go up> /s.
When will boards replace executive leadership with AI? If Return to Office taught us anything, it's that we only need a couple of them, and the rest just copy and paste. Well, AI can do that! Also /s, but maybe just 50%.
https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...
> A successful "AI Transformation" would come in top-down, going after the most expensive headcount first.
This isn't a mistake. McKinsey consultants and their executives at their clients are a part of the same clique. You don't get into either without going to the right schools, being in the right fraternities, and knowing the right people. "Maximize profit" and "cut costs" are to be read as "keep the most money for ourselves in the form of earnings per share and dividends" and "pay fewer people". And since you can convert shares to money by gutting companies, there's no real incentive to remain competitive in the greater marketplace.
Do you still need an "AI Transformation" then? Sounds like just axe the CEO or cut their enormous salary = profit?
You model it as a fixed %, variable cost and run revenue sensitivities. It either meets your investment criteria or doesn't.
If the company doesn't have the resources available to execute something they've validated, then that's a funding issue that can be solved.
Either way, McK's structure doesn't make it "hard for a company to manage." The investment committee approves or rejects.
I'd be surprised if they'd do that for GenAI projects, maybe only for really good clients that pay them 50mln+ a year anyway
Either
- the execs are leaving a laughably easy 20m on the table McKinsey knew they'd make (how did they know, and why didn't we)
- they're dealing with insider information - especially dangerous if McKinsey is changing dependencies around.
- they're doing some creative accounting
- AI companies of course will try and sell you that you can reduce headcount with AI
- CEOs will parrot this talking point without ever taking a closer look.
- Everyone lower down on the org chart minus the engineers are wondering why the change hasn't started yet.
- Meanwhile engineers are ripping their hair out cause they know that AI in its current state will likely not replace any workers.
Pretty soon we will have articles like "That time that CEO's thought that AI could replace workers".
In my previous company, we would speculate about where to use AI and we were never sure.
In the new company we use AI for everything and produce more with substantially fewer people
I don't mean to be dismissive and crappy right out of the gate with that question, I'm merely drawing on my experience with AI and the broader trends I see emerging: AI is leveraged when you need knowledge products for the sake of having products, not when they're particularly for something. I've noticed a very strange phenomenon where middle managers will generate long, meandering report emails to communicate what is, frankly, not complicated or terribly deep information, and send them to other people, who then paradoxically use AI to summarize those emails, likely into something quite similar to what was prompted to be generated in the first place.
I've also noticed it being leveraged heavily in spaces where a product existing, like a news release, article, social media post, etc. is in itself the point, and the quality of it is a highly secondary notion.
This has led me to conclude that AI is best leveraged in such cases where nobody including the creator of a given thing really... cares much what the thing is, if it's good, or does it's job well? It exists because it should exist and it's existence performs the function far more than anything to do with the actual thing that exists.
And in my organization at least, our "cultural opinion" on such things would be... well if nobody cares what it says, and nobody is actually reading it... then why the hell are we generating it and then summarizing it? Just skip the whole damn thing and send a short, list email of what needs communicating and be done.
He's either lying or hard-selling. The company in his profile, "neofactory.ai", says they "will build our first production line in Dallas, TX in Q3." Well, we just entered Q4, so not that. On top of that, it has no mentions online and the website is just a "contact us" form.
Ding ding ding!
AI can absolutely reduce headcount. It already could 2 years ago, when we were just getting started. At the time I worked at a company that did just that, successfully automating away thousands of jobs that couldn't be automated pre-LLMs. The reason it ""worked"" was that it was outsourced headcount, so there was very limited political incentive to keep those people if they were replaceable.
The bigger and older the company, the more ossified the structures that want to keep headcount equal, and ideally grow it. This is by far the biggest cause of all these "failed" AI projects. It's super obvious once you notice that jobs that were being outsourced, or done by temp/contract workers, are being replaced much more rapidly. So is the fact that tech startups are hiring much less than before. I'm not talking about YC-and-co startups here -- those are global exceptions, indeed affected a lot by ZIRP and whatnot. I'm talking about the 99.9% of startups that don't get big VC funds.
A lot of the narrative on HN that it isn't happening and AI is all a scam comes, IMO, from understandable fear.
If you're still not convinced, think about it this way. Before LLMs were a thing, if I asked you what the success rate of software projects at non-tech companies was, what would you have said? 90% failure rate? To my knowledge, the numbers are indeed close. And what's the biggest reason? Almost never "this problem cannot be technically solved". You'd probably name other, more common reasons.
Why would this be any different for AI? Why would those same reasons suddenly disappear? They don't. All the politics, all the enterprise salesmen, the lack of understanding of actual needs, the personal KPIs to hit - they're all still there. And the politics are even worse than with trad. enterprise software now that the premise of headcount reduction looms larger than ever.
Trucks in the oil sands can already operate autonomously in controlled mining sites, but wide adoption is happening slowly, waiting for driver turnover and equipment replacement cycles.
I don't know; most of the companies doing regular layoffs whenever they can get away with it are pretty big and old. Be it in tech -- IBM/Meta/Google/Microsoft -- or in physical things -- car manufacturers, shipyards, etc.
The execs aren't the ones directly choosing, overseeing and implementing these AI efforts -- or, in the preceding decades, the software efforts. 9 out of 10 times, they know very little about the details. They may ""spearhead"" it insofar as that's possible, but there are tonnes of layers in between, each with their own incentives, which are required to cooperate to actually make it work.
If the execs say "Whole office full-time RTO from next month 5 days a week", they really don't depend on those layers at all, as it's suicide for anyone to just ignore it or even fake it.
That's what I've wondered. We don't just run out of work, products, features, etc. We can just build more but so can the competition right?
I am not kidding. In any large corp, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki. The current AIs are perfect for this.
It's hard to take this sentiment seriously from a source that doesn't have direct experience with the c-suite. The average person only gets to see the "public relations" view of the c-suite (mostly the CEO) so I can certainly see why a "LLM based mouthpiece" might be better.
The c-suite is involved in thousands of decisions that 90% of the rest of the world is not privy to.
FWIW - As a consumer, I'm highly critical of the robotic-like external personas the c-suite take on so I can appreciate the sentiment, but it's simply not rooted in any real experience.
That is exactly what it can't do. We need someone to hold liable in key decisions.
This is a puzzling assertion to me. Hasn’t even the cheapest Copilot subscription arguably replaced most of the headcount that we used to have of junior new-grad developers? And the Zendesks of the world have been selling AI products for years now that reduce L1 support headcount, and quite effectively too since the main job of L1 support is/was shooting people links to FAQs or KB articles or asking them to try restarting their computer.
Yup, it's just the latest management fad. Remember Six Sigma? Or Agile (in its full-blown cultish form; some aspects can be mildly useful)? Or matrix management? Business leaders, as a class, seem almost uniquely susceptible to fads. There is always _some_ magic which is going to radically increase productivity, if everyone just believes hard enough.
But managers will not obsolete themselves.
So right now AI should be used to monitor and analyze the workforce and find the efficiency that can be achieved with AI.
I mean, nah, we've seen enough of these cycles to know exactly how this will end: with a sigh and a whimper and the Next Big Thing taking the spotlight. After all, where are all the articles about "that time CEOs thought blockchain could replace databases", etc.?
Each human can be a bit more productive; I fully believe 10-15% is possible with today's tools if we do it right. But each human has their own unique set of experience and knowledge. So I do my job a bit faster, and you do your job a bit faster. But if we are a team of 10 and we all do our jobs 10% faster, that doesn't mean you can let one of us go. It just means we all do our jobs 10% faster, which we probably waste by drinking more coffee or taking longer lunch breaks.
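To put rough numbers on that claim (team size and speedup as stated above, everything else purely illustrative):

```python
# Back-of-the-envelope check: does a 10% per-person speedup let a
# 10-person team drop one head? Illustrative arithmetic only.

team_size = 10
speedup = 0.10  # each person is 10% faster

capacity_all = team_size * (1 + speedup)               # 11.0 person-units of output
capacity_minus_one = (team_size - 1) * (1 + speedup)   # 9.9 person-units

print(f"capacity with everyone:     {capacity_all:.1f}")
print(f"capacity after cutting one: {capacity_minus_one:.1f}")

# Per-person speedup the remaining 9 would need just to match today's 10:
required = team_size / (team_size - 1) - 1
print(f"speedup needed to actually drop a head: {required:.1%}")  # ~11.1%
```

So a 10% boost across a team of 10 yields roughly one extra person-worth of output, but cutting a head would still leave you slightly short of today's capacity; the gain shows up as more throughput (or longer lunch breaks), not fewer people.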
The quiet part out loud phrase is overused.
If you ask me, that's the real long game on AI. That is exactly why all these billionaires keep pouring money in. They know the only way to continue growth is to start taking over large sections of the economy.
They found a hack, and that is loading up the intangible column. In that list, reputation/brand risk always makes an appearance. "If you don't do this project, this terrible thing might happen and we might suffer reputational risk. We estimate 2x millions of loss due to reputation being harmed." And presto! There is a case for the project.
It's like a cloud migration project. Total run costs are part of the pitch, but you add on extra things like added security, automatic updates, etc., and it becomes an easier sell.
With AI tools being so hyper-focused on "productivity gains", it is going to be a tough sell. Especially because users will resist it, and the productivity boosts, if any, will remain low.
Now maybe Notion customers love all these AI features but it was super weird to see that stuff so prominently given my understanding of what the company was all about.
Is your product a search engine? It's AI now. [1][2]
Is it a cache? Actually, it's AI. [3]
A load balancer? Believe it or not, AI. [4]
[1] https://www.elastic.co/
[2] https://vespa.ai/
[3] https://redis.io/
[4] https://www.f5.com/
(And I develop "AI" tools at my day job right now...)
But I'm guessing their growth was linear, and hard fought, after initial success over tools like Atlassian's which are annoying and expensive.
So to get back to hypergrowth, they had to stuff AI in every nook and cranny.
Very successful in my domain on a very successful project.
I wrote an insane amount of code but more importantly I wrote libraries across multiple languages that prevented an insane amount of code from being written.
We would have literally 1k people during quarterly planning, and did distributed agile and all this org stuff.
(It was interesting anthropologically to me because I operated outside the game, I was just waiting for a ~non-compete I had signed for a profitable technical co-founder exit to end to jump back into starting a new company in the same space.)
And the whole thing worked. I was very high-profile on the project, probably the highest-paid IC, and the company hired me away from the agency, and I worked there until starting my company.
There are 3 layers: the deal makers, the coordinators, and the implementers.
You cannot easily automate out the deal makers, because they are the loci of trust, legal/contracting and power (they allocate the resources of the firm: people, budget, etc.). Someone has to hang if stuff goes wrong, and someone has to deal with executive petulance and fragile egos.
Now let's look at the middle layer and the implementers.
Let's assume for a minute we are looking at a big project where the existing company has hamstrung itself with silos, infighting and low-productivity teams. This is just framing to understand the next part; it can cut either way.
The middle layer in consulting is big because other companies have big middle layers as well. Basically, what is happening is tribal warfare: you need bodies, voices and change-management teams to propagate what is happening, otherwise the existing group will slow-play the leadership and the project never gets done. If the middle layer is 1-to-1, every native sees they can be replaced. Many Big 4 firms allow poaching for this very reason: it blunts the threat of non-compliance and also gives an easy, congenial out to people who are ready to exit the consultant lifestyle.
The implementation layer can then do the work, and it's done mostly by juniors, because juniors don't have to be politically savvy; they can work to task.
Just a small slice of things I realized while consulting.
"Only" 30%. Interesting framing.
> The firm's earlier research suggested that 2027 would be the first year when AI technology would be able to match the typical human's performance in tasks that involve "natural-language understanding." Now, McKinsey reckons it will happen this year.
> Generative AI will give humans a new "superpower", and the economy a much-needed productivity injection, said Lareina Yee, a senior partner at the firm and chair of McKinsey Technology, in the report.
- https://archive.is/mhYIn
https://www.mckinsey.com/industries/technology-media-and-tel...
All of this said, using AI in your back end takes a huge amount of time from your users and employees. You have to vary multiple prompts, you have to make the output sane, touch it up, etc. The most useful part of AI for me has been using it to learn something new, or push through a task that I otherwise couldn't do. I was able to partially rewrite a logging window to reduce CPU use significantly. It took me over two weeks of back and forth with AI to figure out a workable solution and implement it into the software. A competent programmer probably could have done it better than I did in less than an hour. There's no business benefit to a help desk person spending 2 weeks writing code that an engineer would be much better suited to handling. But maybe that engineer could write it in 10 minutes instead of an hour if they used AI to understand the software first.
Likely, no. In my industry, I see a fraction of ICs using it well, a fraction of leadership using it for absolute dog shit idea generation, and the remainder using it to make their jobs easier in the short run, while incurring debt in the long run since nobody is "learning" from AI summaries and most people don't seem to be reading the generated "AI notes" sent in emails.
By and large, I think AI is going to hurt my workplace on its current trajectory, but that won't be realized until we are in a hole that's hard to dig out of.
What I’d be more interested in is an AI paralegal that works for me, not for the signing tool or the counterparty, where I control its prompt so I can try to focus it on possible ways I can be screwed with this contract, and what recourses I will or won’t have.
All this is a pretty textbook setup for how this bubble finally implodes, as companies fail to deliver on their AI investments and come under fire from shareholders for spending a ton with little return to show for it.
Additionally once the AI vendors have locked these companies to their ecosystem, the enshittification will start and the companies who reduced their headcount to bare minimum will start to see why that was a really, really bad idea.
- The moment AI is actually good enough to replace us, it will also be incredibly easy to create new software/apps/whatever. There could/would be a billion solo dev SAAS companies eating the lunch of every traditional tech org.
- People (Executives) seem to underestimate just how much of the work is iterating and refining a product over a long time. Getting an LLM good enough to complete a Jira task is missing the point.
- IMO, LLMs are also completely draining the motivation of workers. A lot of software devs are intrinsically motivated by solving the problem. If your role is watered down to "prompt the chat bot and babysit what comes out", the motivation disappears. This also absolutely destroys any of the creativity/discovery that comes out of solving the task hands-on.
I used to love coding, and did it a ton. Then it became less and less part of my job, and I started hating coding. It was so frustrating when I knew exactly what needed to be done in the code, but had to spend the time doing low value stuff like typing syntax, tracing through the code to find the right file to edit, etc when I'm already strapped for time.
LLMs and agentic coding tools have allowed me to not spend time on the low-value tasks of typing, but instead on the high-value tasks of solving problems like you mentioned. Just interesting the different perspectives we have.
I think both of the viewpoints are valid depending on where you're at in your career.
We can imagine a junior developer who isn't quite bored with those low-value tasks just yet.
As you grow more senior/experienced, the novel problems become harder to find -- and those are the ones you want to work on. AI can certainly help you cut through the chaff so you have more time to focus on those.
But trends are trends and AI is increasingly getting better at solving the novel/interesting problems that I think you're referring to.
Everyone's different and I know there are folks who are excited to not have to write a single line of code. I'd wager that's not actually most engineers/developers though.
People still garden by hand because it's innately satisfying.
I think the general point here is true, but it's also brilliant framing from a company selling consulting services.
> Price levels: How should vendors set price levels when the cost of inferencing is dropping rapidly? How should they balance value capture with scaling adoption?
This is written for B2B target clients as if it's pulling back the veil on pricing strategy and negotiating. Hire McKinsey to get you the BEST™ deal in town.
These things are clearly useful once you know where they excel and where they will likely complicate things for you. And even then, there's a lot of trial and error involved and that's due to the non-deterministic nature of these systems.
On the one hand it's impressive that I can spawn a task in Claude's app "what are my options for a flight from X to Y [+ a bunch of additional requirements]" while doing groceries, then receive a pretty good answer.
Isn't it magic? (If you forget about the necessity of adding "keep it short" all the time.) Pretty much a personal assistant without the ability to perform actions on my behalf, like booking tickets -- a bit too early for that.
Then there's coding. My Copilot has helped me dive into a gigantic pre-existing project in an unfamiliar programming language pretty fast and yet I have to correct and babysit it all the time by intuition. Did it save me time? Probably, but I'm not 100% sure!
The paradox is that there's probably no going back from AI where it already kind of works for us, individually or at the org level, and yet most of us don't seem to be fully satisfied with it.
The article here pretty much confirms the paradox of AI: yes, orgs implement it, can't go back from it and yet can't reduce the headcount either.
My prediction at the moment is that AI is indeed a bubble but we will probably go through a series of micro-bursts instead of one gigantic burst. AI is here to stay almost like a drug that we will be willing to pay for without seeing clear quantifiable benefits.
We will eventually reach a point where people are teaching each other how to perform evaluation. And then we'll probably realize that it was being avoided because it's expensive to even get to the point where you can take a measurement, and perhaps you didn't want to know the answer.
With AI you have a thing you can't quite trust under any circumstance even if it's pretty good at everything.
And I did not speak out
Because I was not an artist
It doesn't matter. People are convinced it's a miracle technology, so I'm just a backwards luddite resisting progress
... Wait, why would the _vendor_ care about that? It's the customers who should be cautious; unscrupulous vendors will absolutely sell them useless snake oil with no qualms, if they're willing to buy it.
> These leaders are increasingly making budget trade-offs between head count investment and AI deployment, and expect vendors to engage them on value and outcomes, not just features.
The cheek of them! Actually demanding that the product be useful!
Remember that when they're wrecking your productivity by trying to twist your job into something they can measure in a spreadsheet.
From what I know of the firm, it looks like clients have come to the right place if they want a consultant with great experience at hiking prices without cutting costs or boosting productivity.
That's easy. Reduce the headcount first, and then let the remaining team of poor and desperate, I mean, elite engineers and support teams <buzzword for use> AI for <more buzzwords for make dollars go up> /s.
When will boards replace executive leadership with AI? If Return to Office taught us anything, it's that we only need a couple of them -- the rest just copy and paste. Well, AI can do that! Also /s, but maybe just 50%.