I run a small open source LLM inference company, Synthetic.new. As far as I can tell, CNBC isn't reporting this accurately: the problem isn't that Oracle is building "yesterday's data centers"; they're building Blackwell DCs! Those are today's DCs.
The problem appears to be that Oracle is building today's DCs... Tomorrow. And by the time they come online, Vera Rubins will be out, with 5x efficiency gains. And Oracle is unlikely to want to drop the price of Blackwells 5x, despite them being 5x less efficient.
It's a little unclear to me how bad this is. Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs. Tada! Tomorrow's DCs, tomorrow.
OTOH it's possible someone at Oracle screwed up and committed to buying Blackwells at today's prices, delivered tomorrow. Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.
Regardless, CNBC's reporting seems pretty unclear on what actually happened and whether this is actually bad or not.
They are saying what you are saying; at least Deirdre Bosa did. I think there are a lot of folks internally who don't understand the gravity of it and keep questioning it.
You are right about the building of today's DCs. There is a small part of me that feels Oracle might be a bit toxic long term with all the debt he and his kid have taken on. And this could be the first reaction to it.
A 5x improvement in energy efficiency in just the GPUs translates to more like a 50% reduction in total power usage, which is significant but doesn't warrant an 80% reduction in pricing. Especially since Nvidia will charge more for the same card - they have been pricing things pretty aggressively.
And on the DC side they will be building to a power and heat budget. If Vera Rubin changes the power-density-per-rack equation, that may have some impact. But thinking rationally, if the kW per sq ft is no higher than Blackwell, no problem. And if the flops are a lot better, then even if the kW per sq ft is higher you can just space the racks out a little.
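To put rough numbers on that (the GPU share of total facility power below is an assumed figure, not something from the article):

    # Back-of-the-envelope: how much does a 5x GPU efficiency gain
    # cut total facility power? Assumes GPUs draw ~60% of the total
    # (assumed; cooling, networking, CPUs and conversion losses
    # make up the rest).
    gpu_fraction = 0.60
    gpu_gain = 5.0

    new_total = (1 - gpu_fraction) + gpu_fraction / gpu_gain
    print(f"total power vs before: {new_total:.0%}")  # ~52%
    # A 5x GPU gain is roughly a 2x facility-level gain, which is
    # significant but nowhere near an 80% price cut's worth.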
Google, Meta, etc don't have to wait 12 or 24 months for their big data center to open. They already have lots of DCs to cram all the NVidia cards into, right now.
All DCs are big concrete rooms that can supply so much power per unit area and remove so much heat per unit area (the two are related, of course, since the heat comes from dissipating the power). The variation is just in the density of whatever sort of fancy resistor you plan to put in the concrete room.
This feels a bit overdone. OpenAI has had problems with every compute partner they've ever had. It's just not a solvable problem, who would they go to to allegedly get next-gen chips quicker?
> I hope the lawnmower goes bankrupt with this and the hostile WB take over.
Unfortunately there is no chance of that happening.
At his level of personal wealth there is no realistic scenario that leads to personal bankruptcy. In our current capitalist society once you're into the billions you're "too big to fail" and you have unlocked the infinite money glitch.
The only consolation is the lawnmower is 81 and thus is going to be dead soon (even the mega-wealthy can't plastic surgery themselves out of this outcome, at least not yet) and he can't take any of it with him. But all indications point to his progeny having aspirations to be even more damaging to society than he has been.
Piketty’s central argument is that when the rate of return on capital (r) exceeds the rate of economic growth (g), wealth concentrates over time into fewer and fewer hands. This is his now-famous r > g inequality.
The implication is that capitalism, left to its own devices, doesn’t naturally spread wealth around. It does the opposite. The relatively egalitarian period of the mid-20th century (roughly 1930s-1970s) was the historical exception, driven by two world wars, the Great Depression, and deliberate policy choices like progressive taxation. The longer historical pattern, which Piketty traces with extensive data going back to the 18th century, is one of increasing concentration.
His practical prescription is a global progressive tax on wealth (not just income) to counteract this tendency. He acknowledges this is politically difficult but argues it’s the most straightforward mechanism to prevent a return to the kind of patrimonial capitalism that defined the Gilded Age and the Belle Époque, where inherited wealth dominated and social mobility was minimal.
The book’s real contribution was less the theoretical claim (which economists had gestured at before) and more the empirical work. Piketty and his collaborators assembled an unprecedented dataset on wealth and income distribution across multiple countries and centuries, which gave the argument a weight that prior discussions lacked.
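For intuition, here is a toy compounding sketch of r > g (the rates are illustrative, not Piketty's data):

    # Toy r > g illustration: capital compounds at r while incomes
    # grow with the economy at g. Rates are illustrative only.
    r, g = 0.05, 0.015          # return on capital vs growth
    wealth, income = 1.0, 1.0   # normalized starting levels

    for year in range(100):
        wealth *= 1 + r
        income *= 1 + g

    print(f"wealth/income after 100 years: {wealth / income:.1f}x")
    # ~30x: absent wars, taxes, or consumption, capital's share
    # keeps ratcheting upward, which is the book's core dynamic.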
>The book’s real contribution was less the theoretical claim (which economists had gestured at before) and more the empirical work.
Empirical work... like conveniently ignoring the fact that there are far fewer old-money billionaires than we'd expect?
>For these lucky people, the experience of the Vanderbilts and their contemporaries offers a cautionary tale. At the turn of the 20th century, America’s census recorded about 4,000 millionaires, note Victor Haghani and James White, two wealth managers, in their book, “The Missing Billionaires”. Suppose a quarter of them had at least $5m (the richest had hundreds) and had invested it in America’s stockmarket. Had they then procreated at the average rate, paid their taxes and spent 2% of their capital each year, their descendants today would include nearly 16,000 old-money billionaires. In reality, it is a struggle to find a single one who traces their fortune back to the first Gilded Age.

https://www.economist.com/finance-and-economics/2025/06/12/h...
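The quoted arithmetic is easy to reproduce. A sketch, where the real return rate is my assumption rather than a number from the book:

    # Reproducing the flavor of the Haghani/White claim. The ~7%
    # real equity return is assumed; the 2% spending rate and $5m
    # starting fortune come from the quote.
    fortune = 5e6
    real_return = 0.07
    spending = 0.02
    years = 125                 # roughly 1900 -> today

    terminal = fortune * (1 + real_return - spending) ** years
    print(f"${terminal:,.0f}")  # ~$2.2bn in today's dollars
    # With ~1,000 such families compounding like this, even split
    # among descendants you'd expect thousands of old-money
    # billionaires. Almost none exist.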
>With J6, in the matter of 2 or so years the FBI has secured over 1000 convictions.
Again, large numbers, but no context. How many people did you think were at the riots? 10k? 50k?
Moreover, Jan 6th was an event that definitely happened. The same can't be said for whatever happened at Epstein's island. The island exists, Epstein's a convicted sex offender, and people flew there, but associating with sex offenders isn't a crime, no matter how despicable it might seem.
J6 is not a strong counterexample, IMHO. Part of the problem with Epstein is "proof beyond a reasonable doubt," for which evidence is needed--and, it appears, hard to come by. Whereas with J6, there were thousands of hours of footage showing the crimes being committed (and in many cases bragged about), which made prosecutions much easier.
Please provide a list of all multi-billionaires who have somehow managed to lose any significant portion of their wealth outside of a divorce combined with bad marriage planning. And even in those rare cases, they don't approach bankruptcy.
It isn't that they get bailed out by the government (like the banks in 2008), it is that at the scale of their wealth there is no realistic way to lose it fast enough to make any significant negative difference when the neutral state of wealth at that scale is to snowball ever larger (mostly because we refuse to tax it appropriately).
> At his level of personal wealth there is no realistic scenario that leads to personal bankruptcy. In our current capitalist society once you're into the billions you're "too big to fail" and you have unlocked the infinite money glitch.
This is plainly false. There are plenty of examples, even recent ones, of billionaires losing their fortunes or going bankrupt. Often these come with criminal prosecution, because they get desperate and try illegal ways to hang on to their wealth. Sam Bankman-Fried, Elizabeth Holmes, and several other examples come to mind.
There are a lot of stories of billionaires getting too risky with their investments or too concentrated in businesses and losing the majority of their wealth. The Barclay story, Jim Justice, the old Peloton CEO.
It’s not a common outcome because you have to try hard to screw up that badly when you have over a billion dollars in wealth. Parking it anywhere in common investments would leave you and your descendants set forever.
> Sam Bankman-Fried, Elizabeth Holmes, and several other examples come to mind.
Billionaires that were dumb enough to attempt to screw even bigger billionaires. Sure you can find exceptions to the rules, but Ellison isn't going to be one of those.
I don't believe that Stargate is "yesterday's data center". It's being built in multiple phases and Oracle has access to Nvidia's roadmap. They know 200 kW/rack is coming. The newer phases could easily be built out to support Rubin and Feynman.
With respect to consumption, it’s pretty efficient vs older traditional servers, though I know workloads like that aren’t completely fungible. Nonetheless it bears keeping in mind that a single GB200 NVL72 rack provides 1.4 ExaFLOPS of AI compute (at FP4 precision, ideal circumstances, but this is envelope math all around). So it’s power efficient, for what it is.
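As even rougher envelope math (the rack power figure below is an assumption pulled from public ballpark specs, and FP4 peak numbers are best-case):

    # GB200 NVL72 efficiency sketch. Both figures are ballpark:
    # ~1.4 exaFLOPS FP4 peak (ideal), ~120 kW per rack (assumed).
    flops = 1.4e18
    watts = 120e3

    print(f"{flops / watts / 1e12:.0f} TFLOPS per watt")  # ~12
    # A traditional 2-socket server at ~1 kW delivers a few TFLOPS,
    # so per watt this is orders of magnitude more raw arithmetic,
    # albeit at much lower precision and concentrated in one rack.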
Oh, I have no doubt it is functionally efficient. I'm just amazed, given the system deployments I've been party to, at how much energy these racks use compared to the tiny per-rack energy usage of those older systems, comparatively speaking, given their functionality.
Like, what in the good god damn are we using all this energy for?
You left out overthrowing governments with customized targeted propaganda, jamming citizen discussion with noise, artificially creating and nourishing contrarian cells in democratic societies. The machines will now be programming people.
In theory the water stays clean and can be reused. But I assume these cheapskates will go for evaporative cooling every time? Then yeah, we need laws against that.
Some of the reason for the high density is that you need devices physically close to each other to share such bandwidth. It’s not because we’re limited by the physical building space, because we can construct buildings all day long. Sending bits around at ultra high speed is hard and you need to keep all of the devices physically close to avoid having your interconnect costs explode.
Interestingly the realm in which I have domain experience has similar constraints, but based primarily on physical transport latency and less on bandwidth. There has been a move in some spaces towards hyper-dense deployments, but it’s a very small amount of the total compute capacity due to other limitations.
Still, the world I’m used to operating in is typically 5-10 kVA/rack.
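The physics behind the "keep everything close" constraint is easy to sketch (the ~5 ns/m propagation figure is standard for fiber/copper; the budget framing is my assumption):

    # Why density matters: signals propagate at ~5 ns per meter in
    # fiber or copper (speed of light divided by refractive index).
    ns_per_meter = 5

    for meters in (2, 30, 300):
        rtt_ns = 2 * meters * ns_per_meter
        print(f"{meters:>4} m -> {rtt_ns:>5} ns round trip")
    # Intra-rack hops stay in the tens of ns; stretch the same
    # fabric across a campus and wire delay alone eats the entire
    # microsecond-scale budget these interconnects are built for.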
So what's the theory that goes with this about why CNBC are reporting that OpenAI are walking because they want newer Nvidia hardware? CNBC are clueless? People at OpenAI are lying to CNBC? CNBC are fabricating stories while drunk?
There has to be some theory to explain the story to be consistent with this comment.
I agree with you more than I agree with the parent comment.
To use the hit HBO TV show Silicon Valley analogy, it is far more likely that "the bear is sticky with honey" will happen at Oracle than at OpenAI. Some kind of game of telephone gone wrong at some point, and now the people responsible at Oracle must double down in order to kick the can to the next quarter and not appear clueless.
Statutory disclaimer: I am not affiliated with either OpenAI or Oracle and have no insider information. All of this is mere conjecture and has no basis in reality.
Plenty of enterprise server hardware (racks, servers, RAM, disks) does have an active secondhand market after 3-5 years of use, but I think GPUs are too specialized for it to be viable. I doubt anyone has the setup to run an H200 in their home rig.
I also don't think companies are going to have mandatory replacement cycles for GPU hardware the same way they do for everything else, because:
1. It is an order of magnitude (or more) more expensive.
2. It isn't clear whether Moore's law will apply to the AI GPU space the same way it has for everything else.
Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.
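The replace-or-keep math backs that up. A sketch where every figure is an assumption for illustration:

    # When does a perf/watt jump justify replacing a paid-off GPU?
    # All figures below are assumptions for illustration.
    old_power_kw = 0.7        # H100-class card, already paid off
    perf_gain = 3.0           # new card: same work in 1/3 the power
    power_price = 0.10        # $/kWh, all-in (assumed)
    new_card_cost = 30_000    # assumed street price

    hours = 24 * 365
    old_cost = old_power_kw * power_price * hours
    savings = old_cost - old_cost / perf_gain
    print(f"energy saved per year: ${savings:,.0f}")         # ~$409
    print(f"payback: {new_card_cost / savings:.0f} years")   # ~73
    # Electricity alone never pays for the upgrade; the real driver
    # is rack space, power caps and opportunity cost.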
> Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.
That's exactly the point.
Performance/watt is increasing so much gen-to-gen that it no longer makes sense to run older hardware.
You can absolutely run e.g. a datacenter-level A100 at home; there are adapters from the SXM to the PCIe socket. I haven't seen people running SXM versions of H100s this way, but that could be due to the price factor only.
Well, by the time they become obsolete you can run that computing on a Mac with no special cooling, so I really doubt they will be of any use. Maybe in some parts of the world where electricity is cheap. If someone really wants to find out, perhaps watching how the crypto ASICs story played out could help.
Well, technically true, but I would wager that the home lab is going to require increasingly distinct and unusual adaptations to retrofit the hardware to home use.
New stuff is all liquid cooled by default and that's a paradigm shift for your average home lab.
I'm less aware of exactly what's happening on the power side of things, but I think some of the architectures are now moving to relatively high-voltage DC throughout and then down-converting it to low voltage right before it's used. So not exactly plug-and-play with your average NEMA 5-15 outlet.
> I doubt anyone has the setup to run an H200 in their home rig.
There are PCIe versions of these, right? And another comment is saying there are PCIe adapters too. It "only" requires 600 to 700 W. It's not out of reach for everybody.
If the used regular server market is any indication, you can find, after a few years, a lot of enterprise gear at deeply discounted prices. A CPU costing $4K brand new going for $100 after a few years: stuff like that.
A friend has got a 42U rack and so do some homelab'ers. People have been running GPU farms mining cryptocurrencies or doing "transcoding" (for money).
It's not just CPUs at 1/40th of their brand new price: network gear too. And ECC RAM (before the recent RAM craze).
I'm pretty sure that if H200s begin to flood the used market, people shall quickly adapt.
> Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.
I agree with that. But if they resell old H200s, people are resourceful and shall find a way to run these.
Would it even require a particularly high level of resourcefulness? Purchase the GPU along with the mobo that slots it. It's not as though companies typically swap out CPU and GPU while keeping the rest of the box.
They should max out a bit below 6 kW? The H100 SXM5 is 700 W, which would place the system at 5.6 kW plus change. Too much for a standard circuit, but well within the bounds of a residential appliance.
It's a monolithic 8U rackmount appliance so perhaps a dishwasher would make for a decent size comparison?
Definitely no good if you rent but homeowners should have little to no difficulty. The sort of people interested in such gear usually have multi kW racks already.
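Quick sanity check on the electrical side (US residential figures assumed):

    # Can a house feed an 8x H100 box? US residential assumptions.
    gpus, watts_each = 8, 700
    overhead = 1.0e3          # CPUs, fans, PSU losses (assumed)

    total_w = gpus * watts_each + overhead
    volts = 240               # dryer/range-style circuit
    print(f"{total_w / 1000:.1f} kW -> {total_w / volts:.0f} A at {volts} V")
    # ~6.6 kW is ~28 A at 240 V: a 30-40 A branch circuit, in the
    # same ballpark as an electric range. Not a normal wall outlet,
    # but routine work for an electrician.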
Last I checked AWS is still offering g4dn instances that run on NVIDIA T4 GPUs, which were first released in 2018. I think most people underestimate how long hyperscalers can keep these things running profitably after they depreciate, and you probably don’t want anything they throw away.
My last employer is still running a bunch of otherwise discontinued g3 instances with 2015 era GPUs.
It's likely the GPU boards are designed for water-cooled data center racks and might not fit in a regular PC case. It's also possible the PCBs the GPUs are mounted to aren't standard PCIe cards that fit into an ATX case.
I bought a used NEC SX Aurora TSUBASA (PCIe x16 board that looks like a GPU board) and realized it has no fans. The server case it is designed to fit into is pressurized by fans forcing air through eight cards on a special 4 + 4 slot motherboard. I have to stack and mount three 40mm fans on the back.
They are built to physically last 5-7 years in 24/7 datacenter use, but they have an effective lifetime of just 3-4 years; after that their value has depreciated and electricity and infrastructure costs dominate. Meta did a benchmark where 9% of the chips failed every year; 'infant mortality' is much higher in the first 3 months of use.
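Taking the quoted 9% figure at face value, the compounding is worth spelling out:

    # If 9% of accelerators fail per year (the quoted figure), what
    # fraction of a cluster survives its depreciation window?
    annual_failure = 0.09

    for years in (1, 3, 5):
        surviving = (1 - annual_failure) ** years
        print(f"after {years} yr: {surviving:.0%} still alive")
    # 91% / 75% / 62%. Over a 100k-GPU fleet that's thousands of
    # swaps a year, which is why repairability and support
    # contracts matter so much.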
9% is an absurd failure rate for solid state electronics. Particularly considering the profit margins. I assume it's related to the power densities involved. Would you happen to recall the source?
I've written about this elsewhere but I predict there will be a significant secondary market for repurposing parts of datacenter GPUs (for example, RAM chips) by desoldering them and soldering them onto new PCBs that fit PC/consumer use cases.
Depending on the elemental composition, it could definitely be worthwhile to recycle wherever scale is practical. For giant datacenters and companies using hundreds of thousands or millions of GPUs, that adds up to a lot of gold and other valuable elements.
In order to take advantage of that, someone needs to be positioned to process all that material economically, and to make the logistics achievable for the big players. If it costs Facebook $10 million to store and transport phased-out GPUs vs just sending them to a landfill, they're not going to do it. If they get $100k for recycling - probably not going to do it. If they pocket $5 million, they will definitely contract that out, especially if it costs $50 million to build out the infrastructure to handle it.
Probably a good company idea: transport, disposal, and refurbishment of out-of-cycle GPUs and datacenter assets. Creating a massive recycling pipeline for recapturing all the valuable elements is a pretty good niche.
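The decision rule in the comment above reduces to a one-liner (the dollar figures are the parent comment's hypotheticals):

    # Recycle-vs-landfill decision, using the hypothetical figures
    # from the comment above.
    def worth_recycling(recovery_revenue: float, logistics_cost: float) -> bool:
        """Recycle only if recovered value beats the logistics cost."""
        return recovery_revenue > logistics_cost

    print(worth_recycling(100_000, 10_000_000))     # False: landfill
    print(worth_recycling(15_000_000, 10_000_000))  # True: net ~$5m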
You send them back to Nvidia or a third party e-waste recycler at end of life. Sometimes they're resold and reused, but my understanding is that most are eventually processed for materials.
I previously ran 150,000 AMD GPUs in all conditions at 100% utilization for years. I currently have a multi-million-dollar cluster of enterprise AMD GPUs.
A couple real world points:
1. They generally don't just fail. More likely a repairable component on a board fails and you can send it out to be repaired.
2. For my current stuff, I have a 3 year pro support contract that can be extended. Anything happens, Dell goes and fixes it. We also haven't had someone in our cage at the DC in over 6 months now.
I think the more interesting question is how much longer Oracle has, and at what point a hostile takeover makes sense.
Their databases are heavily used in government, banking and other large industries which have been slower to adapt to change and struggle to migrate away. At what point does purchasing Oracle to gain customer share, existing data centres and the opportunity to migrate customers to your cloud platform make more sense than competing?
They still have a high market value. However, the debt they will need to service will result in ongoing price increases which will encourage people to migrate away. Over time they will struggle to service the debt and a buyout will be the best of the bad options.
They're one of ~3 main companies in the US that will sell you quantum computers, and the only one offering a quantum PaaS.
They do a lot of stuff. Also own Hashicorp now, so they have things like: Ansible / RedHat Linux (already owned), Terraform, Consul, Nomad, Packer, etc. A lot of "let's build modern infra" tooling.
Data centres are actually prohibited from using consumer level GPUs via license restrictions. The GPUs they use are largely SXM (server connector) and if you did somehow get one of the PCIe variants (with enormous power and cooling needs) most don't even support gaming APIs.
Yeah, it used to be true that server GPUs at least somewhat resembled their gaming counterparts (i.e. Nvidia Tesla server components from 12+ years ago); they were still PCIe cards, just with server-optimized coolers, and fundamentally shared the same dies that the gaming and professional cards used.
That stopped being true many years ago though, and the divergence has only accelerated with the advent of AI datacenter usage. The form factor is now fundamentally different (SXM instead of PCIe); you can adapt an SXM card to PCIe with some effort [1], but that may not even be worthwhile because 1. the power and cooling requirements for the SXM cards are radically different from a desktop part's and more importantly 2. the dies are no longer even close to being the same. IIRC, Blackwell AI chips straight up don't have rasterization hardware onboard at all; internally they look like a moderate number of general SMs attached to a huge number of tensor cores. Modern AI GPUs are fundamentally optimized for, well, matmuls, which is not at all what you want for gaming or really any non-AI application.

[1] https://l4rz.net/running-nvidia-sxm-gpus-in-consumer-pcs/
This is a pretty damning headline, and we are still talking about Blackwell. I guess that is how fast the whole segment is moving, but OpenAI only looking for the most advanced chips feels more like an excuse to walk away from this deal than a problem with the stack and Oracle. It feels to me that OpenAI is cutting down on commitments and cost as it doesn't see the revenue pipeline building. Maybe someone with more knowledge of the reality can comment and correct me.
I never thought I would see the day, but my stodgy, lumbering company just banned new Oracle databases. Everyone hates Oracle, and only does business out of necessity. I think more and more companies are trying to extricate themselves from Oracle legal, so Oracle needs a new way to leech onto corporations for the coming decades. AI is the best play in sight.
Did you guys go out to celebrate? It's not too late for Ding Dong, the Bitch is Dead.
If you're Oracle it's not necessarily a bad thing if you build an antiquated data center. Isn't much of their customer base legacy customers they are rent-seeking from in perpetuity? Those people are never going to be doing cutting-edge AI. They will do what they have always done: adopt new technologies right at the nadir of the Trough of Disillusionment.
It’s a huge gamble, but they have no choice but to take it. Most of their software will be rendered obsolete by AI (I’ve vibecoded replacements saving millions already; companies everywhere are doing this right now).
So they have to hope they’re a part of the future in the AI capacity because their SaaS business is going to take a big hit.
YTD performance didn’t fully bake this reality in. It was seen as them having 2 huge revenue streams; the market is realizing that AI is a threat to SaaS and baking that into stonks.
The actions of Oracle lately seem extremely misaligned with maximizing stonks - it's extremely political, more than is necessary to merely keep in the good graces of the current administration.
The missing part is that current GPUs are already money-making machines in 2026, and you just need to serve that. I'm sure this is a procurement take between Nvidia and such a big vendor as Oracle.
> The missing part is that current GPUs are already money-making machines in 2026
Are they? Unless you are Nvidia that is very far from the case.
OpenAI's current revenue is $25 billion a year. They are expected to spend $600 billion on infrastructure in the next 4 years to sustain and grow that revenue.
Amazon, Google, Microsoft and Meta are spending a combined $650 billion on infrastructure in 2026 alone.
The story is the same across the rest of the industry.
None of these investments are immediately profitable. And it remains to be seen whether they eventually will be or not.
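The mismatch is stark even as simple division (figures from the comment above):

    # Revenue vs committed capex, figures from the comment above.
    revenue_per_year = 25e9
    capex_total, years = 600e9, 4

    ratio = (capex_total / years) / revenue_per_year
    print(f"capex is {ratio:.0f}x annual revenue")  # 6x
    # ~$150bn/yr of spend against $25bn/yr of revenue: even with
    # fast growth, the gap has to be bridged by outside capital.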
Anthropic in 2026 alone added several billion in revenue. This is insanely fast. In my company, LLM costs are already eating into hiring budgets to a certain extent. We don't buy GPUs. We are paying those who will.
$25 billion from just one company. There will be 6-7 companies like this. And they've just scratched the surface. The penetration in many areas is almost zero. Yet.
This is general compute hardware as I understand it. It will not go unused no matter what happens. If new algorithms appear that reduce the number of calculations needed per token for an llm they are probably still good. It's not like silicon advances are accelerating.
If it's built in stages, each stage will have newer variants of hardware, I imagine.
What the article did not mention is that Oracle founder, executive chairman and biggest stockholder Larry Ellison is currently bankrolling his kid David's bid to monopolize the entire US news industry so that it is more friendly to Trump, Netanyahu and various other right-wing ideologists.
David Ellison is fueling his buying spree with debt guaranteed by his dad's Oracle shares. The various assets David has bought are already suffering losses of viewership because viewers are turned off by their new ideological slant.
Usually debt investors are not worried if the stock price is high. Debt has precedence over equity, so if the stock price is riding high, the CEO can always be convinced to print more shares to service the debt. The Oracle stock price has not been doing that hot lately, however. As the article said, it is 50% down. Still, ORCL has a $430 billion market cap against $130 billion of debt. That seems manageable. But stock prices can move very fast. Ironically, the war in Iran, which David's new news sources keep supporting, is causing ORCL stock to go down, which can bring down David's new media empire.
David just purchased Warner Bros for about $110 billion. A lot of that ($40 billion) is also guaranteed by daddy's ORCL shares. Warner Bros owns Comedy Central, which sadly has been one of America's most dependable news sources.
The house of cards is still standing, but it's getting awfully wobbly.
The observatory is named in honour of Vera Rubin. That makes sense. The commercial company deciding to name their new generation of chips after her does not (at least to me).
> he can't take any of it with him

Reminder to lay up your treasures in heaven.

> once you're into the billions you're "too big to fail" and you have unlocked the infinite money glitch

That's not how any of this works. "Too big to fail" can be applied to companies, but I don't know of any examples of it being applied to people.
So far the only individual that has been meaningfully punished has been Ghislaine Maxwell.
This seems like a prime example of being too big to fail. The FBI puts on kid gloves whenever a rich person is accused of wrongdoing.
>So far the only individual that has been meaningfully punished has been Ghislaine Maxwell.
That factoid is meaningless without the rate of prosecutions/convictions for people the FBI "had tabs on".
With J6, in the matter of 2 or so years the FBI has secured over 1000 convictions.
When it wants to, the FBI can move very quickly.
> Parking it anywhere in common investments would leave you and your descendants set forever.

Billionaires aren't on the same level of wealth as hectobillionaires, just like decamillionaires aren't on the same level of wealth as billionaires.
> Like, what in the good god damn are we using all this energy for?

Bad AI porn, terrible AI music, AI scams and completely devastating the labor market.
And based on the recent Anthropic/Pentagon rift... I guess also creating autonomous kill-bots and doing mass surveillance.
Just a bunch of super cool stuff.
Don't forget the possibility that it's AI slop.

> it is far more likely that "the bear is sticky with honey" will happen at Oracle than at OpenAI

That sounds about right.
> People at OpenAI are lying to CNBC?

Remove "to CNBC" and that's a yes.

> CNBC are fabricating stories while drunk?

Maybe not drunk but likely high.
I could see Nvidia adding terms of sale requiring disposal rather than resale.
> Performance/watt is increasing so much gen-to-gen that it no longer makes sense to run older hardware.

Not my words, Jensen's.
> Purchase the GPU along with the mobo that slots it.

You'd be better off with the SXM-PCIe adapters.
> Would you happen to recall the source?

https://www.youtube.com/watch?v=1H3xQaf7BFI&t=1577s
This site apparently sources ex-enterprise(-only) systems and puts them into desktop style enclosures.
Would be interested to know if others have takes on this.
Why would they sell them cheaper on the secondhand market?
It would hurt the sales of new ones. This is the way even with food, let alone technology. Don't expect to buy a cheap secondhand GPU any century soon.
> current GPUs are already money-making machines in 2026

If you're OpenAI spending $100M on a training run, they're not.
But if you're Oracle renting out GPUs to little guys doing inference, they are.
Stargate is backed by the US government, hence why they're comfortable putting that under debt financing.
https://www.msn.com/en-us/money/general/as-oracle-plans-thou...
> The commercial company deciding to name their new generation of chips after her does not (at least to me).

Relevant precedent: the Power Macintosh 7100, internally codenamed "Carl Sagan": https://en.wikipedia.org/wiki/Power_Macintosh_7100

Sagan sued. Engineers at Apple changed the codename to BHA: "Butt-Head Astronomer".
He sued again. The final codename was LAW: "Lawyers are Wimps".