I wonder if in 2025 a company would even be allowed to get started before being curb-stomped by Intel's IP lawyers. After all, AMD started out making clones, something China gets accused of a lot.
Intel's customers (IBM in particular) required a second-source supplier, so that's what AMD provided for Intel in the beginning. Later on, AMD created the 64-bit extension of x86, which Intel then adopted, so now both share the same ISA.
You can do it with HW-accelerated emulation, like Apple did with the M1 CPUs. They implemented x86-compatible behavior in HW, so the emulation has very good performance.
Another approach was Transmeta's, where the target ISA was implemented in microcode and therefore handled in "software".
They said they implemented x86 memory-handling instructions, which substantially sped up the emulation. I don't remember exactly which ones now, but they explained it all in a WWDC video about the emulation.
Not instructions per se. Rosetta is a software-based binary translator, and one of the most expensive parts of translating x86 to ARM is making sure all loads and stores respect x86's strong ordering. To alleviate this pressure, Apple implemented the Total Store Ordering (TSO) feature in hardware, which makes all ARM load and store instructions (transparently) follow the same memory-ordering rules as x86.
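To make that concrete, here's a minimal sketch (not Apple's actual code; the variable and function names are made up) of the choice a translator faces. Without hardware TSO, each translated x86 access needs acquire/release semantics, which on AArch64 compile to the barriered ldar/stlr instructions; with a TSO mode in hardware, plain relaxed accesses already carry x86 ordering:

    // Minimal sketch of translating x86 loads/stores to ARM.
    #include <atomic>
    #include <cstdint>

    std::atomic<std::uint64_t> mem_word{0};

    // x86 "mov [mem], rax" without hardware TSO: the translator must emit a
    // release store, which on AArch64 becomes stlr (a barriered store).
    void translated_store(std::uint64_t v) {
        mem_word.store(v, std::memory_order_release);
    }

    // x86 "mov rax, [mem]" without hardware TSO: an acquire load, i.e. ldar.
    std::uint64_t translated_load() {
        return mem_word.load(std::memory_order_acquire);
    }

    // With a hardware TSO mode enabled, ordinary (relaxed) loads and stores
    // already obey x86 ordering, so plain ldr/str suffice.
    void translated_store_tso(std::uint64_t v) {
        mem_word.store(v, std::memory_order_relaxed);
    }
    std::uint64_t translated_load_tso() {
        return mem_word.load(std::memory_order_relaxed);
    }

The acquire/release mapping is the standard conservative x86-on-ARM recipe; the win from TSO hardware is that every one of those accesses goes back to being an ordinary load or store.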
If Intel decides to focus on foundry, I just wish AMD and Intel could work together and make a cleaned-up subset of the x86 ISA open source, or at least available for licensing. I don't want it to end up like MIPS or the POWER ISA, where everything was too little, too late.
>I just wish AMD and Intel could work together and make a cleaned-up subset of the x86 ISA
AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing
Standardizing x86 features
Key technical milestones include:
FRED (Flexible Return and Event Delivery): Finalized as a standard feature, FRED introduces a modernized interrupt model designed to reduce latency and improve system software reliability.
AVX10: Established as the next-generation vector and general-purpose instruction set extension, AVX10 boosts throughput while ensuring portability across client, workstation, and server CPUs.
ChkTag (x86 Memory Tagging): To combat longstanding memory-safety vulnerabilities such as buffer overflows and use-after-free errors, the EAG introduced ChkTag, a unified memory-tagging specification. ChkTag adds hardware instructions to detect violations, helping secure applications, operating systems, hypervisors, and firmware. With compiler and tooling support, developers gain fine-grained control without compromising performance. Notably, ChkTag-enabled software remains compatible with processors lacking hardware support, simplifying deployment and complementing existing security features like shadow stack and confidential computing. The full ChkTag specification is expected later this year; for further feature details, see the ChkTag Blog. (The sketch after this list illustrates the general idea behind memory tagging.)
ACE (Advanced Matrix Extensions for Matrix Multiplication): Accepted and implemented across the stack, ACE standardizes matrix multiplication capabilities, enabling seamless developer experiences across devices ranging from laptops to data center servers.
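Since the ChkTag spec itself isn't published yet, here is a software toy model of the general memory-tagging mechanism it describes (in the spirit of ARM MTE; the allocator, the tag-in-top-byte layout, and the checked_load helper below are all invented for illustration and are not ChkTag's API). In real hardware the tag comparison happens as a side effect of each load/store, and granule tagging also catches linear overflows, which this toy omits:

    // Toy software model of pointer/memory tagging (illustrative only).
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <map>

    static std::map<std::uintptr_t, std::uint8_t> g_tags; // tag per live block
    static std::uint8_t g_next_tag = 1;

    // Allocate and "color" a block; return a pointer carrying the tag in its
    // (normally unused) top byte.
    void* tagged_alloc(std::size_t n) {
        void* p = std::calloc(1, n);
        std::uint8_t tag = g_next_tag++;
        g_tags[reinterpret_cast<std::uintptr_t>(p)] = tag;
        return reinterpret_cast<void*>(reinterpret_cast<std::uintptr_t>(p) |
                                       (static_cast<std::uintptr_t>(tag) << 56));
    }

    // Retag on free, so any stale pointer now mismatches.
    void tagged_free(void* tp) {
        std::uintptr_t p = reinterpret_cast<std::uintptr_t>(tp) & ~(0xFFull << 56);
        g_tags[p] = 0;
        std::free(reinterpret_cast<void*>(p));
    }

    // Hardware would do this check as part of the load instruction itself.
    std::uint8_t checked_load(void* tp) {
        std::uintptr_t raw = reinterpret_cast<std::uintptr_t>(tp);
        std::uint8_t ptr_tag = static_cast<std::uint8_t>(raw >> 56);
        std::uintptr_t p = raw & ~(0xFFull << 56);
        if (g_tags[p] != ptr_tag) {
            std::fprintf(stderr, "tag mismatch: stale or corrupted pointer\n");
            std::abort();
        }
        return *reinterpret_cast<std::uint8_t*>(p);
    }

    int main() {
        void* p = tagged_alloc(16);
        checked_load(p);  // OK: pointer tag matches memory tag
        tagged_free(p);
        checked_load(p);  // aborts: use-after-free caught by tag mismatch
    }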
A subset of an ISA will be incompatible with the full ISA, and will therefore effectively be a new ISA. No existing software will run on it, so this won't really help anyone.
And x86 isn't that nice to begin with; if you're going to break compatibility anyway, you might as well start from scratch and create a new, homogeneous, well-designed, modern ISA.
90s x86, from an ISA point of view, is already free to use, no? The original patents must have expired, and there's no copyright protection for ISAs. What keeps the symbiotic cross-licensed duopoly going is mutating the ISA all the time so they can mix in more recently patented stuff.
AFAIK, most of even the x86-64 patents have largely expired, or will within the next 6 years. That said, efforts toward a more open platform are probably more likely to be centered on RISC-V or another ARM alternative than on x86... though I could see a standardization of x86-compatibility shortcuts for use in emulation on ARM/RISC-V processors. Transmeta was an idea too far ahead of its time.
Remembering the pain of the Mac ARM transition wrt Docker and Node/Python/Lambda cross-builds targeting servers, there's a lot to be said for binary compatibility.
90% of those problems affect people like you and me, developers and power users, not "regular" users, who are mostly on mobile devices with the occasional laptop/desktop application.
I suspect we'll see somebody -- a phone manufacturer or similar -- make a major transition from ARM to RISC-V in the next 10 years that we won't even notice.
>AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing
> Oct 13, 2025
> [... the same press release text as quoted above, listing FRED, AVX10, ChkTag, and ACE ...]
Copying and pasting a press release does not make for a good comment. Especially because you don't seem to have understood what you pasted in, or the context of this discussion. What you're demonstrating is several more new features added to the pile. Intel's retracted X86S proposal was actually about removing legacy features, creating a cleaner subset for the modern era.
In which space? Desktop and high performance servers? Why would it?
The mature body of software that would have to be ported from TSO to a weak memory model is a soft moat. So is the maturity of AVX/SIMD relative to NEON/SVE. x86-64 is a duopoly and a stable target versus the fragmented ARM landscape. ARM's whole spiel is performance per watt, scale-out rather than scale-up, and in that sense the market has already moved. But with ARM, once you push for sustained high throughput and the 5 GHz+ envelope, all the advantages so far have gone to x86.
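As a small illustration of that SIMD porting moat (the add4 function is invented for this example): the same four-lane float add written once against SSE and once against NEON. Real codebases contain thousands of such call sites, plus shuffles and mask tricks with no one-to-one NEON equivalent, which is what makes the port expensive:

    // Same 4-lane float add on two SIMD ISAs; a scalar fallback keeps the
    // sketch buildable anywhere.
    #include <cstdio>

    #if defined(__SSE__)
    #include <xmmintrin.h>
    void add4(const float* a, const float* b, float* out) {
        _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
    }
    #elif defined(__ARM_NEON)
    #include <arm_neon.h>
    void add4(const float* a, const float* b, float* out) {
        vst1q_f32(out, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));
    }
    #else
    void add4(const float* a, const float* b, float* out) {
        for (int i = 0; i < 4; ++i) out[i] = a[i] + b[i];
    }
    #endif

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
        add4(a, b, out);
        std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    }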
What might be interesting is if, say, AMD added an ARM frontend decoder to Zen. In one of Jim Keller's interviews that was shared here, he said it wouldn't be that big of a deal to turn such a CPU into an ARM-decoding one. That'd be interesting to see.
> In which space? Desktop and high performance servers? Why would it?
Laptops. Apple already owned the high-margin laptop market before they switched to ARM. With phones, tablets, laptops above $1k, and all the other doodads running ARM, it's not that x86 will simply disappear. Of course not. But the investments simply aren't comparable anymore, with ARM an order of magnitude more common. x86 is very slowly losing steam, with its chips generally behind in performance per watt. And it's not because of any specific problem or mistake; it just no longer makes economic sense.
Well, given some of the political/legal gamesmanship over the company itself in the past few years, it could very well self-destruct in favor of RISC-V or something else entirely in the next decade. Who knows.
Look how long SPARC, z/Architecture, PowerPC, etc. have kept going even after they lost their strong market positions (a development that is nowhere in sight for x86), and they had a tiny fraction of the inertia of the x86 software base.
Obliterating x86 in that time would take quite a lot more than ARM's current trajectory. It's had 40 years to try by now, and the technical-advantage window (power efficiency) has closed.
20 years is half of x86's lifetime and less than half of the lifetime of home computing as we know it.
So this is kind of a useless question, because in such a timespan anything can happen. 20 years ago, computers had somewhere around 512 MB of RAM, a single core, and a CRT on the desk.
I'm still a heavy advocate for requiring second/dual-sourcing in government contracts... literally for anything that can be considered essential infrastructure, communications technology, or medicine. A role of government in a capitalist society is to ensure competition and domestic availability/production as much as possible.
While my PoV is US-centered, I feel that other nations should largely optimize for the same things as much as possible. Many of today's issues stem from too much centralization of commercial/corporatist power as opposed to the fostering of competition. This shouldn't happen in the absence of a baseline of reasonable regulation; it's just optimizing toward what is best for the most people.
Suppose we got nuked, or some calamity interrupted all the fancy X-nanometer processes. What would we actually miss out on? I don't know what the latest process nodes stateside are, but let's say we could still produce 2005-era CPUs here. What would we actually miss? I don't think it would affect anything important. You could do everything we do today, just slower. I think the real advancement is in software, programming languages, and libraries.
I'm talking about way more than just CPUs... And to your question: we'd pretty much miss out on modern mobile phones entirely. 90nm -> 18A/1.8nm is a LOT of reduction in size and energy, not counting the evolution of battery and display technology over the same period.
Now apply that to weapons systems in a conflict against an enemy that DOES have the modern production you no longer have... it's a recipe for disaster/enslavement/death.
China, though largely hamstrung, is already well ahead of your hypothetical 2005 tech breakpoint.
Beyond all this, it's not even a matter of just slower; it's a matter of what's even practical... You couldn't viably build a lot of the websites that exist today on 2005-era technology. The performance and memory headroom just weren't there yet. Not that a lot of things weren't possible... I remember Windows 2000 pretty fondly, and you could do a LOT if you had 4-8x the RAM most people were buying.
Seems like an interesting story. Ashawna was about 25 at the time and, per Wikipedia, had already worked on military projects (the Sprint Missile System) and was at Xerox.
> The processor was reverse-engineered by Ashawna Hailey, Kim Hailey and Jay Kumar. The Haileys photographed a pre-production sample Intel 8080 on their last day in Xerox, and developed a schematic and logic diagrams from the ~400 images.
See tangentially related topic from yesterday: https://news.ycombinator.com/item?id=46362927
> And x86 isn't that nice to begin with; if you're going to break compatibility anyway, you might as well start from scratch and create a new, homogeneous, well-designed, modern ISA.
So it would be faster and more efficient when sticking to the new subset, and Nx slower when falling back to the emulation path.
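A toy sketch of why that emulation path is so much slower (Linux/x86-64 with g++; everything here is illustrative, and a real design would more likely use binary translation than per-instruction traps): each unsupported opcode costs a trap into the kernel plus a user-space handler, thousands of cycles against roughly one cycle for a natively supported instruction:

    // Trap-and-emulate in miniature: catch SIGILL, "emulate" the missing
    // instruction in software, skip it, and resume.
    #include <csignal>
    #include <cstdio>
    #include <ucontext.h>

    static volatile long emulated_result = 0;

    static void on_sigill(int, siginfo_t*, void* ctx) {
        ucontext_t* uc = static_cast<ucontext_t*>(ctx);
        emulated_result = 42;                  // the software "emulation"
        uc->uc_mcontext.gregs[REG_RIP] += 2;   // skip the 2-byte ud2 opcode
    }

    int main() {
        struct sigaction sa{};
        sa.sa_sigaction = on_sigill;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGILL, &sa, nullptr);

        asm volatile("ud2");  // stands in for an instruction the subset dropped
        std::printf("emulated result: %ld\n", emulated_result);
    }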
> x86 is very slowly losing steam, with its chips generally behind in performance per watt.
Lunar Lake shows that x86 is capable of reaching that energy efficiency, and Panther Lake, due for release in around 30 days, is expected to improve significantly on it. So... why switch to ARM if you get similar performance per watt?
> You couldn't viably build a lot of the websites that exist today on 2005-era technology.
If society as a whole reverted to 2005, we would be fine.
Definitely read that wrong the first time I skimmed the article:
> The processor was reverse-engineered by Ashawna Hailey, Kim Hailey and Jay Kumar. The Haileys photographed a pre-production sample Intel 8080 on their last day in Xerox, and developed a schematic and logic diagrams from the ~400 images.