Intel specifically exited the general-purpose ARM market back about 20 years ago when it sold its XScale division to Marvell. I believe it kept making the ARM chips for use in network controllers and other specific purpose chips.
Intel failed to anticipate the smartphone revolution despite RIM being a customer of XScale. To be fair, they only entered because they got StrongARM from a lawsuit settlement with DEC in 1997, and they sold to refocus on more strategic segments, which turned out to actually be a lot less interesting. I don’t think Intel can really be seen as a model of good strategic thinking.
But all of this is a decade before what we are discussing here. I didn’t even remember XScale existed at Intel while writing my first comment.
When the Microelectronics Group was transferred to Intel, that included the StrongARM Group. A month later, everybody in the StrongARM Group had pretty much quit.
They aren't inherently power efficient because of technical reasons, but because of design culture reasons.
Traditionally, x86 has been built powerful and power-hungry and then scaled down, whereas it's the opposite for ARM.
For whatever reason, this also makes it possible to get much bigger YoY performance gains in ARM. The Apple M4 is a mature design[0] and yet a year later the M5 is CPU +15%, GPU +30%, memory bandwidth +28%.
The Snapdragon Elite X series is showing a similar trajectory.
So Jim Keller ended up being wrong that ISA doesn't matter. It's just that it's the people in the ISA's ecosystem that matter, not the silicon.
[0] its design traces all the way back to the A12 from 2018, and in some fundamental ways even to the A10 from 2016.
People are absolutely part of an ISA's ecosystem. The ISA is the interface between code and CPU, but the code is generally emitted by compilers, and executed in the context of runtimes and operating systems, all designed by people and ultimately dependent on their knowledge of and engagement with the ISA. And for hot code in high-performance applications, people will still be writing assembler directly to the ISA.
Do you have any actual evidence for that? Intel does care about power efficiency - they've been making mobile CPUs for decades. And I don't think they are lacking intelligent chip designers.
I would need some strong evidence to make me think it isn't the ISA that makes the difference.
Basically, x86 uses op caches and micro-ops, which reduces instruction decoder use, the decoder itself doesn't use significant power, and ARM also uses op caches and micro-ops to improve performance. So there is little effective difference. Micro-ops and branch prediction are where the big wins are and both ISAs use them extensively.
If the hardware is equal and the designers are equally skilled, yet one ISA consistently pulls ahead, that leads to the likely conclusion that the way the chips get designed must be different for teams using the winning ISA.
For what it's worth, the same is happening in GPU land. Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W (!).
> Isn't Lunar Lake first mobile chip with focus on energy eff?
Not by a long shot.
Over a decade ago, one of my college professors was an ex-Intel engineer who worked on Intel's mobile chips. He was even involved in an Intel ARM chip that ultimately never launched (at least I think it never launched. It's been over a decade :D).
The old Conroe processors were based on Intel's mobile chips (Yonah). NetBurst explicitly didn't focus on power efficiency, and that drove Intel into a corner.
Power efficiency is core to CPU design and always has been. It's easy to create a chip that consumes 300W idle. The question is really how far that efficiency is driven. And that may be your point. Lunar Lake certainly looks like Intel deciding to really put a lot of resources into improving power efficiency. But it's not the first time they did that. The Intel Atom is another decades-long series which was specifically created with power in mind (the N150 is the current iteration of it).
Actually, if you had made an opposite example, it might have gone against your point. ;) C# gives you a lot more control over memory and other low-level aspects, after all.
> It is like saying that Java syntax is faster than C# syntax.
Java and C# are very similar so that analogy might make sense if you were comparing e.g. RISC-V and MIPS. But ARM and x86 are very different, so it's more like saying that Go is faster than Javascript. Which... surprise surprise it is (usually)! That's despite the investment into Javascript implementation dwarfing the investment into Go.
According to an AMD engineer I asked at the time, when they evaluated Ryzen/K12, it was "maybe" a 15% advantage for ARM depending on scenarios.
The efficiency gain came solely from the frontend, which is a lot heavier on x86 and stays active longer because decoding is way more complex. The execution units were the same (at least mostly, I think, might be misremembering) so once you are past the frontend there's barely any difference in power efficiency.
Actually, power efficiency was a side effect of having a straightforward design in the first ARM processor.
The BBC needed a cheap (but powerful) processor for the Acorn computer, and a RISC chip was the answer.
When ARM started testing their processor, they found out it drew very little power...
Acorn won the bid to make the original BBC home computer, with a 6502-based design.
Acorn later designed their own 32-bit chip, the ARM, to try to leapfrog their competitors who were moving to the 68000 or 386, and later spun off ARM as a separate company.
For a CPU vendor, ISA is very relevant: most buyers will start their buying decision with ISA choice already fixed, and a vendor who can't offer a CPU with that ISA simply isn't in the race.
It does not matter whether you are a believer in horses for courses when it comes to ISA, or a believer in "frontend ISA does not matter because it's all translated away anyways": when buyers don't want what you have, you are out. And buyers are more like a stampeding herd than like rational actors when it comes to ISA choice. I'd see offering CPUs for multiple ISAs as an important hedge against the herd changing direction.
What AMD wants to achieve with their CPUs: sell them, preferably at a nice profit. If ISA is truly not relevant for performance and efficiency characteristics, all the more reason for them to not bet on any particular ISA but spread out, to already be there wherever the buying herd goes.
Meh, performance-per-watt is not what everybody wants. I only want it in that it affords more raw performance by allowing more watts to be pumped through it without thermal overload. But if that can't actually happen then I'm still more interested in x86. Sure the lights dim when I turn my PC on, but I want the performance.
I run desktop Linux via postmarketOS on a Lenovo Duet 5 (Snapdragon 7c). It isn't the most powerful device and the webcam doesn't work, but other than that it works well and battery life is excellent.
IIRC, it's because the ARM designs tend to use camera modules that come from smartphone-land.
Cameras used on x86-64 usually just work using that usb webcam standard driver (what is that called again? uvcvideo?).
But these smartphone-land cameras don't adhere to that standard, they probably don't connect using USB. They are designed to be used with the SoC vendor's downstream fork of Android or whatever, using proprietary blobs.
A similar thing is happening in Intel land recently, where the cameras use ipu6 / ipu7 chips rather than dumping simple frames over USB. But this way we get a higher resolution / quality at least.
It’s usually MIPI or some variant. There’s probably a way to enable the video stream but you also have to talk to the control module itself which is on a different bus.
AMD makes laptop CPUs with a good performance-to-power ratio, but they are designed for higher power consumption, typically 28 W, or at least 15 W.
AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumptions of 6 W or 10 W, and AMD has never had any Zen CPU for this power range.
If the rumors about this "Sound Wave" are true, then AMD will finally begin to compete again in this range of TDP, a market that they abandoned many years ago (since the AMD Jaguar and Puma CPUs), because all their resources were focused on designing Zen CPUs for higher TDPs.
For cheap and low-power CPUs, the expensive x86-64 instruction decoder may matter, unlike for bigger CPUs, so choosing the Aarch64 ISA may be the right decision.
Zen compact cores provide the best energy efficiency for laptops and servers, especially for computation-intensive tasks, but they are not appropriate for cheap low-power devices whose computational throughput is less important than other features. Zen compact cores are big in comparison with ARM Cortex-X4, Intel Darkmont or Qualcomm cores and their higher performance is not important for cheap low-power devices.
> AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumptions of 6 W or of 10 W and AMD never had any Zen CPU for this power range
A cursory search shows that the AMD APU used in the Valve Steam Deck draws 3-15W. Limiting the TDP to 6W on a Steam Deck is fine for Linux in desktop mode.
That is a custom APU made by AMD for a certain customer.
It is not a device that AMD sells on the open market, so it does not compete with the ubiquitous Intel N-series CPUs or with the Arm-based CPUs from various vendors.
Like I have said, since Jaguar and Puma, which are older than the first Zen, AMD has never sold on the open market any CPU/APU designed for a TDP of 10 W or less.
Some AMD APUs, like the Ryzen Z1, are designed for a TDP of 15 W, and their specification says the TDP is configurable down to 9 W. But when such CPUs are configured for a lower TDP than they are optimized for, they become inefficient: they have a bigger die area, i.e. a higher cost, and a lower energy efficiency in comparison with CPUs that have been specifically designed for that lower power.
“IT Home News on October 13, @Olrak29_ found that the AMD processor code-named "Sound Wave" has appeared in the customs data list, confirming the company's processor development plan beyond the x86 architecture”
I think that means they are planning to export parts.
I think there still is some speculation involved as to what those parts are, and they might export them only for their own use, but is that likely?
I think the chips are being imported from Taiwan to the US. At this point they are prototypes being tested. AMD wouldn't make a chip this complex for their own use; these were likely ordered by Microsoft or Valve.
It is interesting for AMD because having an on-par ARM chip means they can keep selling chips when the rest of the market switches to ARM. This is largely driven by Apple and by the cloud providers wanting more efficient, higher-density chips.
Apple isn’t going to switch back to AMD64 any time soon. Cloud providers will switch faster if X64 chips become really competitive again.
I am not sure if cloud providers want ARM - the most valuable resource is rack space, so you want to use the most powerful CPU, not the one using less energy.
The limit is power capacity and quite often thermal. Newer DCs might be designed with larger thermal envelopes; however, rack space is nearly meaningless once you exhaust the thermal capacity of the rack/aisle.
Performance within thermal envelope is a very important consideration in datacenters. If a new server offers double performance at double power it is a viable upgrade path only for DCs that have that power reserve in the first place.
Well, Amazon does offer Graviton 4 (quite fast and useful stuff) alongside their Epyc machines, so there is some utility to them. A 9654 is much faster than a Graviton 4.
Cooling takes up rack space, too. There also are workloads that aren’t CPU constrained, but GPU or I/O constrained. On such systems, it’s better to spend your heat budget on other things than CPUs.
Oh, I hope the price is low enough that this could be a real media box chip competitor for streaming devices. The Nvidia Shield's Tegra chip from 2015 is still one of the best in this space, and with Nvidia making all the AI money it is not interested in making a new device. The Apple TV, the only real alternative, does not support audio passthrough, so it is not as open as Android or Linux media boxes.
I think Amlogic, Mediatek and Qualcomm all have SoCs which are significantly better than the Tegra for this use case. It's just that the market barely exists as most consumers use their TV directly, so no one really wants to make a media box anymore.
Yeah, whether AMD is willing to go after the low end of the market has long been TBD. Intel's N100/N97/N150 is everywhere and very affordable, seemingly, based off some of the system prices. AMD doesn't have anything remotely like it.
The chip here is an interesting mix. Fast DDR5-9600! But fewer GPU CUs than most APUs: 4, down from 6. But if it comes with the other fixings, like a good video engine and AMD's very good drivers, it could be a real win.
Also a little hopeful that AMD rebadging its Zen 1 and Zen 2 chips again might possibly open up some decent low-end space, but Sound Wave with more modern solutions would be a very nice power-efficient low end to have.
If AMD releases an APU that is just using off-the-shelf ARM Cortex cores, it will be completely uninteresting and won't matter. Lots of companies have done that. I'd love to see them dedicate an amount of resources to an ARM processor that makes it competitive with their Ryzen x86 line.
While I agree with the general idea, I think the sales pitch is less "a bunch of ARM Cortex cores" and more "a bunch of RDNA2 cores stuck to the same Cortex cores you're used to".
For things like the automotive industry or industrial applications, it could make some sense. Most are high-margin industries ready to commit to specific architectures.
I don't see why Sound Wave would have any advantage, even efficiency, over a similar Zen 5/6 design. Microsoft must really want ARM if they're having this chip made.
The core count is relatively low though. 2P + 4E, whereas Snapdragon-X are 8 or 10 performance cores, indicating that this could be for a low-end tablet ... or game console?
They made countless attempts to use ARM but all failed. Consumers didn't care because they couldn't run their software. Microsoft won't solve the problem until they provide a way to run all relevant software on ARM.
Microsoft already designed a modified ARM ABI [1] compatible with emulated x86-64 just for this transition. But it's a Windows 11 feature. I wonder if the refusal of many of us to switch from Windows 10 is part of the reason why they're still idling on an ARM strategy.
Part of the issue was incomplete amd64 emulation on Windows, which is why several MS products continued to ship 32-bit: while they might recompile their software for ARM, business users had binary-only extensions that they expected to continue using.
A year or two ago I used a Windows 11 laptop with an ARM CPU, and at least for me everything just worked. The drivers weren't as good, but all my x86-64 software ran just fine.
It's pretty decent. Decent enough in fact that I can run a Windows 11 ARM install on VMware Fusion on my M4 Pro MacBook, and it will happily run Windows ARM and x86 binaries (via built-in MS x86 emulation) decently fast and without complaint (we're talking apps; gaming I haven't tried).
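As a side note on the emulation point: a program can ask Windows whether it is actually running on an ARM64 host. A minimal C sketch (my own illustration, assuming Windows 10 1709 or later and a binary built for x86-64) using the IsWow64Process2 API:

```c
/* Sketch: detect whether an x86-64 build is running under Windows'
   x86-on-ARM64 emulation. Assumes IsWow64Process2 is available
   (Windows 10 1709+). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    USHORT processMachine = IMAGE_FILE_MACHINE_UNKNOWN;
    USHORT nativeMachine  = IMAGE_FILE_MACHINE_UNKNOWN;

    if (IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
        if (nativeMachine == IMAGE_FILE_MACHINE_ARM64)
            printf("Native machine is ARM64; this x86-64 binary is being emulated.\n");
        else
            printf("Running natively (native machine 0x%04x).\n", (unsigned)nativeMachine);
    } else {
        printf("IsWow64Process2 failed: %lu\n", GetLastError());
    }
    return 0;
}
```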
Well, I'm eager to use it. For my home server I use an old power-hungry Epyc 7B13. It's overkill but it can run a lot of things (my blog, other software I use, my family's various pre-configured MCPs we use in Custom GPTs, rudimentary bioinformatics). The truth though is that I hate having to cross-compile from my M1 Mac to the x86_64 server. I would much rather just do an ARM to ARM platform cross-compile (way easier to do and much faster on the Orbstack container platform).
So I went out looking for an ARM-based server of equivalent strength to a Mac Mini, and there's really not that much out there. There's the Qualcomm Snapdragon X Elite, which is really only in one actual buyable thing (the Lenovo IdeaCentre) and some vaporware Geekom or something product. But this thing doesn't have very good Linux support (it's built for ARM Windows apparently) and it's much costlier than some Apple Silicon running Asahi Linux.
So I'm eventually going to end up with some M1 Ultra Studio or an M4 Mini running Asahi Linux, which seems like such a complete inversion of the days when people would make Hackintoshes.
I think that is the point being made. Replace the ARM decoder with a RISC-V one and make a RISC-V chip with SoundWave performance using the RISC-V ISA.
The fact that you have to argue with ARM about what you are allowed to do is the main reason not to use ARM. RISC-V is not about cost; it is about control. ARM suing Qualcomm to stop Elite X should be everything the industry needs to know to choose RISC-V wherever possible.
If you are going to launch a chip for yourself (like Apple did with Apple Silicon) or Amazon did with Graviton, I would choose RISC-V over ARM if starting today. That is what Tenstorrent did for their platform. I can see NVIDIA releasing their own RISC-V chip.
In the case of AMD, what are their customers asking for? Probably not RISC-V at this point (sadly). So ARM makes a lot of sense for them.
To get back to the original suggestion, replacing the ARM decoder in SoundWave with a RISC-V one, I do not know how feasible that is in practice. The entire chip is designed around the ISA, especially registers and memory model. It is not like compiling Kotlin instead of Java. Or rather, it could be like that if both ARM and RISC-V instructions were designed to compile down to the same micro-architecture (but they are not).
I want a hybrid APU, perhaps an x86 host with ARM co-processors that can be used to run arm64 code natively/do some clever virtualization. Or maybe the other way around, with ARM hosts and x86 co-processors. Or they can do some weird HMP stuff instead of co-processors.
Why have both to run native arm64 code? Nearly anything you'd want is cross compiled/compilable (save some macOS stuff but that's more than just CPU architecture).
My understanding is that ARM chips can be more efficient? Hence them being used in phones etc.
I guess it would let you run android stuff "natively"?
Or perhaps you imagine running Blender in x64 mode and discord in the low wattage ARM chip?
Or put differently, why bake the CPU instruction sets into the chips? What Apple has shown is that emulating x86 can actually rival or be faster than a natively running x86 chip. There are currently two major ones (ARM, x86) and an up-and-coming minor one (e.g. RISC-V), and lots of legacy ones (SPARC, MIPS, PowerPC, etc.). All these can be emulated. Native compilation is an optimization that can happen at build time (traditional compilers), at distribution time (Android stores do this), just before the first run (Rosetta), or on the fly (QEMU).
Chip manufacturers need to focus on making power-efficient, high-performance workhorses. Apple figured this out first and got frustrated enough with Intel, who was more preoccupied with vendor lock-in than with doing the one thing they were supposed to do: developing best-in-class chips. The jump from x86 to M1 completely destroyed Intel’s reputation on that front. Turns out all those incremental changes over the years were them just moving deck chairs around. AMD was just tagging along and did not offer much more than them. They too got sidelined by Apple’s move. They never were much better in terms of efficiency and speed. So them now maybe getting back into ARM chips is a sign that times are changing and x86 is becoming a legacy architecture.
This shouldn’t matter. Both Apple and Microsoft have emulation capability. Apple is of course retiring theirs, but that’s more of a prioritization/locking strategy than it is for technical reasons. This is the third time they’ve pulled off emulation as a strategy to go to a new architecture: Motorola 68000 to PowerPC to x86 to ARM. Emulation has worked great for decades. It has broken the grip X86 has had on the market for four decades.
> Or put differently, why bake the CPU instruction sets into the chips?
There is more to a CPU instruction set than just instruction encodings. For instance, x86 has flags which are updated (sometimes partially) by a lot of instructions, and a stronger memory model (TSO), while RISC-V has its own peculiar ideas on the result of an integer division by zero.
> What Apple has shown is that emulating x86 can actually rival or be faster than a natively running x86 chip.
AFAIK, Apple has special support in its processors for emulating x86. It has a hardware mode which emulates the x86 memory model, and IIRC also has something in hardware to help emulate the x86 flags register.
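To make the division-by-zero remark above concrete, here is a small C sketch (an illustration of the ISA rule, not any emulator's actual code): on RISC-V, integer division by zero does not trap; DIV returns all ones and REM returns the dividend, whereas x86 raises a #DE exception for the same operation, so a translator has to model the difference explicitly.

```c
/* Illustration of RISC-V integer division semantics (no trap on /0),
   as an emulator or binary translator would have to model them. */
#include <stdint.h>
#include <stdio.h>

/* RISC-V DIV: divide by zero yields -1 (all bits set); the overflow
   case INT32_MIN / -1 yields INT32_MIN. */
static int32_t rv_div(int32_t a, int32_t b) {
    if (b == 0) return -1;
    if (a == INT32_MIN && b == -1) return INT32_MIN;
    return a / b;
}

/* RISC-V REM: the remainder of a division by zero is the dividend itself. */
static int32_t rv_rem(int32_t a, int32_t b) {
    if (b == 0) return a;
    if (a == INT32_MIN && b == -1) return 0;
    return a % b;
}

int main(void) {
    printf("div(7, 0) = %d, rem(7, 0) = %d\n", rv_div(7, 0), rv_rem(7, 0));
    /* On x86, the same operation raises #DE (SIGFPE on Unix-like hosts),
       so a translator must branch around it or rely on a fault handler. */
    return 0;
}
```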
> It has a hardware mode which emulates the x86 memory model, and IIRC also has something in hardware to help emulate the x86 flags register.
AFAIK, the memory model is the main missing piece. And it seems like it's certainly something that could be implemented separately. IMO, it's something the ARM group could (and probably should) easily add into the platform.
The flags register is a minor thing that's pretty easy to pull off. Most of the x86 instructions that mess with the flags have direct ARM instructions. The ones that don't can easily be emulated by burning a register and maintaining the flags in said register when needs be.
I think the other important thing to note is that while x86 has a wealth of exotic functions that do wild things, a lot of those instructions aren't generated by any modern compiler. Not saying you can't find a stray `ENTER`/`LEAVE` instruction in old software, it's just not likely. That significantly cuts down on weird instructions doing wild things harming performance.
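A rough sketch of the "burn a register and maintain the flags" idea, in C (names and layout hypothetical; real translators are far more optimized): after each translated arithmetic op you recompute the x86 flags an ADD would have produced and keep them in an ordinary variable, standing in for a spare host register, instead of relying on a hardware FLAGS register.

```c
/* Sketch: recomputing x86-style flags in software after a 32-bit ADD,
   the way a binary translator might keep them in a spare host register. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t zf; /* zero     */
    uint8_t sf; /* sign     */
    uint8_t cf; /* carry    */
    uint8_t of; /* overflow */
} x86_flags;

static uint32_t emulate_add32(uint32_t a, uint32_t b, x86_flags *f) {
    uint32_t r = a + b;
    f->zf = (r == 0);
    f->sf = (r >> 31) & 1;
    f->cf = (r < a);                          /* unsigned carry out */
    f->of = ((~(a ^ b) & (a ^ r)) >> 31) & 1; /* signed overflow    */
    return r;
}

int main(void) {
    x86_flags f;
    uint32_t r = emulate_add32(0xFFFFFFFFu, 1u, &f);
    printf("r=%u zf=%u sf=%u cf=%u of=%u\n",
           r, (unsigned)f.zf, (unsigned)f.sf, (unsigned)f.cf, (unsigned)f.of);
    /* Expected: r=0 zf=1 sf=0 cf=1 of=0 */
    return 0;
}
```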
A quick Google of what Apple did to support x86 flags: they added undocumented bits to their own flags register to help support 8080 instructions.
Now to do speculation on top of speculation on top of speculation: Valve's next VR headset, Deckard / Steam Frame, is also rumored to be using an ARM chip, and with them being quite close with AMD since the Steam Deck custom APU (although that one was apparently just something originally intended for Magic Leap before that fell apart), this could be in there and be powerful enough to run standalone VR.
Could be an interesting chip for a future Raspberry Pi model? With Radeon having nice open source drivers, it would be easy to run a vanilla Linux OS on it. The TDP looks compatible as well.
Personally I totally understood why AMD gave up on its last attempt - the A1100 Opterons - about 10 years ago in favor of the back-then-new Ryzen architecture:
But what I would really like to see: an ARM SoC/APU on an "open"*) (!) hardware platform similar to the existing amd64 PC hardware.
*) "open" as in: I'm able to boot whatever (vanilla) arm64 Linux distribution or other OS I want ...
I have to add: I'm personally offended by the amount of tinkering with the firmware/boot process which is necessary to get, for example, the Raspberry Pi 5 (or 4) to boot vanilla Debian/arm64 ... ;)
br,
a..z
PS: even if it's a bit off-topic in this context, as a reminder, a link to a slightly older article about an interview with Jim Keller about how ISA no longer matters that much ...
Some people, for some strange reason, want to endlessly relitigate the old 1980s RISC vs CISC flamewars. Jim Keller's interview above is a good antidote for that. Yes, RISC vs CISC matters for something like a simple in-order core you might see in embedded systems. For a big OoO core, much less so.
That doesn't mean you'd end up with x86 if you'd design a clean sheet 'best practices' ISA today. Probably it would indeed look something like aarch64 or RISC-V. So certainly in that sense RISC won. But the win isn't so overwhelming that it overcomes the value of the x86 software ecosystem in the markets where x86 plays.
I don't think I'm using x86 for anything anymore. All the PCs in my home are ARM, the phones are ARM, the TVs are ARM and even the webservers I'm running are ARM nowadays.
I always wonder why nobody has ever released a Framework mainboard with a Rockchip SoC. There is even one with a - very - slow RISC-V chip for OS developers, FFS.
It would have to be integrated with other components in the laptop. The RISC-V mainboard was not made by DeepComputing alone. For Framework it was intended as a pilot study in these kinds of partnerships.
There are two predominant architectures right now (right or wrong), amd64 and arm64. Why the F would AMD invest in RISC-V when their GPUs are well above Intel's in specs? And someone explain the business/market approach for RISC-V...
> Memory support is another highlight: the chip integrates a 128-bit LPDDR5X-9600 controller and will reportedly include 16 GB of onboard RAM, aligning with current trends in unified memory designs used in ARM SoCs. Additionally, the APU carries AMD’s fourth-generation AI engine, enabling on-device inference tasks
128-bit LPDDR5X-9600 is about 150 GB/s; that's 50% better than an Orin NX. If they can sell these things for less than like $500 then it would be a pretty decent deal for edge inference. 16 GB is ridiculously tiny for the use case though, when it's actually more like 15 in practice and the OS and other stuff then takes another two or three, leaving you with like 12 maybe. Hopefully there's a 32 GB model eventually...
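For anyone checking the math, the ~150 GB/s figure falls straight out of the quoted interface width and transfer rate (a quick sanity check, assuming the reported specs are accurate):

```c
/* Peak-bandwidth sanity check for a 128-bit LPDDR5X-9600 interface. */
#include <stdio.h>

int main(void) {
    double transfers_per_sec = 9600e6; /* 9600 MT/s per pin */
    double bus_width_bits    = 128.0;
    double bytes_per_sec     = transfers_per_sec * bus_width_bits / 8.0;
    printf("peak = %.1f GB/s\n", bytes_per_sec / 1e9); /* ~153.6 GB/s */
    return 0;
}
```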
Wow. This could really be a big deal, especially if it’s more of an openly available product than what Qualcomm has on offer.
For me personally I’d love it if this made it to a framework mainboard. I wouldn’t even mind the soldered memory, I understand the technical tradeoff there.
That will probably happen eventually, but right now RISC-V only has the horsepower for embedded or peripheral uses. It will continue to nip at ARM's heels for the next 5-10 years.
Seems there's an awful lot of slideware in the high-performance RISC-V department. We'll see when/if the rubber hits the road, I suppose.
Longer term, I think the future looks bright for RISC-V. If nothing else, at least the Chinese are investing heavily into it, for obvious reasons wrt avoiding sanctions and such.
AMD has been making ARM chips for a long time (they bought Xilinx and have been an ARM licensee for forever). This is just their first APU (graphics plus cpu) with an ARM core as the CPU.
https://www.amd.com/en/products/adaptive-socs-and-fpgas/soc....
Maybe it would be better for them to enter the RISC-V market. My working thesis is that SoftBank had milked the Arm cow dry. That is why they took it public to pawn the dregs off onto the retail investor. Paying for an Arm license is wasted money and akin to paying for an OS these days.
> Maybe it would be better for them to enter the RISC-V market
The demand isn't there for a RISC-V product. AMD is exploring this space[1][2] but they aren't bringing parts to market because sufficient demand isn't there yet.
1. https://www.amd.com/en/products/software/adaptive-socs-and-f...
2. https://www.amd.com/en/products/software/adaptive-socs-and-f...
> The chip is expected to power future Microsoft Surface products scheduled for release in 2026.
They would have to persuade MS to create Windows for RISC-V in this case.
This is a "why not both" strategy: ARM has the market share whereas the RISC-V ecosystem is still being built up. Once you have a RISC based chip, it's not nearly as much work to change to another RISC ISA.
Arm IP cores are catching up to Apple silicon now with Arm-C1
Legendary Chip Architect, Jim Keller, Says AMD ‘Stupidly Cancelled’ K12 ARM CPU Project After He Left The Company: https://wccftech.com/legendary-chip-architect-jim-keller-say...
Could be a revival but for different purposes
I don't think AMD had the money to properly execute on both Zen and this K12 ARM chip. So they chose the safer bet of Zen, which seems to have worked out really well for them.
Funny how some of his projects got cancelled, like K12 at AMD or Royal Core at INTC, and people always act like that was a terrible decision, yet AMD is up like 100x on the stock market and INTC... time's gonna tell.
Seems completely uncorrelated with what is discussed especially considering Intel didn’t enter the ARM market either.
Would make much more sense to compare with Qualcomm trajectory here as they dominate the high end ARM SoC market.
Basically AMD missed the opportunity to be first mover on a market which is now huge with a project Apple proved to be viable three years after the planned AMD release. Any way you look at it, it seems like a major miss.
The fact that other good decisions in other segments were made at the same time doesn’t change that.
> Basically AMD missed the opportunity to be first mover on a market which is now huge with a project Apple proved to be viable three years after the planned AMD release. Any way you look at it, it seems like a major miss.
I don't think this is a fair position. It could as well be that focusing on K12 would have delayed Zen, maybe delaying it enough that it could have become irrelevant by the time it got to market.
Remember that while Zen was a good CPU, the only reason it made as much impact as it did was because it also was released at a good time (when Intel was stumbling with 10nm and releasing Skylake refresh after Skylake refresh).
AMD was a pretty stripped down company at that point. They'd bet it all on Zen so when it got a foothold it made sense to double down on it until they could recover.
The thing about being broke is you may know about good opportunities but not have the resources to actually make use of them.
>> It could as well be that focusing on K12 would have delayed Zen, maybe delaying it enough that it could have become irrelevant by the time it got to market.
Agree. AMD stock was under $2 prior to Zen. Buying was a bet that Zen would be competitive with Intel, in which case the stock would come back; otherwise they were doomed. The first Zen chips were in fact competitive, beating Intel in some benchmarks and losing in others. That would have brought back competition, but who knew Intel would flounder for many more years while Zen got a nice uplift with each generation! Delaying Zen would have been bad for AMD, but in hindsight that wouldn't have mattered so long as they could stay afloat til it launched.
>Basically AMD missed the opportunity to be first mover on a market which is now huge with a project Apple proved to be viable three years after the planned AMD release. Any way you look at it, it seems like a major miss.
No man, Apple basically had the power to frog-march its app devs to a new CPU arch. That absolutely would not have happened in the Windows ecosystem given the amount of legacy apps and (arguably more importantly) games. For proof of this you need look no further than Itanium and Windows on ARM.
Even more so their hardware buyers.
If most Intel hardware makers had gone full ARM, they would simply have lost market share. Apple customers are going to buy Apple hardware—whatever it has inside.
But of course Apple controls not just the hardware but the OS. So ya, if only Apple hardware will run your application, you are going to port to that hardware.
Apple has a massive advantage in these transitions for sure.
> apple basically had the power to frog march it's app devs to a new cpu arch
Microsoft's ARM transition execution has been poor.
Apple's Rosetta worked on day one.
Microsoft's Prism still has some issues, but at release its compatibility with legacy x86 software was abysmal.
Apple's first party apps and developer IDE had ARM versions ready to go on day one.
Not so for Microsoft.
Apple released early Dev Kit hardware before the retail hardware was ready to go (at very low cost).
Microsoft did not.
Microsoft already had an example of how to do this in a reasonable fashion. Not only that, but the original developer was an ARM licensee. And then finally, during that era Windows was still being developed for multiple architectures.
https://en.wikipedia.org/wiki/FX!32
Apple and Qualcomm _have_ to use the Arm ISA because they don't have an x86 license. Apple would have likely stayed on x86 if they could use it in their in-house designs. Intel wouldn't issue an x86 license to Qualcomm or Apple, of course.
Very unlikely. Moving to Arm allowed Apple to have a single architecture across all their hardware and leverage iPhone designs as the starting point for Mac SoCs.
If Intel had licensed x86 for use in Apple's own finished computers, Intel would be in a way better position. Foolish not just to lose that customer but also to legitimize ARM as a desktop and high-end option.
I think Apple would have switched anyway though. They designed Apple Silicon for their mobile devices first (iPhone, iPad) which I doubt they would have made x86. The laptops and desktops are the same ISA as the iPhone (strategically).
There was a rumour on here that aarch64 was actually designed by Apple and given to ARM to standardize
It’s a false rumour. Arm has decades of ISA design experience and their chief architect has talked about designing Aarch64.
Sure, Apple and Arm worked together, but it wasn't developed by Apple and given to Arm.
If AMD released a desktop class ARM processor at that time, what software would it have run?
Apple had already switched cpus in Macs twice, it's not surprising that they could do it again, but would they have switched from Intel x86 to AMD ARM when they never used any AMD x86? Seems unlikely.
Focusing on a product that would sell on day one rather than one that would need years to build sales makes sense for a company that was struggling for relevance and continued operations.
Apple has way stronger leverage than AMD when it comes to forcing "new standards", let's say.
AMD cannot go and tell its customers "hey, we are changing ISA, go adjust." Their customers would run to Intel.
Apple could do that and forced its laptops to use it. Developers couldn't afford losing those users, so they adjusted.
It’s a chicken and egg problem.
Nobody supports the new ISA because there is no SoC and nobody makes the new SoC because there is no support. But in this case, that’s not really true because Linux support was ready.
More than forcing volumes, Apple proved it was worth it because the efficiency gains were huge. If AMD had released a SoC with numbers close to the M1 before Apple, targeting the server market, they would have had a very good shot at it being a success and leveraging that into success in the laptop market, where Microsoft would have loved to have a partner ready to fight Apple and had to wait for Qualcomm for ages.
Anyway, I stand by the point that looking at how the stock moved tells us nothing about whether the cancellation was a good or a bad decision.
>More than forcing volumes, Apple proved it was worth it because the efficiency gains were huge. If AMD had released a SoC with numbers close to the M1 before Apple, targeting the server market, they would have had a very good shot at it being a success and leveraging that into success in the laptop market, where Microsoft would have loved to have a partner ready to fight Apple and had to wait for Qualcomm for ages.
Apple proved that creating your own high-end consumer SoC was a doable and viable idea thanks to TSMC, and could result in better chips due to designing them around your needs.
And which ISA could they use? x86? Hard to say, probably not. So they had RISC-V and ARM.
Also about Windows...
If Panther Lake on 18A actually performs as well as expected, then why would anyone move to ARM on Windows when viable energy-efficient CPUs like LNL and PTL are available?
> If Panther Lake on 18A actually performs as well as expected, then why would anyone move to ARM on Windows when viable energy-efficient CPUs like LNL and PTL are available?
Well yes, exactly, that’s the issue with arriving 10 years later instead of being first mover. The rest of the world doesn’t remain unmoving.
> Apple proved it was worth it because the efficiency gains were huge
Thing is, those efficiency gains are both in hardware and software.
Will a Linux laptop running the new AMD SoC use 5 W while browsing HN like this M3 pro does?
5 W while browsing is already less efficient than my old laptop with a Zen 2 CPU (and most of the power is consumed by the display). Newer CPUs or SoCs should do quite a bit better than that.
During "light" browsing pretty much any laptop's power use is massively dominated by things that aren't the CPU, assuming there's been any attempt at enabling that use case (which doesn't always seem to be the case for many SKUs, certainly on the cheaper end).
A huge amount of Apple's competitive edge is in the "other 90%", but they don't seem to get the headlines.
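If you want to measure that "5 W while browsing" number on a Linux laptop yourself, the battery discharge rate is usually exposed through sysfs. A rough sketch only; the exact path varies, power_now is not present on every machine, and some expose current_now/voltage_now instead:

```c
/* Rough sketch: read the battery discharge rate from sysfs on Linux.
   Assumes /sys/class/power_supply/BAT0/power_now exists and reports
   microwatts; many machines expose current_now/voltage_now instead. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/class/power_supply/BAT0/power_now", "r");
    if (!f) {
        perror("power_now");
        return 1;
    }
    long microwatts = 0;
    if (fscanf(f, "%ld", &microwatts) == 1)
        printf("battery draw: %.2f W\n", microwatts / 1e6);
    fclose(f);
    return 0;
}
```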
It's good that x86 is coming close.
Does Windows have working sleep now? I hear it's dangerous to throw a wintelmd laptop in a backpack without shutting it down.
Did your laptop display have the same brightness and pixel density?
Steam Deck does about 8w.
Isn't the Deck x86 though?
> Their customers would run to Intel.
Data centers and hosting companies are probably the biggest customers buying AMD CPUs, no?
If those companies could lower their energy and cooling costs that could be a strong incentive to offer ARM servers.
What kind of difference are we talking about?
1%? 3%? 6%? 10%? 30%?
No idea but it should be significant. AFAIK cooling and energy are the biggest data center costs.
Except they literally did exactly that with x86-64 so I’m confused by your comment.
Isn't x86 64 backward compatible, so that's fine?
Intel specifically exited the general-purpose ARM market back about 20 years ago when it sold its XScale division to Marvell. I believe it kept making the ARM chips for use in network controllers and other specific purpose chips.
Intel failed to anticipate the smartphone revolution despite RIM being a customer of XScale. To be fair, they only entered because they got StrongARM from a lawsuit settlement with DEC in 1997, and they sold to refocus on more strategic segments, which turned out to actually be a lot less interesting. I don’t think Intel can really be seen as a model of good strategic thinking.
But all of this is a decade before what we are discussing here. I didn’t even remember XScale existed at Intel while writing my first comment.
Allen Baum has some inside baseball on this:
https://youtu.be/wN02z1KbFmY?si=Gnt4DHalyKLevV2p
From 2:03:30 he points out that the only purpose of the DEC lawsuit was to facilitate the sale to Compaq without the microelectronics group.
Intel has made and killed ARM SoCs since then. Like Keem Bay.
>> Seems completely uncorrelated with what is discussed especially considering Intel didn’t enter the ARM market either.
I don't think AMD should be following Intel in markets outside x86. I want to see them go RISC-V with a wide vector unit. I'd like to see Intel try that too, but they're kind of busy fixing fabs right now.
>market which is now huge
The SoC market is McDonald's. It's huge in the same way the soybean industry is huge: a zero-margin commodity.
Yeah, sure, remind me what Qualcomm's results were last year. 10 billion?
But, don't get me wrong, I wouldn't spit on McDonald's 6 billion either, and the soybean market is one of the fastest growing in the agrifood business, with huge volume traded, probably one of the most profitable commodities at the moment.
> Yeah, sure, remind me what Qualcomm's results were last year. 10 billion?
How much of Qualcomm's profit comes from providing yet another ARM chip vs. all the value-added parts they provide in their ARM SoCs, like all the radio modem stuff necessary for mobile phones?
Now that's kind of a rhetorical question, not sure a clear answer exists, at least not outside Qualcomm internal finance figures. Food for thought, though.
(That's sort of the logic behind RISC-V as well. The basic ISA and the chip that implements it is a commodity, the value comes from all the application specific extra stuff tacked on to the SoC.)
> Seems completely uncorrelated with what is discussed especially considering Intel didn't enter the ARM market either.
Maybe the folks at Intel just didn't want to StrongARM their competitors?
Is the stock up because of them or despite them?
In the case of AMD it's definitely because of them and the great leadership from Lisa Su.
AMD is doing well because they moved on chiplets before Intel did. The decision of ARM vs x86 is pretty much unrelated to the move that saved them, and sticking with the architecture with which they had decades of experience was probably a good idea.
I mean Keller is talking about a decision to not pursue an ARM chip that he’d apparently been working on after(?) Zen 2 (or maybe in parallel). So AMD was already back on a good path at that point.
It is hard to evaluate it reliably.
Stock valuation is a horrible measure of how well a company has planned for the future. Time has demonstrated this again and again and again.
But in this context that future is now and AMD is way better than it was around 2014-2017.
Cult of personality... or maybe people just want cool stuff for fun.
Keller himself credits the many people responsible for the contributing parts [0]. I think the general 'enthusiast' tech press and reporting likes hero figures and the simplicity it brings, even better when you can cast a good guy against a bad guy, and the background in this case would be AMD vs intel.
[0] https://web.archive.org/web/20210622032535/https://www.anand...
Humans inherently like having a narrative. When we discuss historical events, we typically want to have a clearly defined leader and/or visionary upon whom to pin events. Without this, our imaginations aren't as engaged, and therefore emotions aren't stirred. For example, the stories of early game companies are great because the teams were very small, a narrative can be written, and the product was fun. With modern games, budgets are massive, teams are massive, and things are often designed and approved by committee. The result can be beautiful and fun, but the story getting there isn't as entertaining.
He was probably right from a technical perspective. Maybe not from a business one.
I believe Jim Keller is now working on RISC-V which could take the server market by storm in the next 5 years or so.
There are already RISC-V server offerings:
https://labs.scaleway.com/en/em-rv1/
AMD was being pummeled by Intel during the time Jim was there and was only the giant we know today when Jim left. AMD did this, mostly, by x64 server market. So AMD did what it could to get around intel's and apple's monopolies. Intel on the server end came in at a higher price, and often worse performance or more power usage, or both. I'm not sure who AMD would have sold ARM to in 2010's. Apple didnt want their product and made their own, cell phone companies were cozy with the established ARM vendors, MS wouldnt launch an actual ARM laptop for years, data centers didn't want it, etc.
Look at Intel's various ARM or embedded offerings it keeps canceling. It can't find buyers. Qualcomm, Samsung, and other vendors just keep eating up ARM sales.
Now I imagine AMD sees ARM servers as the future and wants to make sure not to be left behind, on top of ARM desktop/laptop and further embedded.
I think this is mostly a sign the world is now moving away from the old x86/64 system that ruled technology for so long. AMD needs to stay competitive here.
He also left AMD 10 years ago (2015).
https://en.wikipedia.org/wiki/Jim_Keller_(engineer)
Consider that AMD was not far from bankruptcy. They couldn't even execute on their GPU chips, even though they were half of a duopoly with Nvidia, and they mostly missed the AI wave. Did they even have the capacity to work on ARM on top of that?
It is easy to talk after the fact. The giant market opener for ARM was Apple; many times in business it is better to be a follower once a big market arises.
Anybody else find it very confusing that this is called Sound Wave when it's not a specific chip for sound synthesis applications?
From the name you'd expect a simple sound card, but look deeper and there is more than meets the eye [1]
[1]: https://en.wikipedia.org/wiki/Soundwave_(Transformers)
I was hoping it'd be a very cool soundcard, perhaps with unlimited General Midi channels.
If somebody ever buys up the Gravis Ultrasound name, you’ll know things are about to get wild.
10^5 orchestra hit polyphony.
Finally a realistic helicopter sound?
what is the reference here, out of curiosity
Not sure what their intention is, of course, but nowadays there are A LOT of Cortexes in various sound gear. Plenty in things like Eurorack but also outboard equipment like the Eventide H9000 etc.
Perhaps it is named after the Decepticon?
A common trend in audio systems is that the market is too small for economies of scale when it comes to commodity parts like processors. There are a handful of audio-specific chips that are common, but processors are not one of them (any more).
I mean, "Coffee Lake" does not make much sense either.
Yeah. I imagine heaven with a lake full of fresh, delicious, and aromatic coffee... instead, there's just... a 14nm part (surprise!) on socket 1151.
I can't even remember how many plusses into 14nm we were when Coffee Lake dropped.
Better (or simply more) ARM processors, no matter who makes them, are a win. They tend to be far more power-efficient, and with performance-per-watt improving each generation, pushing for wider ARM adoption is a practical step toward lowering overall energy consumption.
With the caveat that ARM isn't an industry standard the way the PC has become, so while proprietary OSes can thrive, FOSS faces a much bigger challenge beyond OEM-specific distros or downstream forks.
Stuff like this, https://www.amazon.de/-/en/Microsoft-Corporation/dp/15723171...
There are the Arm SystemReady and ServerReady requirements/specifications that enable generic board support by the OSes.
Thanks, I thought we were still on device trees and little else.
Practically speaking, very few systems actually support SystemReady. There's an experimental port of edk2 for the Raspberry Pi, but some hardware is unavailable when using it.
The ARM server platforms seem quite decent here? But yeah, pick any small dev board and I suspect it looks quite different.
Are ARM processors inherently power efficient? I doubt it.
Performance per watt is increasing due to the lithography.
Also, Jevons paradox.
They aren't inherently power efficient because of technical reasons, but because of design culture reasons.
Traditionally x86 has been built powerful and power hungry and then designers scaled the chips down whereas it's the opposite for ARM.
For whatever reason, this also makes it possible to get much bigger YoY performance gains in ARM. The Apple M4 is a mature design[0] and yet a year later the M5 is CPU +15% GPU +30% memory bandwidth +28%.
The Snapdragon Elite X series is showing a similar trajectory.
So Jim Keller ended up being wrong that ISA doesn't matter. It's just that it's the people in the ISA that matter, not the silicon.
[0] Its design traces all the way back to the A12 from 2018, and in some fundamental ways even to the A10 from 2016.
As far as I know people aren't part of ISA :)
People are absolutely part of an ISA's ecosystem. The ISA is the interface between code and CPU, but the code is generally emitted by compilers and executed in the context of runtimes and operating systems, all designed by people and ultimately dependent on their knowledge of and engagement with the ISA. And for hot code in high-performance applications, people will still be writing assembler directly against the ISA.
ISA != ISAs ecosystem
ISA is just ISA
But you get the ecosystem for free if you choose the ISA, so ISA => ISA ecosystem. It really does matter when making a decision.
You’re conveniently skipping the part where x86 can run software from 40 years ago but ARM can drop entire instruction sets no problem (e.g. Jazelle).
Had ARM been so weighed down by backwards compatibility, I doubt it would be as good as it is.
I really think Intel/AMD should draw a line somewhere around the late 2000s and drop compatibility with stuff that slows down their processors.
> jazelle
That’s a blast from the past; native Java bytecode! Did anyone actually use that? Some J2ME phones maybe? Is there a more relevant example?
Do you have any actual evidence for that? Intel does care about power efficiency - they've been making mobile CPUs for decades. And I don't think they are lacking intelligent chip designers.
I would need some strong evidence to make me think it isn't the ISA that makes the difference.
https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
Basically, x86 uses op caches and micro-ops, which reduces instruction decoder use; the decoder itself doesn't use significant power, and ARM also uses op caches and micro-ops to improve performance. So there is little effective difference. Micro-ops and branch prediction are where the big wins are, and both ISAs use them extensively.
If the hardware is equal and the designers are equally skilled, yet one ISA consistently pulls ahead, that leads to the likely conclusion that the way the chips get designed must be different for teams using the winning ISA.
For what it's worth, the same is happening in GPU land. Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W (!).
That same M1 also smoked an Intel i9.
> Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W
That's not true.
ARM doesn't use micro-ops in the same way as x86 does at all. And that's not the only difference, e.g. x86 has TSO.
I'm not saying the skill of the design team makes zero difference, but it's ludicrous to say that the ISA makes no difference at all.
The claims about the M1 Ultra appear to be marketing nonsense:
https://www.reddit.com/r/MachineLearning/comments/tbj4lf/d_a...
Isn't Lunar Lake Intel's first mobile chip with a focus on energy efficiency? And it is reasonably efficient.
We will see how big an improvement its successor, Panther Lake, is in January on the 18A node.
>I would need some strong evidence to make me think it isn't the ISA that makes the difference.
It is like saying that Java syntax is faster than C# syntax.
Everything is about the implementation: compiler, jit, runtime, stdlib, etc
If you spent decades of effort on performance and GHz, then don't be shocked that someone who spent decades on energy efficiency is better in that category.
> Isn't Lunar Lake first mobile chip with focus on energy eff?
Not by a long shot.
Over a decade ago, one of my college professors was an ex-intel engineer who worked on Intel's mobile chips. He was even involved in an Intel ARM chip that ultimately never launched (At least I think it never launched. It's been over a decade :D).
The old conroe processors were based on Intel's mobile chips (Yonah). Netburst didn't focus on power efficiency explicitly so and that drove Intel into a corner.
Power efficiency is core to CPU design and always has been. It's easy to create a chip that consumes 300W idle. The question is really how far that efficiency is driven. And that may be your point. Lunar Lake certainly looks like Intel deciding to put a lot of resources into improving power efficiency. But it's not the first time they did that. The Intel Atom is another decades-long series which was specifically created with power in mind (the N150 is the current iteration of it).
Actually, if you had made an opposite example, it might have gone against your point. ;) C# gives you a lot more control over memory and other low-level aspects, after all.
That’s semantics though, not syntax. What’s holding Java performance back in some areas is its semantics.
It might be the same with x86 and power-efficiency (semantics being the issue), but there doesn’t seem to be a consensus on that.
Yet how much perf in recent .NET versions comes from that, and how much comes from "Span<T>"-ing the whole BCL?
There’s much more to it than just Span<T>. Take a look at the performance improvements in .NET 10: https://devblogs.microsoft.com/dotnet/performance-improvemen.... When it comes to syntax, even something like structs (value types) can be a decisive factor in certain scenarios. C# is fast and with some effort, it can be very fast! Check out the benchmarks here: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
I know that C# is fast, this is my favourite lang, but it is hard to say honestly which one is faster
I love the saying "i dont trust benchmarks that i didn't fake myself"
> It is like saying that Java syntax is faster than C# syntax.
Java and C# are very similar so that analogy might make sense if you were comparing e.g. RISC-V and MIPS. But ARM and x86 are very different, so it's more like saying that Go is faster than Javascript. Which... surprise surprise it is (usually)! That's despite the investment into Javascript implementation dwarfing the investment into Go.
Intel has made SOC designs with power efficiency very, very close to M series. Look at lunar lake and compare it to what was available at the time.
According to an AMD engineer I asked at the time, when they evaluated Ryzen/K12, it was "maybe" a 15% advantage for ARM depending on scenarios.
The efficiency came solely from the frontend, which is a lot heavier on x86 and stays active longer because decoding is way more complex. The execution units were the same (at least mostly, I think, might be misremembering), so once you are past the frontend there's barely any difference in power efficiency.
Aside from lithography there's clever design. I don't think you can quantify that but it's not nothing.
Actually, power efficiency was a side effect of having a straightforward design in the first ARM processor. The BBC needed a cheap (but powerful) processor for the Acorn computer, and a RISC chip was the answer. When ARM started testing their processor, they found out it drew very little power...
... the rest is history.
You're getting your history mixed up.
Acorn won the bid to make the original BBC home computer, with a 6502-based design.
Acorn later designed their own 32-bit chip, the ARM, to try to leapfrog their competitors who were moving to the 68000 or 386, and later spun off ARM as a separate company.
The BBC Micro had a 6502
Yes they are. The RISC philosophy, beyond the instruction set itself, is also about low gate count (so less energy used).
Nvidia can design a super clean solution from scratch - I can bet $50 that it's gonna be more efficient in MIPS/watt.
Most of the gates on a CPU are not spent on instruction decoding.
ISA is not that relevant, it is all about what you want to achieve with your CPU
For a CPU vendor, ISA is very relevant: most buyers will start their buying decision with ISA choice already fixed, and a vendor who can't offer a CPU with that ISA simply isn't in the race.
It does not matter whether you are a believer in horses for courses when it comes to ISA, or a believer in "frontend ISA does not matter because it's all translated away anyway": when buyers don't want what you have, you are out. And buyers are more like a stampeding herd than rational actors when it comes to ISA choice. I'd see offering CPUs for multiple ISAs as an important hedge against the herd changing direction.
The context is: an ISA's performance and efficiency characteristics.
What AMD wants to achieve with their CPU: sell them, preferably at a nice profit. If ISA is truly not relevant for performance and efficiency characteristics, all the more reason for them to not bet on any particular ISA but spread out, to already be there wherever the buying herd goes.
Meh, performance-per-watt is not what everybody wants. I only want it in that it affords more raw performance by allowing more watts to be pumped through it without thermal overload. But if that can't actually happen then I'm still more interested in x86. Sure the lights dim when I turn my PC on, but I want the performance.
How is running desktop Linux on these?
I run desktop linux via postmarketOS on a Lenovo Duet 5 (Snapdragon 7c). It isn't the most powerful device and the webcam doesn't work but other than that it works well and battery life is excellent
> the webcam doesn't work
But.. ..why? Of all things, I would have expected the webcam to not be cpu-related..
IIRC, it's because the ARM designs tend to use camera modules that come from smartphone-land.
Cameras used on x86-64 usually just work using that usb webcam standard driver (what is that called again? uvcvideo?). But these smartphone-land cameras don't adhere to that standard, they probably don't connect using USB. They are designed to be used with the SoC vendor's downstream fork of Android or whatever, using proprietary blobs.
A similar thing is happening in Intel land recently, where the cameras use ipu6 / ipu7 chips rather than dumping simple frames over USB. But this way we get a higher resolution / quality at least.
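For anyone curious what "just works" means in practice, here is a minimal sketch, assuming a standard UVC webcam that the stock uvcvideo driver already exposes as /dev/video0 (which is exactly what the MIPI-camera laptops lack without extra downstream plumbing):

    /* Minimal sketch: query a UVC webcam via the standard V4L2 interface.
       Assumes the uvcvideo driver has already created /dev/video0. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void) {
        int fd = open("/dev/video0", O_RDONLY);
        if (fd < 0) { perror("open /dev/video0"); return 1; }

        struct v4l2_capability cap;
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
            printf("driver: %s, card: %s\n",
                   (char *)cap.driver, (char *)cap.card);

        close(fd);
        return 0;
    }

A MIPI camera behind a vendor ISP typically never answers this, because the frames only exist after the SoC-specific pipeline (and often a proprietary blob) has been set up.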
It’s usually MIPI or some variant. There’s probably a way to enable the video stream but you also have to talk to the control module itself which is on a different bus.
BTW. ChipsAndCheese has a recent article on MALL / Infinity Caches, evaluating it in the x86-based AMD Strix Halo APU:
https://chipsandcheese.com/p/evaluating-the-infinity-cache-i...
My guess from previous reporting on this is that it was an experiment that might never be released.
ARM isn't nearly as interesting given the strides both Intel and AMD have made with low power cores.
In any scenario where SoundWave makes sense, using Zen-LP cores aligns better for AMD.
AMD makes laptop CPUs with good performance per power consumption ratio, but they are designed for high power consumptions, typically for 28 W, or at least for 15 W.
AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumptions of 6 W or of 10 W and AMD never had any Zen CPU for this power range.
If the rumors about this "Sound Wave" are true, then AMD will finally begin to compete again in this range of TDP, a market that they have abandoned many years ago (since the AMD Jaguar and Puma CPUs), because all their resources were focused on designing Zen CPUs for higher TDPs.
For cheap and low-power CPUs, the expensive x86-64 instruction decoder may matter, unlike for bigger CPUs, so choosing the Aarch64 ISA may be the right decision.
Zen compact cores provide the best energy efficiency for laptops and servers, especially for computation-intensive tasks, but they are not appropriate for cheap low-power devices whose computational throughput is less important than other features. Zen compact cores are big in comparison with ARM Cortex-X4, Intel Darkmont or Qualcomm cores and their higher performance is not important for cheap low-power devices.
> AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumptions of 6 W or of 10 W and AMD never had any Zen CPU for this power range
A cursory search shows that the AMD APU used in the Valve Steam Deck draws 3-15W. Limiting the TDP to 6W on a Steam Deck is fine for Linux in desktop mode.
That is a custom APU made by AMD for a certain customer.
It is not a device that AMD sells on the open market, so it does not compete with the ubiquitous Intel N-series CPUs or with the Arm-based CPUs from various vendors.
Like I have said, since Jaguar and Puma, which are older than the first Zen, AMD has never sold on the open market any CPU/APU designed for a TDP of 10 W or less.
While for some AMD APUs, like Ryzen Z1, which are designed for a TDP of 15 W, their specification says that they have a TDP that is configurable down to 9 W, when such CPUs are configured for a lower TDP than they are optimized for, they become inefficient, by having a bigger die area, i.e. a higher cost, and a lower energy efficiency, in comparison with the CPUs that have been specifically designed for that lower power.
The page this article got its info from (https://www.ithome.com/0/889/173.htm) says (according to Safari’s translation):
“IT Home News on October 13, @Olrak29_ found that the AMD processor code-named "Sound Wave" has appeared in the customs data list, confirming the company's processor development plan beyond the x86 architecture”
I think that means they are planning to export parts.
I think there still is some speculation involved as to what those parts are, and they might export them only for their own use, but is that likely?
I think the chips are being imported from Taiwan to the US. At this point they are prototypes being tested. AMD wouldn't make a chip this complex for their own use; these were likely ordered by Microsoft or Valve.
Valve seems like a reasonable guess honestly.
It is interesting for AMD because having an on-par ARM chip means they can keep selling chips when the rest of the market switches to ARM. This is largely driven by Apple and by the cloud providers wanting more efficient, higher-density chips.
Apple isn’t going to switch back to AMD64 any time soon. Cloud providers will switch faster if X64 chips become really competitive again.
I am not sure if cloud providers want ARM - the most valuable resource is rack space, so you want to use the most powerful CPU, not the one using less energy.
> the most valuable resource is rack space
The limit is power capacity and quite often thermal. Newer DCs might be designed with larger thermal envelopes; however, rack space is nearly meaningless once you exhaust the thermal capacity of the rack/aisle.
Performance within thermal envelope is a very important consideration in datacenters. If a new server offers double performance at double power it is a viable upgrade path only for DCs that have that power reserve in the first place.
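To make that concrete, a toy back-of-envelope (all numbers made up for illustration): with a fixed rack power budget, a server that doubles performance at double power gives the same performance per rack, not more.

    /* Toy illustration with made-up numbers: 2x perf at 2x power buys
       nothing once the rack is power-capped. */
    #include <stdio.h>

    int main(void) {
        double rack_kw = 10.0;                      /* assumed rack power budget */
        double old_kw = 0.5, old_perf = 1.0;        /* arbitrary perf units */
        double new_kw = 1.0, new_perf = 2.0;        /* 2x perf at 2x power */

        int old_servers = (int)(rack_kw / old_kw);  /* 20 servers fit */
        int new_servers = (int)(rack_kw / new_kw);  /* only 10 fit */

        printf("per-rack perf: old %.0f, new %.0f\n",
               old_servers * old_perf, new_servers * new_perf);  /* 20 vs 20 */
        return 0;
    }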
Well, Amazon does offer Graviton 4 (quite fast and useful stuff) alongside their Epyc machines, so there is some utility to them. A 9654 is much faster than a Graviton 4.
EDIT: Haha, I was going off our workloads but hilariously there are some HPC-like workloads where benchmarks show the Graviton 4 smoking a 9654 https://www.phoronix.com/review/graviton4-96-core/4
I suppose ours must have been more like the rest of the benchmarks (which show the 9654 faster than the Graviton).
Cooling takes up rack space, too. There also are workloads that aren’t CPU constrained, but GPU or I/O constrained. On such systems, it’s better to spend your heat budget on other things than CPUs.
> the most valuable resource is rack space
I've always heard it's cooling capacity. I'm also pretty confident that's true
AWS, Google, Hetzner all offer a discount if you use an ARM64 VPS.
Clearly, they want them, because there's demonstrated power savings.
Rack space limits include power limits. E.g. 10 kW per rack.
> given the strides both Intel and AMD have made with low power cores.
Any pointers regarding that? How does the computing power to watts ratio look these days across major CPU architectures?
cough gaming device
Sounds like a PERFECT chip for my next HomeAssistant box :-D
- Low power when only idling through events from the radio networks
- Low power and reasonable performance when classifying objects in a few video feeds.
- Higher power and performance when occasionally doing STT/TTS and inference on a small local LLM
My thoughts exactly! Although I may end up getting some Mini M1/M2 variant with Asahi Linux instead
Oh, I hope the price is low enough that this can be a real media-box chip competitor for streaming devices. Nvidia's Shield Tegra chip from 2015 is still one of the best in this space, and with Nvidia making all the AI money, it is not interested in making a new device. The Apple TV, the only real alternative, does not support audio passthrough, so it is not as open as Android or Linux media boxes.
I think Amlogic, Mediatek and Qualcomm all have SoC which are significantly better than the Tegra for this use case. It’s just that the market barely exists as most consumers use their tv directly so no one really wants to make a media box anymore.
Yeah, whether AMD is willing to go after the low end of the market has long been TBD. Intel's N100/N97/N150 is everywhere and very affordable, seemingly, based on some of the system prices. AMD doesn't have anything remotely like it.
The chip here is an interesting mix. Fast LPDDR5X-9600! But fewer GPU CUs than most APUs: 4, down from 6. But if it comes with the other fixings, like a good video engine and AMD's very good drivers, it could be a real win.
Also a little hopeful that AMD rebadging its Zen 1 and Zen 2 chips again might possibly open up some decent low-end space, but Sound Wave with more modern solutions would be a very nice-to-have power-efficient low end.
If AMD releases an APU that is just using off-the-shelf ARM Cortex cores, it will be completely uninteresting and won't matter. Lots of companies have done that. I'd love to see them dedicate an amount of resources to an ARM processor that makes it competitive with their Ryzen x86 line.
While I agree with the general idea, I think the sales pitch is less "a bunch of ARM Cortex cores" and more "a bunch of RDNA2 cores stuck to the same Cortex cores you're used to".
For things like the automotive industry or industrial applications, it could have some sense. Most are high-margin industries ready to commit to specific architectures.
I don't see why Sound Wave would have any advantage, even efficiency, over a similar Zen 5/6 design. Microsoft must really want ARM if they're having this chip made.
It could just be a play to make sure there's a second source to qualcomm
The core count is relatively low though. 2P + 4E, whereas Snapdragon-X are 8 or 10 performance cores, indicating that this could be for a low-end tablet ... or game console?
it does say microsoft surface in the post
They made countless attempts to use ARM, but all failed. Consumers didn't care because they couldn't run their software. Microsoft won't solve the problem until they provide a way to run all relevant software on ARM.
Microsoft already designed a modified ARM ABI [1] compatible with emulated X86-64 just for this transition. But it's a Windows 11 feature. I wonder if the refusal of many of us to switch from Windows 10 is part of the reason why they're still idling on an ARM strategy.
[1]: https://learn.microsoft.com/en-us/windows/arm/arm64ec-abi
Part of the issue was incomplete amd64 emulation on windows which is why several MS products continued to ship 32bit - because while they might recompile their software for ARM, business users had binary-only extensions that they expected to continue using.
A year or two ago I used a Windows 11 laptop with an ARM CPU, and at least for me everything just worked. The drivers weren't as good, but all my x86-64 software ran just fine
It's pretty decent. Decent enough, in fact, that I can run a Windows 11 ARM install on VMware Fusion on my M4 Pro MacBook, and it will happily run Windows ARM and x86 binaries (via the built-in MS x86 emulation) decently fast and without complaint (we're talking apps; gaming I haven't tried).
Everything but OpenGL - there is a blender store app that translates GL to DirectX
Apple did an excellent job doing the switch. I don't see why it should fail here.
Well, I'm eager to use it. For my home server I use an old power-hungry Epyc 7B13. It's overkill but it can run a lot of things (my blog, other software I use, my family's various pre-configured MCPs we use in Custom GPTs, rudimentary bioinformatics). The truth though is that I hate having to cross-compile from my M1 Mac to the x86_64 server. I would much rather just do an ARM to ARM platform cross-compile (way easier to do and much faster on the Orbstack container platform).
So I went out looking for an ARM-based server of equivalent strength to a Mac Mini, and there's really not that much out there. There's the Qualcomm Snapdragon X Elite, which is really only in one actual buyable thing (the Lenovo IdeaCentre) and some vaporware Geekom-or-something product. But this thing doesn't have very good Linux support (it's built for ARM Windows, apparently), and it's much costlier than some Apple Silicon running Asahi Linux.
So I'm eventually going to end up with some M1 Ultra Studio or an M4 Mini running Asahi Linux, which seems like such a complete inversion of the days when people would make Hackintoshes.
ampere?
I looked into them but they didn't seem price/performance/watt competitive.
Couldn't you switch up the decoder logic and make it a RISC-V chip and just blow away existing competition that isn't quite Pi yet?
The decoder is made by Arm who is never going to do that.
I think that is the point being made. Replace the ARM decoder with a RISC-V one and make a RISC-V chip with SoundWave performance using the RISC-V ISA.
The fact that you have to argue with ARM about what you are allowed to do is the main reason not to use ARM. RISC-V is not about cost; it is about control. ARM suing Qualcomm to stop Elite X should be everything the industry needs to know to choose RISC-V wherever possible.
If you are going to launch a chip for yourself (like Apple did with Apple Silicon) or Amazon did with Graviton, I would choose RISC-V over ARM if starting today. That is what Tenstorrent did for their platform. I can see NVIDIA releasing their own RISC-V chip.
In the case of AMD, what are their customers asking for? Probably not RISC-V at this point (sadly). So ARM makes a lot of sense for them.
To get back to the original suggestion, replacing the ARM decoder in SoundWave with a RISC-V one, I do not know how feasible that is in practice. The entire chip is designed around the ISA, especially registers and memory model. It is not like compiling Kotlin instead of Java. Or rather, it could be like that if both ARM and RISC-V instructions were designed to compile down to the same micro-architecture (but they are not).
My point is that you can't take a core like X925 and modify it because Arm won't allow it. If you want a RISC-V core you have to design it yourself.
I want a hybrid APU, perhaps an x86 host with ARM co-processors that can be used to run arm64 code natively/do some clever virtualization. Or maybe the other way around, with ARM hosts and x86 co-processors. Or they can do some weird HMP stuff instead of co-processors.
I'm too dumb to know why?
Why have both to run native arm64 code? Nearly anything you'd want is cross compiled/compilable (save some macOS stuff but that's more than just CPU architecture).
My understanding is that ARM chips can be more efficient? Hence them being used in phones etc.
I guess it would let you run android stuff "natively"?
Or perhaps you imagine running Blender in x64 mode and discord in the low wattage ARM chip?
This is what the Nvidia Jetson AGX Xavier was supposed to be, but the x86 frontend wasn't shipped due to license issues.
https://en.wikipedia.org/wiki/Project_Denver
Rosetta shows translation works. Why complicate the os with multiple ISA?
Or put differently, why bake the CPU instruction sets into the chips? What Apple has shown is that emulating x86 can actually rival or be faster than a natively running x86 chip. There are currently two major ones (ARM, x86) and an up-and-coming minor one (e.g. RISC-V), and lots of legacy ones (SPARC, MIPS, PowerPC, etc.). All these can be emulated. Native compilation is an optimization that can happen at build time (traditional compilers), at distribution time (Android stores do this), just before the first run (Rosetta), or on the fly (QEMU).
Chip manufacturers need to focus on making power-efficient, high-performance workhorses. Apple figured this out first and got frustrated enough with Intel, who was more preoccupied with vendor lock-in than with doing the one thing they were supposed to do: developing best-in-class chips. The jump from x86 to M1 completely destroyed Intel’s reputation on that front. Turns out all those incremental changes over the years were them just moving deck chairs around. AMD was just tagging along and did not offer much more than them. They too got sidelined by Apple’s move. They never were much better in terms of efficiency and speed. So them now maybe getting back into ARM chips is a sign that times are changing and x86 is becoming a legacy architecture.
This shouldn’t matter. Both Apple and Microsoft have emulation capability. Apple is of course retiring theirs, but that’s more of a prioritization/locking strategy than it is for technical reasons. This is the third time they’ve pulled off emulation as a strategy to go to a new architecture: Motorola 68000 to PowerPC to x86 to ARM. Emulation has worked great for decades. It has broken the grip X86 has had on the market for four decades.
> Or put differently, why bake the CPU instruction sets into the chips?
There is more to a CPU instruction set than just instruction encodings. For instance, x86 has flags which are updated (sometimes partially) by a lot of instructions, and a stronger memory model (TSO), while RISC-V has its own peculiar ideas on the result of an integer division by zero.
> What Apple has shown is that emulating x86 can actually rival or be faster than a natively running x86 chip.
AFAIK, Apple has special support in its processors for emulating x86. It has a hardware mode which emulates the x86 memory model, and IIRC also has something in hardware to help emulate the x86 flags register.
> It has a hardware mode which emulates the x86 memory model, and IIRC also has something in hardware to help emulate the x86 flags register.
AFAIK, the memory model is the main missing piece. And it seems like it's certainly something that could be implemented separately. IMO, it's something the ARM group could (and probably should) easily add into the platform.
The flags register is a minor thing that's pretty easy to pull off. Most of the x86 instructions that mess with the flags have direct ARM instructions. The ones that don't can easily be emulated by burning a register and maintaining the flags in said register when needs be.
I think the other important thing to note is that while x86 has a wealth of exotic functions that do wild things, a lot of those instructions aren't generated by any modern compiler. Not saying you can't find a stray `ENTER`/`LEAVE` instruction in old software, it's just not likely. That significantly cuts down on weird instructions doing wild things harming performance.
A quick google about what Apple did to support x86 flags: they added undocumented bits to their own flags register to help support x86's 8080-derived flags.
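As a concrete (hypothetical) example of the "burn a register" approach mentioned above: x86's parity flag has no AArch64 counterpart, but it is cheap to recompute in software and stash wherever the emulator keeps its flag state.

    /* Sketch: recomputing x86's parity flag (PF) in software. PF is set
       when the low 8 bits of a result contain an even number of 1 bits;
       an emulator can keep it in a spare register or struct field. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static inline bool x86_parity_flag(uint64_t result) {
        return (__builtin_popcount((unsigned)(result & 0xff)) & 1) == 0;
    }

    int main(void) {
        printf("PF after 0x03: %d\n", x86_parity_flag(0x03)); /* two 1-bits  -> 1 */
        printf("PF after 0x07: %d\n", x86_parity_flag(0x07)); /* three 1-bits -> 0 */
        return 0;
    }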
>while RISC-V has its own peculiar ideas on the result of an integer division by zero.
Yeah, it does. Its architects knew that it is cheaper to, when necessary, check and branch if the divisor is zero than it is to deal with exceptions.
Thus that hardware budget can instead be used for making the chip faster and more power efficient.
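Rough sketch of what that convention looks like in practice: per the RISC-V spec, integer division by zero does not trap (DIV yields all ones, REM yields the dividend), so a runtime that wants an exception just emits its own compare-and-branch around a bare div instruction.

    /* Sketch: the explicit divisor check a compiler/runtime inserts when
       the language requires a division-by-zero error on RISC-V. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int64_t checked_div(int64_t a, int64_t b) {
        if (b == 0) {                  /* one compare-and-branch, usually well predicted */
            fprintf(stderr, "division by zero\n");
            abort();
        }
        return a / b;                  /* compiles down to a plain div */
    }

    int main(void) {
        printf("%lld\n", (long long)checked_div(42, 7));
        return 0;
    }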
risc-v would have been so much cooler.
why the downvote ? an explanation please...thank you!
Now to do speculation on top of speculation on top of speculation: Valve's next VR headset, Deckard / Steam Frame, is also rumored to be using an ARM chip, and with them being quite close with AMD since the Steam Deck custom APU (although that one was apparently just something originally intended for Magic Leap before that fell apart), this could be in there and be powerful enough to run standalone VR.
I don't know if the new processors will be better, but they will definitely be AI.
;-P
I have an AMD Seattle in a cupboard somewhere. https://rwmj.wordpress.com/2017/06/01/amd-seattle-lemaker-ce...
Could be an interesting chip for a future Raspberry Pi model? With Radeon having nice open source drivers, it would be easy to run a vanilla Linux OS on it. The TDP looks compatible as well.
This looks way too good for a RPi. They save money by using old and broken IP on an old fab.
i guess jeff geerling and others have been doing driver testing for them by running AMD GPUs on RPIs :P
It’s exciting to see AMD trying ARM again, competition always brings better chips for everyone.
They could do it if Apple and nVidia didn't buy all the available fab slots.
hello,
imho. (!)
i think this would be great!!
personally i totally understood why AMD gave up on its last attempt - the A1100 opterons - about 10 years ago in favor of the back then new ryzen architecture:
* https://en.wikipedia.org/wiki/List_of_AMD_Opteron_processors...
but what i would really like to see: an ARM soc/apu on an "open"*) (!) hardware-platform similar to the existing amd64 pc hardware.
*) "open" as in: i'm able to boot whatever (vanilla) arm64 linux-distribution or other OS i want ...
i have to add: i'm personally offended by the amount of tinkering of the firmware/boot-process which is necessary to get for example the raspberry pi 5 (or 4) to boot vanilla debian/arm64 ... ;)
br, a..z
ps. even if its a bit o.T. in this context, as a reminder a link to a slightly older article about an interview with jim keller about how ISA no longer matters that much ...
"ARM or x86? ISA Doesn’t Matter"
* https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
> "ARM or x86? ISA Doesn’t Matter"
> * https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
Some people, for some strange reason, want to endlessly relitigate the old 1980s RISC vs CISC flamewars. Jim Keller's interview above is a good antidote for that. Yes, RISC vs CISC matters for something like a simple in-order core you might see in embedded systems. For a big OoO core, much less so.
That doesn't mean you'd end up with x86 if you'd design a clean sheet 'best practices' ISA today. Probably it would indeed look something like aarch64 or RISC-V. So certainly in that sense RISC won. But the win isn't so overwhelming that it overcomes the value of the x86 software ecosystem in the markets where x86 plays.
You would also get rid of all the 8/16-bit shenanigans still somewhat present.
Intel had a project doing that a few years ago, called X86S. It was killed after industry opposition.
I don't think I'm using x86 for anything anymore. All the PC's in my home are ARM, the phones are ARM, the TV's are ARM and even the webservers I'm running are ARM nowadays.
You do not have a laptop? Or is it a Mac?
fingers crossed it'll eventually get a framework board
I always wonder why nobody has ever released a Framework mainboard with a Rockchip. There is even one with a (very) slow RISC-V chip for OS developers, FFS.
Someone could plan to make a RK3688 laptop board for when the chip is released.
It would have to be integrated with other components in the laptop. The RISC-V mainboard was not made by DeepComputing alone. For Framework it was intended as a pilot study in these kinds of partnerships.
Even a framework carrier board for pi-style compute modules. There's a few third parties that are close to drop-in replacements.
More speculation?
If it was ordered by Microsoft and paid by Microsoft to be developed, fine.
But, wouldn't it make more sense for amd to go into risc-v at this point of time?
There are two predominant architectures right now (right or wrong), amd64 and arm64. Why the F would AMD invest in RISC-V when their GPUs are well above Intel's in specs? And explain the business case for RISC-V...
So they don't have to pay for ARM licensing and have a chance to compete with the upcoming cheap and fast Chinese RISCV CPUs.
> Memory support is another highlight: the chip integrates a 128-bit LPDDR5X-9600 controller and will reportedly include 16 GB of onboard RAM, aligning with current trends in unified memory designs used in ARM SoCs. Additionally, the APU carries AMD’s fourth-generation AI engine, enabling on-device inference tasks
128-bit LPDDR5X-9600 is about 150 GB/s, that's 50% better than an Orin NX. If they can sell these things for less than like $500 then it would be a pretty decent deal for edge inference. 16 GB is ridiculously tiny for the use case though when it's actually more like 15 in practice and the OS and other stuff then takes another two or three, leaving you with like 12 maybe. Hopefully there's a 32 GB model eventually...
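Back-of-envelope check of that figure, assuming the reported 128-bit bus and 9600 MT/s are accurate:

    /* 128-bit bus = 16 bytes per transfer; LPDDR5X-9600 = 9.6 GT/s. */
    #include <stdio.h>

    int main(void) {
        double bytes_per_transfer = 128.0 / 8.0;   /* 16 bytes across the bus */
        double transfers_per_sec  = 9600e6;        /* 9.6 GT/s */
        printf("peak %.1f GB/s\n",
               bytes_per_transfer * transfers_per_sec / 1e9);
        /* prints: peak 153.6 GB/s -- roughly the "about 150 GB/s" quoted above */
        return 0;
    }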
Wow. This could really be a big deal, especially if it’s more of an openly available product than what Qualcomm has on offer.
For me personally I’d love it if this made it to a framework mainboard. I wouldn’t even mind the soldered memory, I understand the technical tradeoff there.
I'm curious what operating system will this run. Linux, Android, Windows?
Hardware shouldn't be limited to a single operating system.
> The chip is expected to power future Microsoft Surface products scheduled for release in 2026.
It looks like it is intended to run Windows Arm.
A long time ago, Intel predicted ARM wouldn't be a big deal and sold XScale to Marvell.
XScale couldn't succeed due to Intel company politics anyway. Intel simply can't do anything other than x86 and they shouldn't try.
It's only a big deal because of x86 licensing.
Now imagine the people who have written x86/x64 assembly desktop apps, or inline assembly in native code.
They will be very happy.
They should move to risc-v instead.
That will probably happen eventually, but right now RISC-V only has the hp for embedded or peripheral uses. It will continue to nip at ARM’s heels for the next 5-10 years.
RVA23 chips will definitely be seen next year.
So far, Tenstorrent promises Ascalon devboards for 2026Q2.
Performance should be similar to, if not above, AMD Zen 2 or Apple M1.
By 2027, I expect there will be no gap left to close.
Seems there's an awful lot of slideware in the high-performance RISC-V department. We'll see when/if the rubber hits the road, I suppose.
Longer term, I think the future looks bright for RISC-V. If nothing else, at least the Chinese are investing heavily into it, for obvious reasons wrt avoiding sanctions and such.
And for self-hosting and things like NAS, risc-v is already here.