The report finds 13.3 million NPU PCs were shipped in Q3, but how many business users need that kind of horsepower? And is it even enough for high-end LLM development?

One out of every five PCs shipped in the third quarter of 2024, a total of 13.3 million units, was a PC with a neural processing unit (NPU) fine-tuned for generative AI (genAI) development, according to data published Wednesday by analyst firm Canalys. The firm anticipates a rapid rise in shipments of these AI-capable PCs, surging to 60% of units shipped by 2027, with a strong focus on the commercial sector. Such machines typically house dedicated chipsets, including AMD’s XDNA, Apple’s Neural Engine, Intel’s AI Boost, and Qualcomm’s Hexagon, Canalys said in a statement.

“Copilot+ PCs equipped with Snapdragon X series chips enjoyed their first full quarter of availability, while AMD brought Ryzen AI 300 products to the market, and Intel officially launched its Lunar Lake series,” said Ishan Dutt, principal analyst at Canalys. “However, both x86 chipset vendors are still awaiting Copilot+ PC support for their offerings from Microsoft, which is expected to arrive [in November].”

Dutt added that there is still resistance to purchasing AI PCs from both key end-user companies and channel players. “This is especially true for more premium offerings such as Copilot+ PCs, which Microsoft requires to have at least 40 NPU TOPS [trillion operations per second], alongside other hardware specifications,” Dutt said. “A November poll of channel partners revealed that 31% do not plan to sell Copilot+ PCs in 2025, while a further 34% expect such devices to account for less than 10% of their PC sales next year.”

Canalys labels the machines “AI-capable PCs,” which is baffling, given that AI has been around for many decades and can, and has, run on all manner of PCs. Someone merely accessing data from an LLM wouldn’t need that level of horsepower.
That horsepower would only be needed by the engineers and LLM developers who build those data-intensive systems. But such PCs wouldn’t necessarily make sense for most of those LLM developers either, said George Sidman, CEO of security firm TrustWrx. Most developers writing LLM applications at that level would be accessing high-end specialized servers, Sidman said.

“The PC has very little role. You would be running this in a large data center. These things are blocks long,” Sidman said. “You have got to look at the real-world issues. With a huge multi-petabyte system behind it, well, you need that for the LLM to be effective.”

Canalys disagreed. “With the use of AI models set to increase exponentially, associated costs to organizations from accessing cloud resources will ramp up significantly,” the firm said in its report. “Moving some workloads to AI-capable PCs will help mitigate this, and allow businesses to optimize their use of AI tools according to their budgets.”

Regardless, would such souped-up PCs deliver better overall performance? Yes, Sidman said, but the better question is whether the typical business user would even notice the difference, given the speeds of today’s routine business desktops. “Will it improve some performance on the PC? Probably, but it won’t get them anything concrete,” Sidman said.