Qualcomm announced on Monday the launch of its new artificial intelligence (AI) chips, the AI200 and AI250, with which it seeks to compete with other companies in the sector such as Nvidia and AMD. The company's shares rose 11.09% on Wall Street.

«Building on the company's leadership in NPU technology, these solutions deliver rack-scale performance and superior memory capacity for fast generative AI inference at high performance per dollar per watt, which represents a breakthrough for scalable, efficient and flexible generative AI across all sectors,» the American company explains in a statement.

The AI200 chip offers «a rack-level AI inference solution designed specifically to offer low total cost of ownership (TCO) and optimized performance for inference of large language and multimodal models (LLMs, LMMs) and other AI workloads.» These features allow, according to the company, «exceptional scalability and flexibility for AI inference.»

For its part, the AI250 will debut with an architecture based on near-memory computing, which represents «a generational leap in efficiency and performance for AI inference workloads, delivering more than 10 times greater effective memory bandwidth and much lower power consumption.» The firm states that «this enables disaggregated AI inference for efficient use of hardware, while meeting customer performance and cost requirements.»

«Both rack solutions incorporate direct liquid cooling for higher thermal efficiency, PCIe for vertical scaling, Ethernet for horizontal scaling, confidential computing for secure AI workloads and a rack-level power consumption of 160 kW,» the company adds.
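The 160 kW rack figure gives a way to sanity-check the TCO framing. Below is a minimal back-of-the-envelope sketch of the energy component of a rack's operating cost; the electricity price and utilization rate are illustrative assumptions, not figures from the company's statement.

```python
# Energy component of rack operating cost.
# The electricity price and utilization are illustrative assumptions;
# only the 160 kW rack power figure comes from the announcement.

RACK_POWER_KW = 160.0    # rack-level power consumption per the statement
PRICE_PER_KWH = 0.10     # assumed industrial rate, USD/kWh (hypothetical)
UTILIZATION = 0.8        # assumed average load factor (hypothetical)

HOURS_PER_YEAR = 24 * 365

annual_kwh = RACK_POWER_KW * UTILIZATION * HOURS_PER_YEAR
annual_cost = annual_kwh * PRICE_PER_KWH

print(f"annual energy use:  {annual_kwh:,.0f} kWh")
print(f"annual energy cost: ${annual_cost:,.0f}")
```

At these assumed rates, a single rack consumes on the order of $100,000 of electricity per year, which helps explain why the company leads with performance per dollar per watt rather than raw performance.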
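The AI250's memory-bandwidth claim can be put in context the same way. Autoregressive LLM inference is typically memory-bound: a rough upper bound on decode throughput is effective memory bandwidth divided by the bytes streamed per generated token. The sketch below illustrates that relationship; the model size, bytes per weight, and baseline bandwidth are hypothetical, and the 10x multiplier is simply the company's «more than 10 times» claim applied to that assumed baseline.

```python
# Back-of-the-envelope estimate of memory-bound LLM decode throughput.
# All numeric inputs below are illustrative assumptions, not Qualcomm specs.

def decode_tokens_per_second(model_params_b: float,
                             bytes_per_param: float,
                             effective_bandwidth_gbs: float) -> float:
    """Upper bound on tokens/s when each generated token requires
    streaming the full set of model weights from memory."""
    bytes_per_token = model_params_b * 1e9 * bytes_per_param
    return effective_bandwidth_gbs * 1e9 / bytes_per_token

# Hypothetical 70B-parameter model quantized to 1 byte per weight.
MODEL_B = 70.0
BYTES_PER_WEIGHT = 1.0

baseline_bw = 500.0                 # assumed baseline bandwidth, GB/s
near_memory_bw = 10 * baseline_bw   # the "more than 10x" claim, applied to the assumed baseline

print(f"baseline:    {decode_tokens_per_second(MODEL_B, BYTES_PER_WEIGHT, baseline_bw):.1f} tokens/s")
print(f"near-memory: {decode_tokens_per_second(MODEL_B, BYTES_PER_WEIGHT, near_memory_bw):.1f} tokens/s")
```

Because this bound scales linearly with bandwidth, a genuine 10x gain in effective memory bandwidth would translate almost directly into 10x decode throughput for memory-bound workloads, which is why the company frames the near-memory architecture as a generational leap.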