Astute Group is now a founder member of Axelera AI’s Partner Accelerator Network, following a global distribution agreement. This partnership allows Astute Group to distribute Axelera AI’s high-performance, cost-effective edge AI inference solutions, which are ideal for applications like object detection, facial recognition, and industrial automation.
Stevenage, UK – Astute Group has signed a global distribution agreement with Netherlands-based silicon pioneer Axelera AI, becoming a founder member of the Axelera AI Partner Accelerator Network. This strategic partnership enables Astute Group to offer Axelera AI’s edge AI inference hardware and software platforms for building deep-learning applications at the edge.
Under this agreement, Astute Group will leverage its extensive logistics network, deep technical expertise, and locally held inventory in key global hubs to accelerate access to Axelera AI’s breakthrough technology. Customers in each region will benefit from local technical support and streamlined procurement.
“We are incredibly enthusiastic about taking Axelera AI’s trailblazing, deep-learning edge AI technology to our customers. Our partnership will empower system integrators, OEMs, and developers across Europe to deploy next-generation AI applications faster and more cost-effectively,” said Chris Withers, Technical BDM, Semiconductors, at Astute Group. “AI models for object detection, facial recognition, and industrial automation are now firmly established as essential components of our future — Axelera AI is a name to watch. We’re confident they’ll significantly disrupt the AI market with superior alternatives to current offerings.”
“At Axelera AI, we believe the most effective compute solutions are born from strong partnerships,” said John Wilkins, Director of Channel at Axelera AI. “We’re pleased to welcome Astute to our Partner Accelerator Network. Their deep technical expertise and industry insight, combined with our high-performance, energy-efficient AI systems, will help customers unlock powerful, real-time analytics with greater accuracy and speed.”
AI inference at the edge is the process of running a trained AI model directly on a local device, near the data source, rather than in the cloud. This is vital for applications demanding real-time results, such as instant object detection or facial recognition, where processing data locally cuts response times from seconds to milliseconds. Performing inference at the edge on dedicated hardware can also be significantly more power-efficient and cost-effective than relying on power-hungry general-purpose processors or cloud infrastructure for every task.
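For readers unfamiliar with the workflow, the sketch below shows the general pattern of running a trained model locally using the open-source ONNX Runtime. It is a generic illustration only, not Axelera AI’s own tooling, and the model file name and input shape are placeholder assumptions.

```python
# Generic sketch of local (edge) inference with ONNX Runtime -- not Axelera-specific.
# Assumes a pre-trained detection model exported to "model.onnx" and a
# 1x3x640x640 float input; both are illustrative placeholders.
import numpy as np
import onnxruntime as ort

# Load the trained model onto the local device -- no cloud round-trip needed.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# In a real deployment this frame would come from a local camera or sensor.
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Run inference on-device; latency depends on the local hardware, not the network.
outputs = session.run(None, {input_name: frame})
print("Output tensor shape:", outputs[0].shape)
```

On a dedicated edge accelerator, the same pattern applies, but the vendor’s runtime and execution provider replace the CPU provider shown here.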