As Generative AI (GenAI) continues to revolutionize industries, the demand for efficient, high-performance computing platforms has increased. The Versal AI Edge VE2302 System on Module (SoM) emerges as a powerful solution to accelerate Large Language Models (LLMs) at the edge and in data centres. Leveraging the adaptability of AMD Versal Adaptive SoCs, this SoM provides an optimal balance of compute efficiency, power optimization, and AI acceleration, making it an ideal choice for next-generation AI applications.
iWave is thrilled to collaborate with RaiderChip to drive innovation in AI acceleration, delivering high-performance GenAI LLM acceleration that seamlessly integrates hardware and software to bring AI intelligence to edge devices. iWave specializes in the design and manufacture of Versal FPGA System on Modules.

Running RaiderChip’s Meta Llama 3.2 LLM on the iW-RainboW-G57M Versal VE2302 System on Module, the solution achieves an impressive 12 tokens per second, delivering fast and efficient AI performance. What makes this solution truly versatile is its ability to integrate easily into any edge device, making it ideal for applications like chatbots, predictive analytics, and real-time AI assistants.
Experience iWave’s demo of the GenAI NPU Edge LLM on the Versal AI Edge VE2302 SoM in action. This efficient chatbot delivers responses at 12 tokens per second, ensuring fast, real-time interactions. Designed for ease of use, it seamlessly handles AI-driven conversations with minimal latency. Whether for customer support, automation, or AI research, this solution showcases the power of on-device LLM acceleration.
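For context, the quoted 12 tokens per second translates into per-token latency as simple arithmetic shows below (the 50-token reply length is an assumption for illustration, not a figure from the demo):

```python
# Back-of-the-envelope latency figures for a 12 tokens/s decoder.
# Only the 12 tokens/s rate comes from the demo; the reply length is assumed.
TOKENS_PER_SECOND = 12

ms_per_token = 1000 / TOKENS_PER_SECOND           # time between emitted tokens
response_tokens = 50                              # assumed short chatbot reply
response_seconds = response_tokens / TOKENS_PER_SECOND

print(f"{ms_per_token:.1f} ms/token")                            # 83.3 ms/token
print(f"{response_seconds:.1f} s for {response_tokens} tokens")  # 4.2 s for 50 tokens
```

At roughly 83 ms between tokens, text appears faster than most people read, which is what makes the interaction feel real-time.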

GenAI NPU Edge LLM demo on iWave’s Versal AI Edge VE2302 System on Module

Key Technical Features of iW-RainboW-G57M Versal System on Module:
- Compatible with VE2302/VE2202/VE2102/VE2002
- Up to 328K Logic cells & 150K LUTs
- 8 x GTYP Transceivers @ 32Gbps
- Up to 8GB LPDDR4, up to 128GB eMMC, 256MB QSPI
- 2 x 240-pin High-Speed Connectors
- Connectivity: PCIe Gen4, Ethernet, USB 3.0
The Versal AI Edge based System on Module is compatible with an extensive series of chips: VE2302/VE2202/VE2102/VE2002. The module integrates up to 8GB LPDDR4 RAM, up to 128GB eMMC, and 256MB QSPI flash. Two high-speed expansion connectors and 122 user-configurable I/Os expose a wide range of interfaces to the user.
The SoM supports a breadth of connectivity options, including 28.21Gbps high-speed transceiver blocks covering the protocols needed in edge applications, 40G multi-rate Ethernet, PCIe, and native MIPI support for vision sensors, a must for advanced AI applications.
What makes Versal AI Edge ideal for AI applications:
- AI Engine-ML tiles optimized for high-performance, low-latency inference
- Native support for ML data types: INT8, BFLOAT16
- 4 MB on-chip accelerator RAM extends memory hierarchy for AI performance
- AI Engines and DSP Engines for AI, vision processing, radar & LiDAR processing
- Programmable I/O to integrate any sensor, any interface
- Programmable Logic for sensor fusion and pre-processing
- Processing System for embedded compute and real-time control
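Native INT8 support matters because LLM weights are typically quantized before deployment to fit on-chip memory and the NPU's integer data path. A minimal sketch of symmetric INT8 quantization (plain Python with illustrative values; these function names are not part of any Versal or Vitis API):

```python
# Symmetric INT8 quantization: the kind of weight compression that lets
# an INT8 NPU data path hold and process LLM weights efficiently.
# All names and values here are illustrative.

def quantize_int8(weights):
    """Map float weights into the signed 8-bit range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from INT8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.02, 1.0]
codes, scale = quantize_int8(weights)
print(codes)                     # [50, -127, 2, 100]
print(dequantize(codes, scale))  # close to the original weights
```

Each weight shrinks from 32 bits to 8, a 4x memory saving, at the cost of a small rounding error that well-trained models tolerate.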
Versal AI Edge SoM enabling Software Stack for GenAI:
- Vitis AI Toolkit: Provides pre-optimized AI libraries, model compression tools, and a development environment to accelerate AI models on the Versal AI Edge SoM.
- ONNX Runtime & TensorFlow Lite: These frameworks enable easy deployment of AI models on the Versal platform.
- PyTorch Integration: A popular framework for deep learning and Generative AI applications, allowing developers to accelerate their Transformer-based models using FPGA-based execution.
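Whichever framework is used, LLM deployment ultimately drives an autoregressive decoding loop that emits one token at a time. A toy sketch in plain Python (the hard-coded `toy_model` stands in for a real Transformer forward pass; every name here is illustrative):

```python
# Toy sketch of the autoregressive decoding loop that any LLM runtime
# (ONNX Runtime, TensorFlow Lite, PyTorch, or an FPGA NPU) executes.

def toy_model(tokens):
    """Stand-in for a Transformer forward pass: returns next-token scores."""
    if tokens[-1] == "hello":
        return {"<eos>": 0.1, "hello": 0.2, "world": 0.7}
    if tokens[-1] == "world":
        return {"<eos>": 0.9, "hello": 0.05, "world": 0.05}
    return {"<eos>": 0.1, "hello": 0.8, "world": 0.1}

def generate(prompt, max_tokens=8):
    tokens = list(prompt)
    for _ in range(max_tokens):
        scores = toy_model(tokens)
        next_token = max(scores, key=scores.get)  # greedy decoding
        if next_token == "<eos>":                 # model chose to stop
            break
        tokens.append(next_token)
    return tokens

print(generate(["<bos>"]))  # ['<bos>', 'hello', 'world']
```

Each loop iteration is one full model inference, which is why the tokens-per-second figure above directly measures how fast the accelerator runs the model.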
Key Advantages of RaiderChip’s Edge AI Solution:
- Complete Privacy and Control: Harness the power of the most complex LLMs locally, with no reliance on cloud services or third-party monitoring.
- Offline Functionality: Enjoy seamless operation in remote areas with no need for internet access or ongoing subscription fees.
- Customizable Models: Choose from commercially licensed and open-source LLMs, like Llama, or deploy customized, fine-tuned models for specific tasks.
- Power Efficiency: NPU-powered inference engine enables the deployment of generative AI systems that can function off-the-grid.
Advanced possibilities with Generative AI:
- Smart Assistants: Creation of intelligent assistants capable of performing countless tasks across diverse fields such as technical support, medical diagnostics, education, customer service, and automotive applications.
- Universal User Interface: Enabling smarter interactions between users and electronic devices ranging from large industrial machinery to small household appliances through natural language.
- Advanced Predictors: Generative AI represents the pinnacle of prediction capabilities, identifying patterns and trends with greater accuracy than traditional machine learning algorithms. This has transformative applications across various sectors, including security, aviation, defense, industry, and finance.
The iW-RainboW-G57M Versal AI Edge System on Module, integrated with GenAI LLM acceleration, delivers the AI performance, low latency, and power-efficient processing required to power next-generation NLP, conversational AI, and real-time GenAI workloads at the edge and in data centres. iWave provides a robust suite of tools, libraries, and software resources that empower developers to harness the full potential of the Versal AI Edge System on Modules.
iWave is a global leader in the design and manufacturing of FPGA System on Modules and ODM Design Services. With over 25 years of diverse experience in the FPGA domain and a strong design-to-deployment competence, iWave strives to transform your ideas into time-to-market products with reliability, cost, and performance balance.
Looking for more insights? Contact us at mktg@iwave-global.com to explore how the Versal AI Edge SoM can revolutionize your AI solutions!
For more information visit www.iwave-global.com.