
Fujitsu Develops Generative AI Reconstruction Technology

1-bit quantization cuts memory consumption by 94% while achieving the world's highest accuracy retention rate of 89% and a 3x increase in inference speed

By Nimish | September 8, 2025 | AI/ML | Reading time: 5 mins

Kawasaki, Japan – Fujitsu today announced the development of a new reconstruction technology for generative AI. The new technology, positioned as a core component of the Fujitsu Kozuchi AI service, will strengthen the Fujitsu Takane LLM by enabling the creation of lightweight, power-efficient AI models.

Fujitsu’s new reconstruction technology is built upon two core advancements:

  1. Quantization: A technique that significantly compresses the information stored in the connections between neurons, which form the basis of an AI model’s “thought” process
  2. Specialized AI distillation: A world-first (1) method that simultaneously achieves both lightweighting and accuracy exceeding that of the original AI model

Applying 1-bit quantization technology to Takane has enabled a 94% reduction in memory consumption. This advancement has achieved the world’s highest accuracy retention rate of 89% (2) compared to the unquantized model, along with a 3x increase in inference speed. This significantly surpasses the accuracy retention rate of less than 20% typically achieved by conventional mainstream quantization methods like GPTQ. This breakthrough enables large generative AI models that previously required four high-end GPUs to run efficiently on a single low-end GPU.
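
For a rough sense of where a roughly 94% reduction can come from, the short sketch below compares a 16-bit weight matrix with a packed 1-bit representation that keeps one 16-bit scale per group of weights. The FP16 baseline, the group size of 128, and the 70B parameter count are illustrative assumptions, not figures from the announcement.

```python
# Back-of-the-envelope memory comparison for 1-bit weight quantization.
# Assumptions (illustrative only): FP16 baseline, packed 1-bit signs,
# one FP16 scale per group of 128 weights.

def fp16_bytes(n_params: int) -> int:
    return n_params * 2                          # 16 bits = 2 bytes per weight

def one_bit_bytes(n_params: int, group_size: int = 128) -> int:
    sign_bytes = n_params // 8                   # 1 bit per weight, packed
    scale_bytes = (n_params // group_size) * 2   # one FP16 scale per group
    return sign_bytes + scale_bytes

n = 70_000_000_000                               # e.g., a 70B-parameter model
base, quant = fp16_bytes(n), one_bit_bytes(n)
print(f"FP16: {base / 1e9:.0f} GB, 1-bit: {quant / 1e9:.1f} GB, "
      f"reduction: {100 * (1 - quant / base):.1f}%")
# Prints roughly a 93% reduction, in the same range as the 94% reported above;
# the exact figure depends on scale granularity and which layers stay unquantized.
```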

This level of lightweighting will enable the deployment of agentic AI on edge devices such as smartphones and factory machinery. This will lead to improved real-time responsiveness, enhanced data security, and a radical reduction in power consumption for AI operations, contributing to a sustainable AI society.

Fujitsu plans to sequentially offer customers worldwide trial environments for Takane with the quantization technology applied, starting in the second half of fiscal year 2025. Furthermore, Fujitsu will progressively release quantized models of Cohere’s research open-weight model Command A, available via Hugging Face (3) starting today.

Moving forward, Fujitsu will continue to advance research and development that significantly improves the capabilities of generative AI while ensuring its reliability, aiming to solve more complex challenges faced by customers and society, and to open new possibilities for generative AI utilization.

Technology details

Many tasks performed by AI agents require only a fraction of the general capabilities of an LLM. The newly developed generative AI reconstruction technology is inspired by the human brain’s ability to reconstruct itself, including by reorganizing neural circuits and specializing in specific skills in response to learning, experience, and environmental changes. It efficiently extracts only the knowledge necessary for specific tasks from a massive model with general knowledge, creating a specialized AI model that is lightweight, highly efficient, and reliable. This is achieved through the following two core technologies:

1. Quantization technology for streamlining AI “thought” and reducing power consumption:

– Parameter compression:

* Compresses the parameter information of generative AI models, reducing model size and power consumption while accelerating inference

– Quantization error solution:

* Previous challenge: quantization errors accumulate exponentially as they pass through the many layers of neural networks such as LLMs
* Fujitsu’s solution: a new quantization algorithm based on quantization error propagation
* Grounded in theoretical insights, the error propagation technique carries error information across layers to keep it from growing (a simplified sketch follows this list)

– 1-Bit quantization achievement:

* 1-bit LLM quantization achieved via Fujitsu’s proprietary, world-leading optimization algorithm for large-scale problems
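
The announcement does not disclose the algorithm itself, but the sketch below illustrates the general idea behind propagating quantization error across layers: rather than quantizing each layer in isolation, each layer is re-fit against its full-precision targets using the activations that the already-quantized upstream layers actually produce, so error is compensated instead of compounding. Everything here (sign quantization with per-row scales, a least-squares correction, a linear-only toy network) is an illustrative assumption, not Fujitsu’s method.

```python
import numpy as np

def one_bit_quantize(w: np.ndarray) -> np.ndarray:
    """Sign-based 1-bit quantization with one scale per output row (illustrative)."""
    scale = np.abs(w).mean(axis=1, keepdims=True)
    return np.sign(w) * scale                      # dequantized 1-bit weights

def quantize_with_error_propagation(layers, calib_x):
    """Quantize a stack of linear layers in order. Each layer is re-fit so that,
    given the *perturbed* activations from the quantized layers before it, it
    still reproduces the full-precision outputs. Biases and nonlinearities are
    omitted for brevity."""
    x_full, x_quant = calib_x, calib_x             # clean vs. quantized activations
    quantized = []
    for w in layers:                               # each w has shape (out_dim, in_dim)
        target = x_full @ w.T                      # what this layer should output
        # Least-squares correction that absorbs upstream quantization error.
        w_adj, *_ = np.linalg.lstsq(x_quant, target, rcond=None)
        w_q = one_bit_quantize(w_adj.T)
        quantized.append(w_q)
        x_full, x_quant = target, x_quant @ w_q.T  # propagate both activation paths
    return quantized

# Tiny synthetic demo: three 256x256 layers, 512 calibration samples.
rng = np.random.default_rng(0)
demo_layers = [rng.standard_normal((256, 256)) / 16 for _ in range(3)]
demo_calib = rng.standard_normal((512, 256))
q_layers = quantize_with_error_propagation(demo_layers, demo_calib)
```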

2. Specialized distillation for condensing expertise and improving accuracy:

– Brain-inspired optimization:

* AI model structure optimization, mirroring brain processes of knowledge strengthening and memory organization

– Model generation & selection:

* Foundational AI model modification: pruning (removal of unnecessary knowledge) and addition of transformer blocks (to impart new capabilities)
* Diverse candidate model generation
* Optimal model selection: Neural Architecture Search (NAS) with Fujitsu’s proxy technology, balancing customer requirements (GPU resources, speed) and accuracy

– Knowledge distillation:

* Knowledge transfer from teacher models (e.g., Takane) into the selected structural model (a minimal distillation-loss sketch follows the results below)

– Beyond compression:

* Model compression with enhanced specialized task accuracy, surpassing foundational generative AI model performance

– Demonstrated results (sales negotiation prediction):

* Text QA task (sales negotiation outcome prediction, Fujitsu CRM data):
  – 11x inference speed increase
  – 43% accuracy improvement
* Student model (1/100th parameter size) achieved higher accuracy than teacher model
* 70% reduction in GPU memory and operational costs
* Enhanced sales negotiation outcome prediction reliability

– Demonstrated results (image recognition):

* 10% improvement in unseen object detection accuracy (4) over existing distillation techniques
* Significant breakthrough: more than three times the accuracy improvement achieved in this domain over the past two years
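
Fujitsu has not published its distillation recipe, but the knowledge-transfer step described above is conventionally implemented as a blend of a soft-target loss (matching the teacher’s output distribution) and the ordinary hard-label loss on the specialized task. The PyTorch-style sketch below shows that standard formulation; the temperature, the loss weight, and the teacher/student roles are illustrative assumptions, not details from the release.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend a soft-target KL term (match the teacher, e.g. a Takane-class model)
    with the hard-label loss on the specialized task. Hyperparameters are illustrative."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                     # standard temperature scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Training-step outline: teacher frozen, pruned/NAS-selected student being trained.
# for batch_inputs, batch_labels in task_loader:          # hypothetical data loader
#     with torch.no_grad():
#         t_logits = teacher(batch_inputs)
#     s_logits = student(batch_inputs)
#     loss = distillation_loss(s_logits, t_logits, batch_labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```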

For more details, please visit: 
https://global.fujitsu/-/media/Project/Fujitsu/Fujitsu-HQ/pr/news/2025/09/08-01-en.pdf

Future plans

Moving forward, Fujitsu will further enhance Takane by leveraging this technology, empowering customers’ business transformations. Future plans include lightweight, specialized Takane-derived agentic AI for finance, manufacturing, healthcare, and retail. Further advancements aim to reduce model memory to as little as 1/1000th of the original while maintaining accuracy, enabling ubiquitous high-precision, high-speed generative AI. Ultimately, specialized Takane models will evolve into advanced AI agent architectures capable of deeper world understanding and autonomous solving of complex problems.

Tags: Fujitsu, Generative AI, Takane LLM