Summary:

- **Breakthrough Technique:** Researchers at BitEnergy AI have introduced Linear-Complexity Multiplication (L-Mul), a method that can reduce AI model energy consumption by up to 95%.
- **How It Works:** L-Mul replaces costly floating-point multiplications with simpler integer additions, making calculations faster and more energy-efficient without losing accuracy.
- **Significant Energy Savings:** With L-Mul, the energy cost of floating-point tensor multiplications can drop by 95%, and the energy cost of dot products by 80%.
- **Minimal Performance Trade-off:** Tests across various AI tasks show only a 0.07% average performance drop, indicating that the energy savings come at a negligible cost in accuracy.
- **Hardware Requirements:** To fully exploit L-Mul's benefits, specialized hardware is necessary. Plans to develop hardware that natively supports L-Mul calculations are underway.
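The "How It Works" point can be sketched numerically: a floating-point number is (1 + m) · 2^e, so an exact product needs the mantissa product m_x · m_y, while L-Mul drops that term and adds a small constant correction instead, leaving only additions. The snippet below is a minimal float-level illustration of this idea, not the paper's bit-level integer-adder design; the function name `l_mul` and the `offset_bits` parameter are illustrative assumptions.

```python
import math

def l_mul(x: float, y: float, offset_bits: int = 4) -> float:
    """Approximate x * y in the spirit of L-Mul: replace the mantissa
    product with a constant correction term (illustrative sketch only)."""
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    # frexp gives m in [0.5, 1); convert to the IEEE-style 1.f form in [1, 2)
    mx, ex = math.frexp(abs(x))
    my, ey = math.frexp(abs(y))
    fx, fy = 2.0 * mx - 1.0, 2.0 * my - 1.0  # fractional mantissas in [0, 1)
    ex, ey = ex - 1, ey - 1                  # matching exponents
    # Exact: (1 + fx) * (1 + fy) = 1 + fx + fy + fx * fy
    # L-Mul: drop the fx * fy product, add a fixed 2**-offset_bits instead,
    # so only additions remain (mantissas added, exponents added).
    approx = 1.0 + fx + fy + 2.0 ** -offset_bits
    return sign * approx * 2.0 ** (ex + ey)

# Example: l_mul(3.0, 5.0) yields 14.5 versus the exact 15.0,
# an error of about 3%, with no mantissa multiplication performed.
```

In real hardware the mantissas and exponents would live in integer registers, so the whole operation reduces to integer adds; this Python version only mimics the arithmetic to show why the approximation stays close.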
Read more at: Decrypt | arXiv Research Paper