RAM That Not Only Stores Data but Also Computes

By merging storage and computation, the design removes the main bottleneck in traditional computer systems, making computing fast and efficient.

Today’s AI systems must process large sets of data, the kind used in image recognition, wireless communication, and scientific research. Traditional digital processors handle these tasks slowly and consume a lot of power because they must keep moving data between memory and the processor. This constant transfer creates a major bottleneck that limits both speed and efficiency.

A new analogue computing method aims to fix this. It allows the chip to do the calculations directly inside the memory itself, using electrical signals instead of digital steps. This removes the delay caused by data movement and makes the process faster and more energy-efficient. The research was published in Nature Electronics.
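The core idea of analogue in-memory computing can be sketched in a few lines. In a resistive crossbar, the stored matrix becomes an array of conductances and the input vector becomes a set of voltages; by Ohm's and Kirchhoff's laws, the output currents give the matrix-vector product in a single analog step. The snippet below is an illustrative simulation of that principle, not the paper's actual circuit; the noise level is an assumed placeholder.

```python
import numpy as np

# Illustrative sketch (not the paper's circuit): stored values act as
# conductances G, inputs as voltages V, and the output currents
# I = G @ V arrive in one analog step. Device noise limits precision,
# which is why digital-grade accuracy needs extra correction cycles.
rng = np.random.default_rng(0)

G = rng.uniform(0.1, 1.0, size=(4, 4))   # conductances: the stored matrix
V = rng.uniform(-1.0, 1.0, size=4)       # input voltages: the input vector

I_ideal = G @ V                                      # ideal analog result
I_analog = I_ideal + rng.normal(0.0, 0.01, size=4)   # assumed read noise

print(np.max(np.abs(I_analog - I_ideal)))  # small but nonzero error
```

The key point is that the multiply-accumulate happens in the physics of the array itself, so no data shuttles between memory and a separate processor.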

The researchers, from Peking University in China, used resistive memory chips, which can store and process data at the same time. The solver first runs a quick, low-precision calculation to get a rough answer. Then, over a few more cycles, it performs higher-precision corrections to refine the result. Together, these steps reach the same accuracy as a 32-bit digital processor while using much less energy.
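The rough-answer-then-refine loop described above resembles classical iterative refinement for linear systems. The sketch below illustrates that general pattern under assumed simplifications: a noisy inverse stands in for the low-precision analog pass, and a few residual-correction cycles recover high accuracy. The matrix size, noise level, and cycle count are placeholders, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned system
b = rng.standard_normal(n)

# Low-precision stand-in for the analog solver: the exact inverse
# perturbed by assumed "device" noise.
A_inv_rough = np.linalg.inv(A) + 1e-2 * rng.standard_normal((n, n))

x = A_inv_rough @ b                 # step 1: quick, rough answer
for _ in range(8):                  # step 2: a few correction cycles
    r = b - A @ x                   # residual, computed precisely
    x = x + A_inv_rough @ r         # reuse the rough solver on the residual

error = np.linalg.norm(A @ x - b)   # shrinks geometrically per cycle
```

Each cycle multiplies the remaining error by a factor smaller than one, so a handful of cheap, imprecise passes can match a high-precision digital solve.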

To handle bigger problems, the method splits a large matrix into smaller blocks that can be processed in parallel. This means multiple parts of the problem can be solved at once, reducing the total time. In lab tests, the system solved 16×16 matrix problems with 24-bit accuracy after just a few cycles.
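The block-splitting step can be shown concretely. In the sketch below, a 16×16 matrix-vector product is computed as four 8×8 block products; in hardware, each block could be mapped to its own analog tile and run in parallel, with partial results summed per row block. The tile size here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((16, 16))
x = rng.standard_normal(16)

blk = 8                       # assumed tile size
y = np.zeros(16)
for i in range(0, 16, blk):         # row blocks
    for j in range(0, 16, blk):     # column blocks
        # Each block-vector product is independent and could run on a
        # separate analog tile; partial results accumulate per row block.
        y[i:i + blk] += A[i:i + blk, j:j + blk] @ x[j:j + blk]

assert np.allclose(y, A @ x)  # block result matches the full product
```

Because the block products are independent, total latency is set by one tile's work plus the final summation, not by the full matrix size.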

The memory chips are made using standard semiconductor processes and can represent several data levels in each cell. This design keeps performance consistent and makes the chips easier to scale for real-world applications.
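Storing several levels per cell amounts to quantizing each value onto a small set of programmable conductances. The sketch below assumes, for illustration only, a cell with 8 evenly spaced levels (3 bits per device); the real chip's level count and spacing are not stated in the article.

```python
import numpy as np

# Hypothetical multi-level cell: 8 evenly spaced conductance levels,
# so one device stores 3 bits of a weight in [0, 1].
levels = np.linspace(0.0, 1.0, 8)

def store(value: float) -> float:
    """Write a value to the cell: snap it to the nearest level."""
    return float(levels[np.argmin(np.abs(levels - value))])

# Quantization error is at most half the level spacing, 1/14 here.
stored = store(0.3)
```

More levels per cell mean more bits per device, which is one reason the rough analog pass alone cannot reach 32-bit accuracy and refinement cycles are needed.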

In wireless communication tests, the analogue solver matched the accuracy of digital processors but finished the task much faster. Early estimates suggest it could deliver up to 1,000 times more processing power while using 100 times less energy.

This approach could help future AI hardware run faster, cooler, and cheaper by combining memory and processing in a single device.