Using chip memory more efficiently

Friday, July 7, 2017 - 11:02

For decades, computer chips have increased efficiency by using “caches,” small, local memory banks that store frequently used data and cut down on time- and energy-consuming communication with off-chip memory. Today’s chips generally have three or even four different levels of cache, each of which is more capacious but slower than the last. The sizes of the caches represent a compromise between the needs of different kinds of programs, but it’s rare that they’re exactly suited to any one program.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed a system that reallocates cache access on the fly, to create new “cache hierarchies” tailored to the needs of particular programs.

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that, compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30...
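
To make the idea of a multi-level cache hierarchy concrete, here is a minimal Python sketch. It is not the MIT researchers' system: the level sizes, latencies, and the access trace are made-up parameters chosen only to illustrate how an access walks the levels from fastest to slowest and why different hierarchy shapes suit different programs.

```python
# Toy model of a multi-level cache hierarchy with LRU levels.
# Illustrative only: capacities and latencies below are invented,
# and a real reconfigurable design would repartition physical cache
# banks rather than Python dictionaries.

from collections import OrderedDict


class CacheLevel:
    def __init__(self, capacity, hit_latency):
        self.capacity = capacity          # number of cache lines this level holds
        self.hit_latency = hit_latency    # cycles charged for probing this level
        self.lines = OrderedDict()        # LRU order: oldest entry first

    def access(self, addr):
        """Return True on a hit (refreshing LRU position), False on a miss."""
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return True
        return False

    def insert(self, addr):
        """Insert a line, evicting the least recently used one if full."""
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)
        self.lines[addr] = True


class Hierarchy:
    def __init__(self, levels, memory_latency):
        self.levels = levels                  # ordered list: fastest level first
        self.memory_latency = memory_latency  # cycles for an off-chip access

    def access(self, addr):
        """Walk the levels in order; return total cycles spent on this access."""
        cycles = 0
        for i, level in enumerate(self.levels):
            cycles += level.hit_latency
            if level.access(addr):
                # Fill the line into the faster levels above the hit.
                for upper in self.levels[:i]:
                    upper.insert(addr)
                return cycles
        # Missed everywhere: pay the off-chip penalty, then fill every level.
        cycles += self.memory_latency
        for level in self.levels:
            level.insert(addr)
        return cycles


# Two hypothetical hierarchies "tailored" to different workloads:
# one with a deep three-level structure, one with a flatter two-level one.
deep = Hierarchy([CacheLevel(64, 2), CacheLevel(512, 10), CacheLevel(4096, 30)], 200)
flat = Hierarchy([CacheLevel(64, 2), CacheLevel(1024, 12)], 200)

trace = [i % 300 for i in range(10_000)]   # a toy access pattern
print("deep hierarchy cycles:", sum(deep.access(a) for a in trace))
print("flat hierarchy cycles:", sum(flat.access(a) for a in trace))
```

Running the sketch on the toy trace shows the same sequence of accesses costing different numbers of cycles under the two configurations; choosing the better-shaped hierarchy for each program is the kind of trade-off the MIT system is described as making automatically.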

