Parallel programming may not be so daunting

Monday, March 24, 2014 - 03:30 in Mathematics & Economics

Computer chips have stopped getting faster: the regular performance improvements we’ve come to expect now come from chipmakers adding more cores, or processing units, to their chips, rather than from increasing clock speed.

In theory, doubling the number of cores doubles a chip’s throughput, but splitting up computations so that they run efficiently in parallel isn’t easy. On the other hand, say a trio of computer scientists from MIT, Israel’s Technion, and Microsoft Research, neither is it as hard as had been feared.

Commercial software developers writing programs for multicore chips frequently use so-called “lock-free” parallel algorithms, which are relatively easy to generate from standard sequential code; in many cases the conversion can be done automatically. Yet lock-free algorithms don’t come with very satisfying theoretical guarantees: all they promise is that at least one core will make progress on its computational task in a fixed span of time. ...

Read the whole article on MIT Research
