Faster computer graphics

Monday, June 13, 2011 - 03:30 in Physics & Chemistry

Photographs of moving objects are almost always a little blurry — or a lot blurry, if the objects are moving rapidly enough. To make their work look as much like conventional film as possible, game and movie animators try to reproduce this blur. But counterintuitively, producing blurry images is actually more computationally complex than producing perfectly sharp ones.

In August, at this year’s Siggraph conference — the premier computer-graphics conference — researchers from the Computer Graphics Group at MIT’s Computer Science and Artificial Intelligence Laboratory will present a pair of papers that describe new techniques for computing blur much more efficiently. The result could be more convincing video games and frames of digital video that take minutes rather than hours to render.

The image sensor in a digital camera, and even the film in a conventional camera, can be thought of as a grid of color detectors, each detector corresponding to...
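To see why blur costs more to compute than sharpness, consider the brute-force approach that the article's framing implies (this sketch is an illustration of temporal supersampling in general, not the MIT researchers' technique; the scene, speeds, and sample counts are made up for the example): a sharp frame samples the scene at one instant, while a blurred frame must average the scene at many instants across the exposure, just as a sensor accumulates light while the shutter is open.

```python
import numpy as np

WIDTH = 16          # a sensor row of 16 "detectors" (pixels); illustrative size
EXPOSURE = 1.0      # shutter-open duration, arbitrary units
SPEED = 8.0         # object speed in pixels per time unit (assumed value)

def render_instant(t):
    """Render the scene at a single instant: one bright point at SPEED * t."""
    row = np.zeros(WIDTH)
    pos = int(SPEED * t) % WIDTH
    row[pos] = 1.0
    return row

def render_blurred(num_samples):
    """Average num_samples instants across the exposure window.
    Each extra sample is a full render of the scene, which is why
    naive motion blur multiplies the cost of a sharp frame."""
    times = np.linspace(0.0, EXPOSURE, num_samples)
    return np.mean([render_instant(t) for t in times], axis=0)

sharp = render_instant(0.0)      # one render: a perfectly sharp point
blurred = render_blurred(32)     # 32 renders: the point smears into a streak

print("lit pixels, sharp:  ", int(np.count_nonzero(sharp)))
print("lit pixels, blurred:", int(np.count_nonzero(blurred)))
```

In this toy setup the sharp frame lights a single pixel, while the blurred frame spreads the same total energy over the streak the object traced during the exposure — at 32 times the rendering cost. Techniques like the ones the papers describe aim to get that streak without paying for every intermediate sample.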
