Images for 3D Video Games
|The stretched out graphics in the top image disappear when the new algorithms from UC San Diego are used to generate high quality images for 3D video games.|
The advance is being presented this week at one of the most prestigious computer graphics conferences in the world, ACM’s SIGGRAPH 2008.
“It should be pretty easy for video game developers to integrate our research into new games. As a game developer myself, I know firsthand that stretched out and flickering backgrounds and details are no longer acceptable in 3D video games,” said Alex Goldberg, the computer science undergraduate from UC San Diego’s Jacobs School of Engineering who did much of the work. Computer science professors Matthias Zwicker from UC San Diego and Frédo Durand from the Massachusetts Institute of Technology also contributed to the research project.
“People are looking for ways to get rid of these distortions, preferably without having to pay artists to generate background and detail images by hand. We have come up with a way to do this, and we are planning to provide code for download soon,” explained Goldberg, who recently graduated from UC San Diego and is now working for San Diego video game studio PixelActive Inc.
The 2008 SIGGRAPH paper marks an important improvement over Perlin noise, an established technique in which small computer programs create many layers of noise that are piled on top of each other. The layers are then manipulated -- like layers of paint on a canvas -- in order to develop detailed and realistic textures such as rock, soil, cloud, water and marble that serve as background images and details for 3D video games.
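The layering idea behind Perlin-style noise can be sketched in a few lines of Python. This is a minimal hash-based value noise with a fractal sum of octaves, intended only to illustrate how layers at doubling frequency and halving amplitude pile up into a texture; it is not the authors' implementation, and the hash constants are arbitrary:

```python
import math

def hash2(ix, iy):
    # Deterministic pseudo-random value in [0, 1) from integer lattice coords.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0x100000000

def smoothstep(t):
    # Smooth interpolation weight, so the noise has no visible grid seams.
    return t * t * (3 - 2 * t)

def value_noise(x, y):
    # One "layer" of noise: smoothly interpolate random lattice values.
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    tx, ty = smoothstep(fx), smoothstep(fy)
    a, b = hash2(ix, iy), hash2(ix + 1, iy)
    c, d = hash2(ix, iy + 1), hash2(ix + 1, iy + 1)
    top = a + (b - a) * tx
    bot = c + (d - c) * tx
    return top + (bot - top) * ty

def fractal_noise(x, y, octaves=5):
    # Pile several layers on top of each other: each octave doubles the
    # frequency and halves the amplitude, adding finer and finer detail.
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalized back into [0, 1)
```

Feeding the resulting values through a color ramp is how textures such as marble or clouds are typically built from such a fractal sum.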
|Alex Goldberg's breakthrough will be used to make the images in 3D video games look better. This UC San Diego undergraduate student presented his work as a full paper at the prestigious SIGGRAPH 2008.|
“The existing methods for using computer generated noise to make images for backgrounds and details for 3D video games are fast, but the images that you get don’t look very good. Our work gives you the full computational benefit of noise but without many of the tradeoffs such as distortion and flickering,” said Goldberg.
The new approach also eliminates the need to store the textures as huge images that take up valuable memory. Instead, the textures are generated by computer programs on the fly every time an image is rendered, explained computer science professor Matthias Zwicker from UC San Diego’s Jacobs School of Engineering.
“The graphics generated from the procedural approach that we explored in this project are very small. Illustrating video games with small images is going to be increasingly important in the future as more and more games are downloadable,” said Zwicker.
Alex Goldberg did the bulk of this work as an undergraduate computer science student at UC San Diego. After taking Zwicker’s rendering class (CSE168), Goldberg pursued this research both in his free time and through formalized independent study classes supervised by Zwicker. Goldberg also took the famed video game crash course (CSE125) in which teams of UC San Diego computer science undergraduates create 3D networked video games in 12 weeks.
“Getting a paper into SIGGRAPH is an accomplishment for any senior researcher in computer graphics. Presenting a paper in SIGGRAPH based on work done as an undergraduate is astonishing. We’re very proud of Alex and his work,” said Keith Marzullo, professor and chair of the Department of Computer Science and Engineering at UC San Diego’s Jacobs School of Engineering.
“I’ve never given a talk in front of more than 30 or 40 people. At SIGGRAPH, the audience will be 300 or 400,” Goldberg said. “But I’m excited. These are exactly the people I want to show my work to.”
Pixel Packing (More technical info)
Both the stretch marks and the flickering in 3D video game backgrounds often stem from the same technical issue: choosing what color to make individual pixels.
“When one pixel covers a large area in a 3D video game landscape…what color should that pixel be? It can only be one color, but the area it covers may contain many different colors,” UC San Diego computer science professor Matthias Zwicker explained.
Color averaging is one solution. For example, if a pixel covers a patch of tiny black bumps on a piece of armor on a soldier far in the distance, and if these armor bumps are partially lit up with white light, then averaging the colors and turning the pixel gray is often in order. But before you can average colors, you have to determine the exact region of the scene that needs to be squeezed into one particular pixel. A simple solution is to map circular areas of the scene to circle-shaped pixels. But when mapping areas of a 3D scene back to 2D pixels, circular areas of the background image are not the best choice, even though the pixels themselves are circles, according to the computer graphics researchers.
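The averaging step described above can be sketched as follows. This is a simplified illustration, not the filtering method from the paper: it brute-force averages every texel whose center falls inside a circular footprint, and the checkerboard texture in the usage note is a stand-in for the black bumps with white highlights:

```python
def average_footprint(texture, cx, cy, radius):
    # texture: 2D list of (r, g, b) tuples.
    # (cx, cy): footprint center in texel coordinates; radius: in texels.
    # Assumes the footprint covers at least one texel center.
    total = [0.0, 0.0, 0.0]
    count = 0
    for y, row in enumerate(texture):
        for x, texel in enumerate(row):
            # Include this texel if its center lies inside the circle.
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                for c in range(3):
                    total[c] += texel[c]
                count += 1
    return tuple(t / count for t in total)

# A 2x2 black-and-white checkerboard seen from far away: the whole
# patch falls into one footprint and averages out to mid-gray.
checker = [[(0, 0, 0), (255, 255, 255)],
           [(255, 255, 255), (0, 0, 0)]]
gray = average_footprint(checker, 0.5, 0.5, 2.0)  # (127.5, 127.5, 127.5)
```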
In the SIGGRAPH paper, the computer scientists mapped elliptical areas of background images back to circular pixels and found that their technique yielded higher quality background images with less stretching and other distortions.
The reason elliptical shapes are a better fit for circular pixels in backgrounds for 3D video games goes back to basic geometry: when a cone that extends from a circular pixel intersects with the background of a 3D video game scene, the region of the cone that hits the background is an ellipse rather than a circle.
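The geometry can be seen with a small-angle sketch (an illustration of the cone-intersection argument above, not the paper's derivation): when the surface is tilted away from the viewer, the footprint stretches along the tilt direction by roughly a factor of 1/cos(tilt), turning a circle into an ellipse.

```python
import math

def footprint_axes(pixel_radius, tilt_deg):
    # A circular pixel footprint lands on a surface tilted by tilt_deg
    # (the angle between the view direction and the surface normal).
    # Across the tilt there is no foreshortening; along the tilt the
    # footprint stretches by ~1/cos(tilt), so the result is an ellipse.
    minor = pixel_radius
    major = pixel_radius / math.cos(math.radians(tilt_deg))
    return major, minor

# Head-on (0 degrees) the footprint is a circle; at 60 degrees it is
# an ellipse twice as long as it is wide, since cos(60 deg) = 0.5.
```

Filtering over this ellipse instead of a circle is what lets an anisotropic approach avoid the stretching that a circular footprint produces on grazing surfaces.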
"Anisotropic Noise,” by Alexander Goldberg and Matthias Zwicker from the Department of Computer Science and Engineering at UC San Diego’s Jacobs School of Engineering and Frédo Durand from the Massachusetts Institute of Technology.
Media Contact: Daniel Kane, 858-534-3262