Saturday, February 21, 2009

Virtual Reality and 3D Input and Output

From the readings:

"3D Input and Output" from The Computer in the Visual Arts by Anne Spalter, Addison Wesley Longman Inc. 1999, pp 297-316

"Virtual Reality and digital modeling go on trial for a federal courtroom design" by Alan Joch
http://archrecord.construction.com/features/digital/archives/0501dignews-1.asp

2D input and output devices are restrictive when working with 3D data, but access to 3D input and output equipment is limited because these devices are expensive and not readily available to the general public. There have been various 3D input devices created, but all present different challenges and none have emerged as a standard.

A 3D mouse uses sonar to track its movement through space. While the 3D mouse can perform operations in 3D programs quickly, it is often inaccurate due to noise interference and can be cumbersome to use over long periods. Joysticks, commonly used in video games, can navigate 2D and 3D space, and their buttons can be programmed to perform certain functions, such as jumping or firing. Gloves are best at performing predefined postures within virtual reality, such as pointing in a direction to move, but are not accurate enough for detailed work. Dials are very accurate 3D input devices that allow the user to control virtually any 3D property. They are not a direct translation of bodily movement and keep the user removed from the process, similar to using a mouse for drawing, but they do not cause fatigue. Force feedback devices provide tangible information, such as the feel of objects and textures, in a virtual environment. While extremely convincing, these devices are very expensive and can be very dangerous. Trackers follow bodily movement in real space by using six cameras in a golf-ball-sized device. They are seen quite often in Hollywood special effects studios, where they map the motions of actors onto 3D models. 3D scanners and digitizers create digital 3D data from existing real-world forms.

For the most part, 3D digital data is represented on a flat, two-dimensional screen. The goal of 3D output devices is to create a realistic way to view 3D information. This requires stereo vision, which is how humans perceive depth in the real world: two different views are brought together in order to see in three dimensions. To simulate this process digitally, two slightly different views are created, called a stereo pair. The viewer can then blend these two images in a few different ways: by focusing on a distant point (very difficult for most people) or with the help of stereoscopic glasses. These glasses cause each eye to see a different image, and the brain completes the depth perception.
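The geometry behind a stereo pair can be sketched in a few lines. This is my own illustration rather than anything from Spalter's chapter: it assumes a simple pinhole projection, and the function names and numbers are hypothetical.

```python
import math

def project_x(point, eye_x, focal=1.0):
    """Perspective-project a point's x coordinate for an eye at
    (eye_x, 0, 0) looking down +z; focal is the screen distance."""
    x, y, z = point
    return focal * (x - eye_x) / z

IOD = 0.064                       # illustrative interocular distance, metres
point = (0.1, 0.0, 2.0)           # a point 2 m in front of the viewer

left  = project_x(point, -IOD / 2)   # view seen by the left eye
right = project_x(point, +IOD / 2)   # view seen by the right eye
disparity = left - right             # horizontal offset between the pair

far_point = (0.1, 0.0, 10.0)
far_disparity = project_x(far_point, -IOD / 2) - project_x(far_point, +IOD / 2)
# nearer points have larger disparity -- the cue the brain reads as depth
```

The disparity works out to focal × IOD / z, so it shrinks as objects recede, which is exactly the difference between the two images that the brain fuses into depth.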

Virtual reality uses stereopsis but surrounds the user with digital information so that the user is immersed in the digital realm. This can be accomplished with a head-mounted display or a binocular omni-orientation monitor. There are also special rooms called CAVEs that allow users to enter a space with images projected on each wall surface. By wearing stereoscopic glasses, users perceive the space as a virtual 3D environment. This is the technology used in the courtroom design, and on the surface it appears to have proved successful for understanding sightline issues in the courtroom. Lighting design was also refined using the CAVE, but the designers admit that this sort of environment is not very effective at portraying real-world lighting and shadow.

I was not fully aware of the technical advances in 3D input and output, but with the popularity of the Wii and its controller, the Wiimote, it is certainly easy to see the real-world application of a 3D input device. The Wiimote offers six degrees of freedom for interaction with the video game. I have always been fascinated by the philosophical issues regarding virtual reality, and just last semester read excerpts from Baudrillard's "The Precession of Simulacra." Baudrillard's outlook is so apocalyptic, that somehow the world is disappearing in a haze of imagery and meaning. I tend to relate virtual reality, rather simplistically I admit, to a really good book. Both require the participant to immerse themselves in the world, both can take the participant to alternate realities, and both provide entertainment. Neither has as of yet become confused with our common shared reality, and according to Henry Fuchs, technology may never let us confuse the two. (p. 316) And who's to say we all live in the same reality anyway?
This was my attempt at using Podium with SketchUp. I know it's supposed to be an easy interface, but it's a little frustrating not knowing how to control the effects you're getting. For example, my lovely striped floor. Sun settings would change the size of the stripes but not remove them. Changing the reflectivity of the floor diminished them some. I'm guessing that maybe there is something in the model that is allowing light to seep through, but I'm not sure. Other than that, I think it produces fairly decent renders. I especially like the predefined light fixtures. As you can see I used a few.

Tuesday, February 17, 2009

Rendering Exercise


Saturday, February 14, 2009

Project Investigation Abstract



In this research project, I will be looking at photorealism and point of view in rendering digital spaces. Architects and designers use photorealistic images to tell the story of their design. In a still image, a large portion of this story is determined by how the image creator frames the image and what point of view is represented. In 3D animations, the designer has time as a tool for telling a story, but with a single 2D rendered view, every detail of object placement, lighting, and what is present or absent becomes integral to understanding a space and the designer's intent.

For this project, I will take an existing iconic image, Edward Hopper's Nighthawks, and recreate it digitally. I have chosen this image because it is a classic art image that deals with interior and exterior space. The image also tells a story of confinement, with an empty urban street and three lonely diners who are each lost in their own thoughts. It is apparent that there is no way out of the bar area, as the three walls of the counter form a triangle which traps the attendant, and the diner has no visible door leading to the outside. Recreating this scene digitally will make it possible to explore points of view that Hopper chose not to paint, providing a better understanding of why this one view serves to tell the story most fully. It will also allow an exploration of the space of the diner that is only hinted at in the painting.

I suspect that creating the scene will be a creative endeavor in interpreting the existing data from Hopper's painting. So although the digital model may not be an accurate rendition of the actual diner (which once existed in Greenwich Village), it will be an interpretation of Hopper's interpretation. I expect to find many views that represent many emotions or ideas, and hope to gain a better understanding of why Hopper chose this particular view of the diner to portray this story.

Rendering

Spalter’s chapter on rendering clearly demonstrates the issues involved in rendering a 3D model. A rendered scene is a raster image derived from a three-dimensional model; it is a two-dimensional view, or screen shot, of the model. Two qualities must be defined for a rendered image: material properties for each object or surface, and lighting.

Several elements affect the designation of materials in a rendered scene: light absorption, light reflection, light transmission and texture. Light absorption determines the color of the object or surface. There are two types of light reflection. Diffuse reflection is not dependent on the viewer’s angle of view; it reflects light equally toward any viewing angle. Specular reflection is dependent on the viewing angle, reflecting light only in a “direction equal and opposite to the direction of the light shining on it.” (p. 261) Light transmission determines the transparency of an object. Texture provides visual complexity to rendered objects. Texture mapping involves wrapping a 2D image around an object, creating the illusion of pattern. Solid textures are through-body textures, present even when an object is split in two. Bump mapping achieves the look of a modulated surface without actually displacing it, while a displacement map displaces the surface and changes the geometry of the object. (p. 266)
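The diffuse/specular distinction can be made concrete with the classic Lambert-plus-Phong shading model found in most textbooks. This is a minimal sketch of that standard model, not code from Spalter; all names and coefficients are illustrative.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade(normal, to_light, to_viewer, kd=0.7, ks=0.3, shininess=32):
    """Lambertian diffuse term plus Phong-style specular term.
    The diffuse term ignores the viewer entirely; the specular term
    peaks when the viewer sits on the mirror-reflection direction."""
    n, l, v = norm(normal), norm(to_light), norm(to_viewer)
    diffuse = kd * max(0.0, dot(n, l))
    nl = dot(n, l)
    r = tuple(2 * nl * ni - li for ni, li in zip(n, l))  # reflect l about n
    specular = ks * max(0.0, dot(r, v)) ** shininess
    return diffuse, specular

# Moving the viewer changes the specular term but not the diffuse term:
d1, s1 = shade((0, 0, 1), (0, 0, 1), (0, 0, 1))  # viewer on mirror direction
d2, s2 = shade((0, 0, 1), (0, 0, 1), (1, 0, 1))  # viewer off to the side
# d1 == d2, but s1 > s2
```

The two final calls show exactly what the chapter describes: the diffuse contribution is identical from both viewpoints, while the specular highlight fades as the viewer moves away from the mirror direction.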

Lighting is crucial for a successful render; without it, none of the geometric or material qualities of a model will be visible. There are five main types of lights in 3D rendering programs: ambient light, point sources, spotlights, area sources and directional (remote) lights. Ambient light is general, non-directional lighting that lights all objects evenly. Point sources emit light from a single point, such as a candle flame or light bulb. Spotlights are shaped light sources that mimic many everyday lighting fixtures. Area sources provide light from an entire surface, such as a fluorescent fixture. Directional remote lights simulate distant light sources such as the sun; they are certainly directional, but their distance from the model allows them to light each object evenly from one direction.
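The point-versus-directional distinction comes down to whether the light direction and intensity vary per surface point. A small sketch of my own (not from the chapter; the inverse-square falloff is the usual physical convention):

```python
import math

def point_light(light_pos, surface_pt):
    """For a point source, direction and intensity depend on where
    the surface point is relative to the light."""
    dx = [l - s for l, s in zip(light_pos, surface_pt)]
    dist = math.sqrt(sum(d * d for d in dx))
    direction = tuple(d / dist for d in dx)
    attenuation = 1.0 / (dist * dist)       # inverse-square falloff
    return direction, attenuation

def directional_light(direction):
    """A remote (directional) light: same direction and intensity at
    every surface point, which is why it lights a model evenly."""
    return tuple(direction), 1.0

dir_a, att_a = point_light((0, 5, 0), (0, 0, 0))  # point directly below
dir_b, att_b = point_light((0, 5, 0), (3, 0, 0))  # point off to the side
# dir_a != dir_b and att_a > att_b: a point source treats each surface
# point differently, while directional_light never varies.
```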

Photorealistic rendering requires the computer to calculate many complex equations. For this reason, images can take quite some time to render, depending on the complexity of the geometry, materials and lighting. Raytracing and radiosity are the two global rendering methods that Spalter discusses. Raytracing works by “considering the paths of light rays as they bounce around a scene illuminating objects.” (p. 279) Radiosity is more accurate and incorporates more surface interactions than raytracing, but it is not view dependent and therefore does not render specular reflections. A combination of raytracing and radiosity is necessary to achieve a rich environment with reflections and specular highlights.
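At the heart of raytracing is the question of which object a ray hits first. A minimal sketch of the standard ray-sphere intersection test, assuming a normalized ray direction (my own illustration, not Spalter's code):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along a normalized ray to the nearest
    intersection with a sphere, or None if the ray misses.
    Solves the quadratic |origin + t*direction - center|^2 = radius^2."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # a == 1 for a normalized direction
    if disc < 0:
        return None                 # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray from the eye straight down +z hits a sphere centred 5 units away:
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)   # t == 4.0
miss = ray_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0)  # None
```

A raytracer runs a test like this for every pixel against every object (and again for each reflected bounce), which is why render times grow so quickly with scene complexity.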

I work with rendering almost every day, so I was very familiar with the concepts Spalter mentioned in her chapter. I was also interested in the research at Cornell's Computer Graphics program. It was nice to gain a better understanding of the vocabulary that I tend to toss around lightly. I think reading this chapter and the web articles about Cornell and Greenberg has given me a greater intellectual understanding of the tools that I have used intuitively, by trial and error.

I think photorealistic rendering is just one of the delightful processes of modeling and presentation drawings. Certainly it can be used to understand a design intent, or to grasp how light will affect a space, but I don’t really think that photorealistic images are necessary in every design case. There’s a real “wow” factor when presenting a photo image to a client, or even in house to fellow designers, but given the amount of time it takes to render these images, they don’t seem useful as a design tool. Plus, there are so many tricks to getting accurate lighting, or even unreal lighting to make a scene more dramatic; this doesn’t inform design decisions so much as impress clients. However, the process of making them is fun! And that might be enough to keep photorealistic rendering in the realm of architects and designers and not just a job for illustrators or artists. In the near future, when it takes five seconds instead of five hours to create accurate renders, it will be feasible to use these images earlier in the design process. But currently I think their rightful place is as presentation and sales tools.

Trip to NC State Furniture Manufacturing Center


What an informative trip. Our tour consisted of four labs: rapid prototyping, furniture testing, woodworking and CNC. While all of the labs were intriguing, containing complicated machines for creating real-world objects, I was most interested in the rapid prototyping lab. Here, large machines could turn plastic, powder, metal and even moon dust into any imagined shape. The projects we saw completed and in process were titanium blocks for aircraft, models of chess pieces, objets d'art and small machine parts. Each machine worked with a different material and therefore used a different process when “printing.” For example, the machine that used melted plastic extruded it through a pin-sized opening to build up layer upon layer; it also printed a second material to support the plastic where the object was hollow beneath. We saw a 3D printer, similar to the one used by Morphosis, that lays down a layer of powder and then moistens only what will become the final print or model. Because a full bed of powder is present at each level, this machine does not need a second support material. Most interesting, though, was the machine that could work with moon dust to create objects from dirt. One day, perhaps, it will be on the moon creating the first structure in outer space from the moon’s raw materials.

Tuesday, February 10, 2009

SketchUp Chair Tutorial

3D Printing

Spalter's article was a beneficial introduction to the types of automatic fabrication technology currently available. She begins with the distinction between rapid prototyping (RP) and computer-numerical-control (CNC) machines. CNC machines sculpt objects subtractively from existing materials such as metal and plastic; they require constant supervision and require that objects be designed in keeping with a specific material's capabilities. RP machines use additive technology, layering materials such as powder or resin to create 3D objects. The types of automated fabrication covered in her article are stereolithography, robotically guided extrusion, laser sintering and droplet deposition on powder.
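The additive "layer upon layer" idea can be illustrated with a tiny slicing sketch: a printer reduces a 3D form to a stack of cross-sections, one per layer height. This is my own toy example, not from Spalter, and the numbers are illustrative.

```python
import math

def slice_sphere(radius, layer_height):
    """Cross-section radius of each printed layer of a sphere, bottom
    to top -- the stack of contours an additive machine would deposit."""
    layers = []
    z = -radius + layer_height / 2       # mid-height of the first layer
    while z < radius:
        layers.append(math.sqrt(radius * radius - z * z))
        z += layer_height
    return layers

rings = slice_sphere(radius=10.0, layer_height=1.0)
# 20 layers: narrow rings at the poles, the widest ring near the equator.
# A finer layer_height means more layers and a smoother surface,
# at the cost of a longer build time.
```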

Morphosis, an architecture firm in Santa Monica, CA, uses droplet deposition on powder to create 3D prints of their digital models. These models are then used as discussion pieces to help refine the design. "Being able to see two versions side by side is sometimes more informative than viewing two renderings." (Morphosis p. 2)

I was aware of 3D printing and rapid prototyping on a theoretical level. I knew it existed and I knew people used it, but I was not aware of the many types of fabrication machines available. Spalter's article and MIT's Digital Design Fabrication site were helpful in understanding these different types and grasping what they do. I was shocked by the claim that these machines are now affordable. It's a claim reminiscent of early computers that only large firms could afford. It seems there is quite a ways to go before every office has a desktop 3D printer.

It is obviously a beneficial tool for designers to have, given that they have embraced exploring their designs through digital 3D models. 3D printing allows designers to observe their design in real space, and it has the tactile qualities of a handmade model without the time investment. More interesting for designers, I think, is the ability to design forms, objects and building components that can be manufactured directly from the computer. The possibilities for construction become endless, and the quality of work will be as good as your CAM.

Friday, February 6, 2009

Touch Screen Technology

This video introduces you to Jeff Han, one of the pioneers in touch screen technology. It's fun just to watch him, so I can imagine how using this kind of technology could make computers more enjoyable (iPhone). He makes an excellent point that we shouldn't raise a new generation of kids to interact with the computer like we currently do. So here is part of his solution: Touch Screen Technology