Tuesday, March 22, 2011

Motivation for Lighting

Lighting 1

In the first of my light rigs I was trying to light a metal dish in a way that brought out the full effect of the metal. At the same time I wanted the dish to stand out from the background.


Lighting the metal was more difficult than I thought it would be. I looked at metal objects and noticed that they contained both very dark colours, almost black, and very light colours, almost white. Therefore I added a backlight to try and bring out some of the whiter colours around the edges of the bowl. My Key Light I placed on the upper left side of the bowl so that it would cast light on the area of the dish furthest back; this brings the dish forward out of the darkness behind it. The Fill Light I placed to the left of the camera, slightly lower down than the Key Light. This was once again an attempt to catch sections of light on the bowl to create the light and dark contrast. Another way I tried to get contrast into the bowl was by turning the shadow umbra value down to 0 on certain lights.


In order to get the dish to stand out from the background I isolated the lights that would have hit both the dish and the wall so that they only hit the dish. This meant I could place bright lighting on the dish without worrying about it lighting the wall. I then added a light that lit only the wall and took its intensity down to 0.2, so that the texture was just visible without detracting from the dish.


Lighting 2



For my second lighting rig I tried to create an evening scene where the light is cast by a lamp that sits just outside the frame, positioned to the left of the chair.

I looked at household lamps and noticed that most of them give off a yellow-tinted light. For this reason most of the lighting in the scene has been given a yellow tint. The predominant light source comes from the left-hand side (where the lamp would be), so the Key Light was placed to the left of the camera. The backlight was placed at the top left behind the chair and was given a strong orange tint to mimic the colour of lamp light.


Additionally I needed to light the window so that it didn’t look pitch black and so that it portrayed its reflective qualities. Therefore I aimed a light specifically at the window, which I tinted a dark blue to lighten the overall look of the window.


The right hand side of the chair was deliberately left very dark, as there is no light source lighting it.


Lighting 3



For the final lighting rig I wanted to create a late-afternoon look with the sun coming through the windows. I noticed that afternoon sun is very warm in colour and the shadows are relatively harsh, though not as harsh as those cast by midday sun.


My Key Light came from the left of the chair as though it were coming through the left window. The Fill Light lit up the chair from the front as well as the wall behind it, which would otherwise have been in complete shadow. I gave the Fill Light a very slight yellowy-orange tint to create the warm feeling of afternoon sun, and my backlight was tinted quite orange to allow the chair to glow slightly from the right. As the light was streaming into the room from left to right, I additionally had to light the back wall to look as though light was coming through the window onto it. Therefore I placed a light outside the window which cast the shadows of the window frame onto the wall. This light also had an orange tint added to it.

Monday, March 21, 2011

Lighting Research Essay

Three-point lighting

Three-point lighting is the customary lighting technique used in most visual media. It consists of three lights “called the key light, the fill light and the back light” (MediaCollege.com).

The key light “represents the most dominant light source, such as the sun, a window, or ceiling light – although the Key does not have to be positioned exactly at this source” (3dRender.com). Therefore the key light will create “the subject’s main illumination, and defines the most visible lights and shadows… and creates the darkest shadows” (3dRender.com).

Jeremy Birn suggests that one use a spot light for the key light and that you offset the Key Light 15 to 45 degrees to the side of the camera, seen from the top view. From a side view, he says, you should also raise the Key Light 15 to 45 degrees above the camera (3dRender.com).
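To make that placement advice concrete, here is a small illustrative Python sketch. It is plain maths, not Softimage scripting; the function name, Y-up coordinate convention, and the assumption that the camera sits roughly level with the subject are my own.

```python
import math

def key_light_position(subject, camera, h_offset_deg=30.0, v_offset_deg=30.0):
    """Place a key light by rotating the camera direction h_offset_deg to the
    side (top view) and raising it v_offset_deg (side view). Positions are
    (x, y, z) tuples with Y up; assumes the camera is roughly level with the subject."""
    sx, sy, sz = subject
    cx, cy, cz = camera
    # direction from the subject back toward the camera
    dx, dy, dz = cx - sx, cy - sy, cz - sz
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # horizontal angle of the camera around the subject, plus the side offset
    azimuth = math.atan2(dx, dz) + math.radians(h_offset_deg)
    elevation = math.radians(v_offset_deg)
    # rebuild a position at the same distance, offset by the two angles
    lx = sx + dist * math.cos(elevation) * math.sin(azimuth)
    ly = sy + dist * math.sin(elevation)
    lz = sz + dist * math.cos(elevation) * math.cos(azimuth)
    return (lx, ly, lz)

# Camera 10 units in front of a subject at the origin,
# key light offset 30 degrees to the side and 30 degrees up.
print(key_light_position((0, 0, 0), (0, 0, 10), 30, 30))
```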

The purpose of the Fill Light is to soften and extend “the illumination provided by the key light, and makes more of the subject visible” (3dRender.com). While the key light creates the most dominant light source, the Fill Light will create any secondary light source such as “the sky (other than the sun)… table lamps or reflected and bounced light in your scene” (3dRender.com). According to Birn the best choice for a Fill Light is a spot light, although one could also use a point light (3dRender.com).

In terms of placement, one should put the Fill Light on the opposite side from the Key Light; however, the two should not be placed exactly symmetrically to each other (3dRender.com).


Because the Key Light is the predominant light in the setup, “Fill Lights can be about half as bright as your Key (a Key-to-Fill ratio of 2:1). [However, if one wanted to create] a more shadowy environment, use only 1/8th of the Key’s brightness (a Key-to-Fill ratio of 8:1)” (3dRender.com).
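As a quick sanity check of those ratios, the fill intensity is simply the key intensity divided by the ratio. A minimal sketch, assuming a key intensity of 1.0:

```python
def fill_intensity(key_intensity, key_to_fill_ratio):
    """Fill brightness implied by a Key-to-Fill ratio such as 2:1 or 8:1."""
    return key_intensity / key_to_fill_ratio

print(fill_intensity(1.0, 2))  # 0.5   -> fairly even lighting
print(fill_intensity(1.0, 8))  # 0.125 -> darker, more shadowy look
```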

Shadows cast by a Fill Light are optional and predominantly not used. Yet to “simulate reflected light, tint the fill colour to match the colours from the environment. Fill lights are sometimes set to diffuse-only (set not to cast specular highlights)” (3dRender.com).

Finally there is the Back Light, also known as the Rim Light, which “creates a bright line around the edge of an object, to help visually separate the object from the background” (3dRender.com).

When placing the Back Light, from the top view, “position it behind your subject, opposite from the camera. From the right view, position [it] above your subject” (3dRender.com).

How bright one decides to make a Back Light depends on how strong one wants the rim highlight on the object to be. Usually a Back Light will cast shadows. A Back Light will often be linked to the object which one is lighting so that it separates it from the background (3dRender.com).

This lighting rig is the most commonly used one, as it can be used to light almost any scene or object. Furthermore, because the three lights are all placed at different angles, one has a great deal of control over how the lighting will look from each angle.

Creating a Reflective Studio Light Setup (taken from the Digital Tutors tutorial “Creating a Studio Light Setup in Softimage”)

This light setup is made up of two lights, a Key and a Fill Light. Additionally there are three reflectors, one of which acts as a skydome for added indirect illumination.

Initially one creates the Key Light; for this light setup a Box Light is the best choice, as a Box Light allows one to create soft shadows. In order to create the desired soft shadows, set the umbra value to 0 and then adjust the Area Transformation scaling to 30 for both the X and Y values. It is by increasing this scaling that one creates softer shadows. For this particular light setup the Key Light is placed behind the object. As this light is the main light source, its intensity can be set to 0.75.

Next one creates the Fill Light, which has the same settings as the Key Light in terms of the umbra and Area Transformation scaling. However, as the Fill Light is a secondary light source, its intensity is set to 0.5. The Fill Light is placed in front of the object.
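Purely as a summary, the settings described above for the two direct lights can be written down as plain data. The structure and names below are my own; only the values come from the tutorial.

```python
# The two direct lights from the tutorial, summarised as plain data.
studio_lights = {
    "key": {
        "type": "box light",
        "shadow_umbra": 0.0,          # umbra of 0 gives the soft shadow edge
        "area_scaling_xy": (30, 30),  # larger area -> softer shadows
        "intensity": 0.75,            # main light source
        "placement": "behind the object",
    },
    "fill": {
        "type": "box light",
        "shadow_umbra": 0.0,
        "area_scaling_xy": (30, 30),
        "intensity": 0.5,             # secondary light source
        "placement": "in front of the object",
    },
}
```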

Although there are two light sources in the scene responsible for creating shadows, there are no visible reflections of a direct light source in the object. This can be achieved by creating reflectors/diffusers.

To create a reflector, simply draw a curve using the ‘Draw Cubic by CVs’ tool, duplicate the curve, move the duplicate along and create a loft surface mesh between the two. It may be necessary to raise the U subdivisions to create a smoother surface. The best material to mimic the effect of a light-emitting surface is a constant, so apply a constant material to the object and set its colour to white.

Depending on the object you are lighting and how you would like the reflections to be positioned, one can move the reflector around until it looks right. A secondary reflective source may also be necessary to achieve the required result. In order to adjust the brightness of the diffuser, one changes the HSV V component to a higher value, in this case 3.
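The V component here is simply the brightness part of the hue/saturation/value colour model; pushing it above 1 makes the constant material over-bright so it reads as a light source in reflections. A tiny illustrative Python check (not Softimage code):

```python
import colorsys

# A white constant material pushed to V = 3 so it reads as a bright,
# light-emitting surface in reflections. Values above 1.0 are intentional.
h, s, v = 0.0, 0.0, 3.0
print(colorsys.hsv_to_rgb(h, s, v))  # (3.0, 3.0, 3.0)
```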

At this point the object possesses both shadows and reflections. However the overall lighting of the object is still quite dark, as seen in the screenshot below.


This can be overcome by adding some secondary illumination. To do this one must first enable final gathering. While this lightens the object significantly, it does so in an overall, even manner. Therefore, if one creates a dome around all of the objects and applies a constant material graded from white to black (white towards the bottom), the secondary illumination reflects mainly from the bottom part of the dome. This affects only the bottom section of the object and leaves the top of the object to be lit by the two diffusers and the lights previously set up in the scene.

Works Cited

3dRender.com. “Three-Point Lighting for 3D Renderings.” Jeremy Birn. Web. 17 March 2011.

Digital Tutors. “Creating a Studio Light Setup in Softimage.” PL Studios, 2009. Web. 21 March 2011.

MediaCollege.com. “The Standard 3-Point Lighting Technique.” N.p. Web. 17 March 2011.

Sunday, March 13, 2011

Motivation for textured Room

My intention when texturing this room was not to make it look overly realistic, yet I wanted to make sure that it still looked believable. Due to the design of the objects in the room, which are very realistic, the room already had an element of believability; however, my attempt to stop it looking too realistic proved to be quite difficult.

Therefore my main challenge when decorating the room was, ultimately, to downplay the realism. I attempted to do this by caricaturing the textures. My idea for caricaturing the textures came from the book The Art of Up, where they explain how they played around with texture sizes when making Up; however, they state that “there is a point where the fabric textures, if you blow them up too far, start to look like a potato sack, like a burlap” (20). Therefore I knew I had some constraints to be aware of when attempting to do this.

Additionally, I found when texturing the room there had to be a balance between exaggerated textures and more realistic textures to create a believable look. Furthermore, I had to use real image textures and not just plain Phong shaders to create the exaggerated look.

Therefore the most prominent texture that I blew up is the one used for the brick walls. I found it important that this particular texture was caricatured, as it encloses the room and one relates most things in the room to it. The carpet texture, however, I made quite realistic, as I found that if I made it too large it didn’t relate very well to the rest of the room.

The next obvious texture to caricature was the couches. Because textiles naturally vary in size, this gave me a lot of room to play with, especially in terms of how big I could go. The texture used for the ornaments in the room I kept quite natural and realistic, as I wanted the glass to read as glass, whereas if I had caricatured it too much it would not have. Additionally I left the wood at quite a realistic size as, once again, I wanted it to read as wood. However, to bring the element of caricature back to the area around these objects, I enlarged the texture of the rug quite drastically.

Additionally I wanted the room to have a warm, playful and relaxed atmosphere. To achieve this I kept the walls and carpet very simple, with face-brick walls and a natural carpet colour. I feel this gave the room quite an open feeling, as it isn’t overimposing. It also keeps things simple and doesn’t distract one’s eye from the busier textures applied to the objects in the room.

I allowed myself to be more adventurous when it came to the couches, cushions and curtains, where I played around with colours and textures. I tried to carry the colours through the room in a playful manner by colouring the piping of the armrest in a different colour to the chair itself. But I didn’t want the room to look overly claustrophobic due to too many colours and patterns, so I mainly stuck to plain colours, with the two patterns, on the curtains and the couch, for extra effect.

Works Cited

Hauser, Tim. The Art of Up. California: Chronicle Books LLC, 2009. Book.

Texturing Research Essay

According to the Softimage Users Guide, a texture map consists of “an image file or a sequence, and a set of UV coordinates. They are similar to ordinary textures, but are used to control operator parameters instead of surface colours”.

2D texture mapping is when “2D images [are] wrapped around an object’s surface, much like a sheet of rubber that’s wrapped around an object. To use an image texture, you start with any type of picture file (PIC, GIF, TIFF, PSD, DDS, etc.) such as a photo or a file made with a paint program” (Softimage Users Guide). One uses a UV map to make sure that the image is ‘wrapped’ correctly around the object. Furthermore, it is possible to alter the UVs to ensure the texture is projected onto the object without distortion.
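As a rough illustration of what a UV lookup means, here is a minimal Python sketch (my own toy example, not how Softimage implements it) that maps a (u, v) pair in the 0 to 1 range to a pixel in a tiny image:

```python
def sample_uv(image, u, v):
    """Look up a colour in a 2D image using UV coordinates in [0, 1].
    `image` is a list of rows of (r, g, b) tuples; the function simply
    picks the pixel the UV pair falls in."""
    height = len(image)
    width = len(image[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return image[y][x]

# A 2x2 "image": a black/white checker.
tiny = [[(0, 0, 0), (255, 255, 255)],
        [(255, 255, 255), (0, 0, 0)]]
print(sample_uv(tiny, 0.75, 0.25))  # top-right quadrant -> white
```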





[Illustration from the Softimage Users Guide]


Whereas 3D textures “are generated mathematically, each according to a particular algorithm. Typically, they are used for gradients, repeating patterns such as checkerboards, and fractals that mimic natural patterns such as wood, clouds, or marble… 3D textures are projected ‘into’ objects rather than onto them. This means they can be used to represent substances having internal structure, like the rings and knots of wood” (Softimage Users Guide).

There is one drawback to 3D texture mapping: “they [do] take longer to render than projected 2D textures. [However] procedural textures do give a better rendered result close up because pixels can be recomputed (rather than interpolated) at close proximity” (Softimage Users Guide).

However, 3D textures have multiple benefits, one of the most important being that “because 3D procedurals exist throughout 3D space, you often get good results on objects that would otherwise be hard to map. Instead of trying to wrap a 2D texture around a complicated sculpture, you can apply a 3D procedural and it appears to be perfectly mapped” (maya-doc.com).

[Illustration from the Softimage Users Guide]

One can apply 3D textures through “Softimage’s shader library, [which] contains both 2D and 3D procedural textures” (Softimage Users Guide).
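As a toy example of a texture “generated mathematically”, the sketch below (my own illustration, not a Softimage shader) evaluates a 3D checkerboard at any point in space, which is why such a texture can describe the inside of an object as well as its surface:

```python
import math

def checker3d(x, y, z, size=1.0):
    """A simple 3D procedural texture: returns 0 or 1 depending on which
    cell of a 3D checkerboard the point (x, y, z) falls into. Because it is
    defined everywhere in space, it can be evaluated 'inside' an object."""
    return (math.floor(x / size) + math.floor(y / size) + math.floor(z / size)) % 2

print(checker3d(0.2, 0.7, 1.3))  # 0 + 0 + 1 -> 1
print(checker3d(1.5, 0.7, 1.3))  # 1 + 0 + 1 -> 0
```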

Projecting Textures

“Each texture must be associated to a texture projection. The projection controls how the texture is applied across the surface of an object” (Softimage Users Guide).

Planar texture mapping “projects a texture along an axis onto the specific plane” (Softimage Users Guide). There are three options for how the texture is projected: onto the XY plane, the XZ plane or the YZ plane.
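A planar projection is easy to sketch in code: the axis perpendicular to the chosen plane is dropped and the remaining two coordinates become the UVs. The following is an illustrative simplification of my own (a real projection also scales and offsets the result into the 0 to 1 range):

```python
def planar_uv(point, plane="XY"):
    """Planar projection: drop the axis perpendicular to the chosen plane
    and use the remaining two coordinates as (u, v)."""
    x, y, z = point
    if plane == "XY":
        return (x, y)
    if plane == "XZ":
        return (x, z)
    return (y, z)  # YZ plane

print(planar_uv((2.0, 3.0, 5.0), "XZ"))  # (2.0, 5.0)
```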

Cylindrical projection allows one to “project a texture from a virtual cylinder around an object towards the central axis of the cylinder” (Softimage User guide).

Spherical Projection “maps a texture onto an object similar to a beach ball, with some distortion at the +Y and –Y poles” (Softimage Users Guide). In other words, it wraps the texture around the object from the +Y pole to the –Y pole.
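Cylindrical and spherical projections can be sketched the same way, deriving U from the angle around the object and V from either the height or the angle towards the poles. Again this is an illustrative simplification in Python, not Softimage’s implementation:

```python
import math

def cylindrical_uv(x, y, z, height=1.0):
    """U from the angle around the Y axis, V from the height along it."""
    u = (math.atan2(z, x) / (2 * math.pi)) + 0.5
    v = y / height
    return (u, v)

def spherical_uv(x, y, z):
    """U from the angle around the Y axis, V from the angle between the
    point and the +Y pole; the texture stretches toward the two poles."""
    r = math.sqrt(x * x + y * y + z * z)
    u = (math.atan2(z, x) / (2 * math.pi)) + 0.5
    v = math.acos(y / r) / math.pi
    return (u, v)

print(cylindrical_uv(1.0, 0.5, 0.0))  # (0.5, 0.5)
print(spherical_uv(0.0, 1.0, 0.0))    # +Y pole -> v = 0.0
```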

UV projections, unlike the projections mentioned previously, can only be applied to NURBS surfaces. “UV projections follow the UV parameterization of NURBS surface objects (no relation to texture UV coordinates). A UV projection behaves like a rubber skin stretched over the object’s surface. The points of the object correspond exactly to a particular coordinate in the texture, allowing you to accurately map a texture to the object’s geometry” (Softimage Users Guide).

Camera Projections: “A camera projection projects a texture from the camera onto the object’s surface, much like a slide projector does. This is useful for projecting live action backgrounds into your scene so you can model and animate your 3D elements against them. Changing the camera’s position changes the projection’s position. Once you have positioned the texture on the surface to your liking, you can freeze the projection” (Softimage Users Guide).

A Cubic Projection: “when you apply a cubic projection to an object, the object’s faces are assigned to a specific face of a cubic texture support, based either on the orientation of their polygon normals or their proximity to a face. The texture is then projected from each face of the support using a planar or spherical projection method” (Softimage Users Guide).

Unique UVs Projection can only be applied to polygons and there are two possible ways of doing it:

1. Individual Polygon packing, which “assigns each polygon’s UV coordinates to its own distinct piece of the texture so that no one polygon’s coordinates overlap another’s.

This is useful for render mapping polygon meshes. Typically, you apply textures to an object using a projection type appropriate to its geometry. Then you can rendermap the object using a new Unique UVs projection to output a texture image that you can reapply to the object. The texture is applied to texture each polygon properly without you worrying about “unfolding” it to fit properly” (Softimage Users Guide).

However, as useful as this form of mapping may sound, it has one drawback: “A polygon-packing style Unique UVs projection only produces good results if you use a texture created specifically for the projection, for example, an image created using Render Map” (Softimage Users Guide).

2. Angle Grouping is applied after you have decided on a projection direction; you can then group “together neighboring polygons whose normal directions fall within a specified angle tolerance. This process is repeated until all of the object’s polygons are in a group. The groups — or islands — are then assigned to distinct pieces of the texture so that no two islands’ coordinates overlap each other” (Softimage Users Guide).

Contour Stretch UVs Projection is another method of UV mapping which can only be applied to polygons. This method allows “you to project a texture image onto a selection of an object’s polygons. Rather than projecting according to a specific form, a contour stretch projection analyzes a four-cornered selection [of nodes] to determine how best to stretch the polygons’ UV coordinates over the image.

Contour stretch projections do not have the same alignment and positioning options as other projections. Instead, you select a stretching method that is appropriate to the selection’s topology and complexity… [They are] useful for a number of different texturing tasks, particularly for applying textures to tracks and roads on irregular, terrain-like meshes” (Softimage Users Guide).

Unfold Projection “creates a UV texture projection by ‘unwrapping’ a polygon mesh object using the edges you specify as cut lines or seams. When unfolding, the cut lines are treated as if they are disconnected to create borders or separate islands in the texture projection. The result is like peeling an orange or a banana and laying the skin out flat” (Softimage Users Guide).


[Illustration from the Softimage Users Guide]

Works Cited

Maya-doc.com. “Texture Mapping.” N.p., n.d. Web. 12 March 2011.

Softimage. Softimage Users Guide. Autodesk, 2011. Software.

Sunday, March 6, 2011

Modeling Research Essay

“3D modeling… is the process of developing a mathematical representation of any three-dimensional surface or object…” (Wikipedia). Most 3D programs “offer several types of geometry” to model with (Softimage Users Guide). Namely, one can use polygon meshes or subdivision surfaces, when creating polygonal models, or NURBS curves and surfaces, when creating NURBS models (Softimage Users Guide).


Polygonal Modeling


“Polygonal modeling is an approach [to] modeling objects by representing or approximating their surfaces using polygons” (Wikipedia). “A polygon is a closed 2D shape formed by straight edges. The edges meet at vertices. There is exactly the same number of vertex points as edges. The simplest polygon is a triangle” (Softimage Users Guide). A 3D object can be generated by combining numerous polygons together into a polygon mesh.
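In code, a polygon mesh is essentially two lists, as in this minimal illustrative sketch of a single quad (the layout is a common convention of my own choosing, not specific to any one package):

```python
# A minimal picture of what a polygon mesh stores: a list of vertex
# positions and a list of faces, where each face indexes into the vertex list.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [
    (0, 1, 2, 3),  # one four-sided polygon built from the four vertices above
]
```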


There are several ways to create and manipulate a polygon mesh. The most common and quickest is by selecting a primitive polygonal mesh object from the toolbar of your 3D modeling software package. Alternatively, one can “build polygon meshes from curves by performing operations such as extruding, lofting and revolving” (Softimage Users Guide). Once you have created your polygonal mesh, one can manipulate it in various ways, such as by scaling, rotating and moving its edges, vertices or faces to achieve the desired shape, or one can merge two or more polygon meshes together.


The geometry of a polygon mesh “is mathematically simple and quick to calculate”; for this reason “they are particularly useful when modeling for games and other real time environments where speed is important… However the main drawback of polygon meshes is that they are poor at representing organic shapes – you may require a very heavy geometry (that is, many points) to obtain smoothly curved objects.” Another advantage of polygon meshes is that “you can apply materials and textures to selected polygons instead of the whole mesh” (Softimage Users Guide).


Sub-Division


Sub-Division surfaces “consist of a low-resolution polygon mesh hull that controls a higher-resolution polygon mesh object. They provide many of the benefits of polygon meshes, plus the ability to approximate smooth surfaces without the need for heavy geometry” (Softimage users guide).
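The “heavy geometry” being hidden here is easy to quantify if one assumes quad-based subdivision, where every face splits into four per level. A small illustrative sketch of my own:

```python
def subdivided_face_count(base_faces, levels):
    """Each subdivision level splits every quad into four, so the face
    count grows by a factor of four per level. This is the heavy geometry
    a subdivision surface hides behind a light control hull."""
    return base_faces * 4 ** levels

for level in range(4):
    print(level, subdivided_face_count(6, level))  # a cube: 6, 24, 96, 384
```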


There are three ways to subdivide polygon meshes:

1) By generating a new object, which entails creating a “new high-resolution polygon mesh from a low-resolution one. As long as there is a modeling relation between the two objects, you can modify and animate the low-resolution object to drive the high-resolution one” (Softimage Users Guide). In simpler terms this means that the low-resolution object drives the overall shape of the high-resolution subdivided polygon mesh. However, you can still move points on the high-resolution subdivided mesh independently of the low-resolution one (Softimage Users Guide).


2) By modifying the geometry approximation, where you turn “the polygon mesh object into a subdivision surface by applying a geometrical approximation property… The original mesh becomes the control cage of the new geometry… The advantage of this method is that no new geometry is actually created, so scene files can still be quite small. However the disadvantage of this method is that you can’t manipulate individual points, etc., on the high-resolution geometry… [you can only manipulate] the components that correspond to components on the control cage” (Softimage Users Guide).


3) Local subdivision allows one to “add subdivisions locally to selected polygons” in a mesh. “This method adds an operator to the polygon mesh’s operator stack and modifies its topology. [However it is useful] for adding detail exactly where you want it. Although new geometry is created, you control the amount and the location” (Softimage Users Guide).

Additionally one can “combine these methods to obtain the effect you want” (Softimage user guide).


NURB modeling


“Non-uniform rational basis spline (NURBS) is a mathematical model commonly used in computer graphics for generating and representing curves and surfaces which offers great flexibility and precision for handling both analytic and freeform shapes” (Wikipedia).
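The mathematical model referred to here is usually written as a weighted sum of control points. In standard notation (my own addition, not quoted from either source), a NURBS curve of degree p with control points P_i and weights w_i is:

```latex
C(t) = \frac{\sum_{i=0}^{n} N_{i,p}(t)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(t)\, w_i}
```

where the N_{i,p}(t) are the B-spline basis functions; p = 3 gives the cubic curves and p = 1 the linear curves described below.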

“NURBS curves are cubic or linear splines… Linear curves are composed of straight segments and cubic curves are composed of curved segments… You cannot render them,” but you can convert them into polygonal meshes, which will render (Softimage Users Guide).

There are two main ways which one can create curves:


1) Drawing curves by placing control points or knots, simply clicking to place them where you want them. The four commands one can use to create a curve in this manner are “Draw Cubic by CVs, Draw Cubic by Bezier-Knot Points, Draw Cubic by Knot Points or Draw Linear” (Softimage Users Guide).


2) “By holding the mouse button down and dragging continuously, as if you were sketching with a pen. This method creates cubic curves only” (Softimage users guide).


However, one can also import curves into a 3D modeling program from Encapsulated PostScript (EPS) files or from Adobe Illustrator. Once these curves are in the 3D program, one can convert them into a polygonal mesh to use in your model.


A second form of NURBS modeling is through NURBS surfaces. “Surfaces are two-dimensional NURBS patches defined by intersecting curves in the U and V directions. In a cubic NURBS surface, the surface is mathematically interpolated between the control points, resulting in a smooth shape with relatively few control points. The accuracy of NURBS makes them ideal for smooth, manufactured objects like cars and aeroplane bodies. One limitation of surfaces is that they are always four-sided. Another limitation is that they do not support different textures on different areas” (Softimage Users Guide).

Finally, the last type of NURBS modeling is surface meshes, which “are quilts of NURBS surfaces acting as a single object. They overcome the limitation that surfaces must be four-sided; with surface meshes, you can create complex objects and characters with holes, legs and so on” (Softimage Users Guide).


In conclusion, it is apparent that there are numerous ways to model successfully in 3D; while each method provides individual benefits, each also has certain drawbacks. Therefore it is important to choose your modeling method according to the object you are modeling, as that way you will achieve the best results.