Database Glossary – CGI
Glossary: Special Effects
This glossary was written by Balamuralikrishna Nanduri.
Here is the link to his webpage: www.balafx.com
Computer-generated imagery (also known as CGI) is the application of the field of computer graphics or, more specifically, 3D computer graphics to special effects in films, television programs, commercials, simulators and simulation generally, and printed media. Video games usually use real-time computer graphics (rarely referred to as CGI) but may also include pre-rendered "cut scenes" and intro movies that would be typical CGI applications.
CGI is used for visual effects because computer generated effects are more controllable than other more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and because it allows the creation of images that would not be feasible using any other technology. It can also allow a single artist to produce content without the use of actors, expensive set pieces, or props.
Computer software such as 3ds Max, Blender, LightWave 3D, Maya and Softimage XSI is used to make computer-generated imagery for movies, etc. Recent availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional grade films, games, and fine art from their home computers. This has brought about an Internet subculture with its own set of global celebrities, clichés, and technical vocabulary.
Simulators, particularly flight simulators, and simulation generally, make extensive use of CGI techniques for representing the outside world.
There are three popular ways to represent a model:
Polygonal modeling – Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. Used for example by 3DS Max. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them so quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.
NURBS modeling – Surfaces are defined by spline curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points. Increasing the weight for a point will pull the curve closer to that point. NURBS are truly smooth surfaces, not approximations using small flat surfaces, and so are particularly suitable for organic modeling. Maya is the most well-known commercial software that uses NURBS natively.
Splines & patches modeling – Like NURBS, splines and patches depend on curved lines to define the visible surface. Patches fall somewhere between NURBS and polygons in terms of flexibility and ease of use.
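The effect of weighted control points described above can be sketched with a small rational Bézier curve, a simple special case of NURBS. The control points and weights here are made-up illustration values, not any particular package's API:

```python
from math import comb

def rational_bezier(points, weights, t):
    """Evaluate a rational Bezier curve (a NURBS special case) at t in [0, 1]."""
    n = len(points) - 1
    # Bernstein basis functions blend the control points
    basis = [comb(n, i) * (1 - t) ** (n - i) * t ** i for i in range(n + 1)]
    # A larger weight increases a point's share of the blend,
    # pulling the curve toward that point
    denom = sum(b * w for b, w in zip(basis, weights))
    x = sum(b * w * p[0] for b, w, p in zip(basis, weights, points)) / denom
    y = sum(b * w * p[1] for b, w, p in zip(basis, weights, points)) / denom
    return (x, y)

ctrl = [(0, 0), (1, 2), (2, 0)]
mid_low = rational_bezier(ctrl, [1, 1, 1], 0.5)   # uniform weights
mid_high = rational_bezier(ctrl, [1, 5, 1], 0.5)  # heavier middle point
# With the heavier weight, the curve at t=0.5 sits closer to (1, 2)
```

Evaluating at t = 0.5 with uniform weights gives (1.0, 1.0); raising the middle weight to 5 pulls the same parameter value up toward the middle control point, which is exactly the behavior described above.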
The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of modeling techniques, including:
constructive solid geometry
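Constructive solid geometry builds complex shapes by combining simple solids with boolean-style operations (union, intersection, subtraction). A minimal sketch of the idea, using signed distance functions to represent solids; the helper names are illustrative assumptions, not a real package's API:

```python
import math

def sphere(center, radius):
    """A solid as a signed distance function: negative inside, positive outside."""
    cx, cy, cz = center
    return lambda x, y, z: math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) - radius

# CSG boolean operations on signed distance functions
def union(a, b):     return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersect(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def subtract(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# Two overlapping unit spheres; their intersection is a lens-shaped solid
s1 = sphere((0, 0, 0), 1.0)
s2 = sphere((1, 0, 0), 1.0)
lens = intersect(s1, s2)
inside = lens(0.5, 0, 0) < 0  # a point in the overlap is inside the result
```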
A typical 3D character can be made up of many surfaces and components. To ensure that the character animates in the way that you want, it is important to carefully plan the process of character setup.
Character setup or rigging is the general term used for the preparation of 3D models with their accompanying joints and skeletons for animation.
Depending on the model to be animated, character setup can involve the following techniques:
Creating a skeleton with joints that acts as a framework for the 3D character model. You set limits on the joints so they rotate in a convincing manner. When you animate the character, you will be posing the character via its joints using either forward or inverse kinematic techniques (FK or IK).
Binding the 3D surfaces to the skeleton so that they move together. The process of binding may also include defining how the character’s joints bend or how the skin surfaces bulge to simulate muscles.
Defining and setting constraints for particular animated attributes in order to restrict the range of motion or to control an attribute based on the movement of another.
Grouping surface components such as CVs into sets called clusters so that parts of the character can be animated at a more detailed level.
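The forward kinematics mentioned above can be sketched for a two-joint planar chain: each joint's rotation accumulates down the skeleton, and the joint positions follow. The bone lengths and pose values are illustrative assumptions, not Maya's API:

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Return the positions of each joint in a planar chain rooted at the origin."""
    x, y, total_angle = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles):
        total_angle += angle  # each joint rotates relative to its parent
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# "Shoulder" rotated 90 degrees up, "elbow" rotated 90 degrees back down
pose = forward_kinematics([1.0, 1.0], [math.pi / 2, -math.pi / 2])
# The elbow ends up directly above the root; the hand extends out to the side
```

Inverse kinematics runs the other way: given a target position for the end of the chain, a solver computes the joint angles, which is why IK is the usual choice for placing feet and hands.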
When you set a keyframe (or key), you assign a value to an object’s attribute (for example, translate, rotate, scale, color, etc.) at a specific time.
Most animation systems use the frame as the basic unit of measurement because each frame is played back in rapid succession to provide the illusion of motion.
The frame rate (frames per second) used to play back an animation is based on the medium the animation will be played back on (for example, film, TV, or a video game).
When you set several keys at different times with different values, Maya generates the attribute values between those times as the scene plays back each frame. The result is the movement or change over time of those objects and attributes.
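The in-between values Maya generates can be sketched with the simplest case, linear interpolation between keys (Maya's animation curves actually support several tangent types; linear is just the easiest to show). The key values below are made-up illustration data:

```python
def interpolate(keys, frame):
    """keys: sorted list of (frame, value) pairs; return the value at frame."""
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)       # fraction of the way between keys
            return v0 + t * (v1 - v0)          # linearly blended value
    # Before the first or after the last key, hold the end value
    return keys[0][1] if frame < keys[0][0] else keys[-1][1]

# translateX keyed to 0.0 at frame 1 and 10.0 at frame 25 (about 1 second at 24 fps)
keys = [(1, 0.0), (25, 10.0)]
value_at_13 = interpolate(keys, 13)  # halfway in time, so halfway in value: 5.0
```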
Texture maps let you modify the appearance of your 3D models and scenes in Maya. Texture maps are images you apply and accurately position onto your surfaces using a process called texture mapping. When an image is texture mapped onto a surface, it alters the appearance of the surface in some unique way. Texture maps let you create many interesting visual effects:
You can apply labels and logos to your surfaces using texture maps.
You can apply surface relief details and features to a surface instead of having to model the details on the surface directly.
You can use illustrations as texture maps to create interesting backdrops in your scenes.
Most shading attributes for a surface material can be altered by a texture map; for example, color, specularity, transparency, and reflectivity can all be modified this way.
Texture mapping is a key component in the 3D production workflow. Many production environments employ texture artists whose only role is to create and apply the texture maps to 3D models.
There are several techniques for texture mapping 3D surfaces depending on the surface type (NURBS, polygons, subdivision surfaces). Some techniques involve preparing the surfaces for texture mapping. For example, when texture mapping polygonal and subdivision surface types you need to understand how textures are applied using UV texture coordinates.
UV texture coordinates, or UVs as they are more commonly called, are two-dimensional coordinates that reside with the vertex component information for a 3D surface. UVs control the placement of a texture map on a 3D model by correlating the pixel position of the 2D texture map to the vertex positions on the model, so that the texture gets positioned (mapped) correctly.
For NURBS surfaces, which have an inherent rectangular topology, the UV texture coordinates are implicit. That is, the UVs reside in the same locations as the control vertices, so they have a natural correlation to a rectangular-shaped texture map.
For polygonal and subdivision surface type models, which have an arbitrary surface topology, the UVs can be explicitly created and modified to suit the requirements of the texture map.
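The correlation between UVs and texture pixels described above can be sketched as a simple lookup: a UV pair in the unit square is converted to a pixel position in the texture image. The 4x4 "texture" here is a stand-in for a real image, and the helper is an illustrative assumption, not Maya's API:

```python
def uv_to_pixel(u, v, width, height):
    """Convert normalized UV coordinates to integer pixel coordinates.
    V is flipped because image rows usually run top-to-bottom while
    V conventionally increases bottom-to-top."""
    px = min(int(u * width), width - 1)
    py = min(int((1.0 - v) * height), height - 1)
    return px, py

# A tiny 4x4 stand-in texture: rows of RGB tuples
texture = [[(x * 64, y * 64, 0) for x in range(4)] for y in range(4)]

# A vertex with UV (0.9, 0.9) samples near the texture's top-right corner
px, py = uv_to_pixel(0.9, 0.9, 4, 4)
color = texture[py][px]
```

Laying out UVs for a polygonal model is essentially deciding, for every vertex, which pixel neighborhood of the 2D image it should pull from.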
In this lesson you’ll learn the basic principles of UVs by applying (mapping) an existing image (texture) to a simple polygonal model and creating and modifying the UV texture coordinates so that the texture map appears correctly on the surface.
In this lesson you learn how to:
Assign a 2D texture map to a polygonal model.
Map UV texture coordinates (UVs) to a polygonal surface.
Correlate the UVs between the scene view and the UV Texture Editor.
Use the UV Texture Editor to visualize how the UV texture coordinates from a three-dimensional model relate to an assigned two-dimensional texture map.
Determine basic UV layout requirements.
Set preferences to let you visualize the texture borders in both the UV Texture Editor and the scene view to better understand how the texture map is placed on the polygonal model.
Select and reposition UV texture coordinates within the UV Texture Editor using the transformation tools to make the UVs match a pre-defined texture map.
Sew UV texture coordinates to existing UV shells.