Ways to achieve realism in three-dimensional graphics: technologies for realistic 3D images and stages of creating a 3D image

To increase the realism of the display of textures superimposed on polygons, various technologies are used:

· anti-aliasing;

· MIP mapping;

· texture filtering.

Anti-aliasing technology

Anti-aliasing is a technology used in image processing to eliminate the effect of "stepped" edges (aliasing) of objects. With raster image formation, the image consists of pixels. Because pixels have a finite size, so-called staircases, or stepped edges, can be discerned at the edges of 3D objects. The easiest way to minimize the staircase effect is to increase the screen resolution, thereby reducing the pixel size, but this approach is not always possible. If you cannot get rid of the staircase effect by increasing the monitor resolution, you can use anti-aliasing technology, which visually smooths it out. The most common technique is to create a smooth transition from the line or edge color to the background color: the color of a point lying on the boundary of objects is computed as the average of the colors of the two boundary points.

There are several basic anti-aliasing technologies. The first to achieve high-quality results was full-screen anti-aliasing, FSAA (Full Screen Anti-Aliasing); in some literature this technology is called SSAA. Its essence is that the processor calculates the frame at a much higher resolution than the screen resolution and then, when displaying it, averages the values of each group of pixels into one; the number of averaged pixels corresponds to the monitor resolution. For example, if a frame with a resolution of 800x600 is anti-aliased using FSAA, the image is calculated at a resolution of 1600x1200. When converting to the monitor resolution, the colors of the four calculated points corresponding to one monitor pixel are averaged. As a result, all lines have smooth color transitions at their boundaries, which visually eliminates the staircase effect.
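A minimal sketch of this averaging step is shown below, assuming the supersampled frame is a list of rows of RGB tuples with integer 0-255 channels, rendered at twice the target resolution in each dimension (an illustrative layout, not a specific API's format):

def downsample_2x(frame, width, height):
    """Average each 2x2 block of a 2*width x 2*height frame into one pixel."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # The four supersampled pixels covering this output pixel.
            block = [frame[2 * y][2 * x], frame[2 * y][2 * x + 1],
                     frame[2 * y + 1][2 * x], frame[2 * y + 1][2 * x + 1]]
            # Average each color channel separately.
            row.append(tuple(sum(p[c] for p in block) // 4 for c in range(3)))
        out.append(row)
    return out

For the 800x600 example above, width = 800 and height = 600, and the frame is rendered at 1600x1200.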

FSAA does a lot of unnecessary work and loads the GPU by anti-aliasing not just the edges but the entire image, which is its main drawback. To eliminate this drawback, a more economical technology was developed: MSAA (Multisample Anti-Aliasing).

The essence of MSAA is similar to FSAA, but no extra calculations are performed for pixels located inside polygons. For pixels on the boundaries of objects, depending on the smoothing level, 4 or more additional points are calculated, from which the final pixel color is determined. This technology is currently the most widespread.

Individual developments by video adapter manufacturers are also known. For example, NVIDIA developed Coverage Sampling Anti-Aliasing (CSAA), supported only by GeForce video adapters starting from the 8th series (8600 - 8800, 9600 - 9800), while ATI introduced AAA (Adaptive Anti-Aliasing) in the R520 GPU and all subsequent ones.

MIP mapping technology

The technology is used to improve the quality of texturing of three-dimensional objects. For a 3D image to look realistic, the depth of the scene must be taken into account: as a surface recedes from the viewing point, the texture overlaid on it should look increasingly blurry. Therefore, when texturing even a homogeneous surface, not one but several textures are most often used, which makes it possible to render the perspective distortions of a three-dimensional object correctly.

For example, suppose a cobblestone street receding into the depth of the scene must be depicted. If a single texture is used along its entire length, then, as the surface recedes from the observation point, ripples or a single solid color may appear. The reason is that in this situation several texture pixels (texels) fall into one pixel of the monitor, and the question arises: which texel should be chosen when displaying that pixel?

This problem is solved by MIP mapping technology, which uses a set of textures with varying degrees of detail. From each texture, a set of textures with lower levels of detail is generated; the textures of such a set are called MIP maps.

In the simplest case of texture mapping, the appropriate MIP map is determined for each image pixel according to the LOD (Level of Detail) table. Then only one texel is selected from that MIP map, and its color is assigned to the pixel.
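A sketch of both halves of this scheme, MIP chain construction and LOD selection, is given below, assuming a square power-of-two texture stored as a list of rows of RGB tuples (an illustrative representation, not a specific library's format). Each successive MIP level halves the resolution by averaging 2x2 blocks of texels.

import math

def build_mip_chain(texture):
    """Level 0 is the original texture; each next level is half the size."""
    chain = [texture]
    while len(chain[-1]) > 1:
        prev = chain[-1]
        size = len(prev) // 2
        level = [[tuple((prev[2 * y][2 * x][c] + prev[2 * y][2 * x + 1][c] +
                         prev[2 * y + 1][2 * x][c] + prev[2 * y + 1][2 * x + 1][c]) // 4
                        for c in range(3))
                  for x in range(size)]
                 for y in range(size)]
        chain.append(level)
    return chain

def select_lod(texel_footprint):
    """Choose the MIP level for a pixel covering texel_footprint texels
    along one axis: level n halves the resolution n times."""
    return max(0, int(round(math.log2(max(1.0, texel_footprint)))))

Taking one texel from chain[select_lod(...)] reproduces the simplest scheme described above; the filtering methods below improve on it.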

Filtering technologies

Typically, MIP mapping is used in combination with filtering technologies designed to correct MIP texturing artifacts. For example, as an object recedes from the observation point, a transition occurs from a lower MIP map level to a higher one; while an object is in a transitional state between two MIP map levels, a particular type of rendering error appears: clearly visible boundaries between the MIP levels.

The idea of filtering is that the color of an object's pixels is calculated from neighboring texture points (texels).

The first texture filtering method was so-called point sampling, which is not used in modern 3D graphics. Next, bilinear filtering was developed. Bilinear filtering takes a weighted average of four adjacent texels to display a surface point. With this filtering, the quality of slowly rotating or slowly moving objects with edges (such as a cube) is low: the edges look blurry.
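A sketch of bilinear sampling under the same assumptions (texture as a list of rows of RGB tuples, coordinates already in texel space) follows; the weights fall out of the fractional position of the sample inside the texel cell.

def bilinear_sample(texture, u, v):
    """Weighted average of the four texels surrounding (u, v)."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(texture[0]) - 1)   # clamp at the texture edge
    y1 = min(y0 + 1, len(texture) - 1)
    fx, fy = u - x0, v - y0                 # fractional position in the cell

    def lerp(a, b, t):                      # mix two RGB tuples
        return tuple(a[c] + (b[c] - a[c]) * t for c in range(3))

    top = lerp(texture[y0][x0], texture[y0][x1], fx)     # mix along x
    bottom = lerp(texture[y1][x0], texture[y1][x1], fx)  # mix along x
    return lerp(top, bottom, fy)                         # mix along y

Three mixing operations are performed per sample; trilinear filtering, described next, performs this at two adjacent MIP levels and mixes the two results, giving the seven mixing operations mentioned below.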

Trilinear filtering gives higher quality. To determine the color of a pixel, it takes the average color of eight texels, four from each of two adjacent MIP maps, and the pixel color is determined as the result of seven mixing operations.

As GPU performance increased, anisotropic filtering was developed, and it is still used successfully today. When determining the color of a point, it uses a large number of texels and takes the position of the polygons into account. The level of anisotropic filtering is determined by the number of texels processed when calculating the pixel color: 2x (16 texels), 4x (32 texels), 8x (64 texels), 16x (128 texels). This filtering ensures high quality of moving images.

All these algorithms are implemented by the graphics processor of the video card.

Application Programming Interface (API)

To speed up the execution of 3D pipeline stages, a 3D graphics accelerator must have a certain set of functions, i.e., it must be able to perform, in hardware and without the participation of the central processor, the operations necessary to construct a 3D image. The set of these functions is the most important characteristic of a 3D accelerator.

Since a 3D accelerator has its own set of commands, it can be used effectively only if the application program uses these commands. But since there are many different models of 3D accelerators, as well as many different application programs that generate three-dimensional images, a compatibility problem arises: it is impossible to write a program that would make equally good use of the low-level commands of different accelerators. It is obvious that both application software developers and 3D accelerator manufacturers need a special software package that performs the following functions:

· efficient transformation of application program requests into an optimized sequence of low-level 3D accelerator commands, taking into account the features of its hardware design;

· software emulation of the requested functions if the accelerator in use has no hardware support for them.

A special software package that performs these functions is called an application programming interface (Application Programming Interface, API).

The API occupies an intermediate position between high-level application programs and the low-level accelerator commands generated by its driver. Using an API frees the application developer from working with low-level accelerator commands, simplifying the process of creating programs.

Currently there are several 3D APIs, and their areas of application are quite clearly delineated:

· DirectX, developed by Microsoft, used in gaming applications running under Windows 9X and later operating systems;

· OpenGL, used mainly in professional applications (computer-aided design systems, three-dimensional modeling systems, simulators, etc.) running under the Windows NT operating system;

· proprietary (native) APIs, created by 3D accelerator manufacturers exclusively for their own chipsets in order to use their capabilities most effectively.

DirectX is a strictly regulated, closed standard that does not allow changes until the next version is released. On the one hand, this limits the capabilities of program developers and especially of accelerator manufacturers; on the other, it greatly simplifies configuring software and hardware for 3D for the user.

Unlike DirectX, the OpenGL API is built on the concept of an open standard, with a small core set of functions and many extensions implementing more complex functions. The manufacturer of a 3D accelerator chipset is required to create a BIOS and drivers that perform the basic OpenGL functions, but is not required to support all the extensions. This gives rise to a number of problems related to manufacturers writing drivers for their products, which are supplied in both full and truncated forms.

The full version of an OpenGL-compatible driver is called an ICD (Installable Client Driver). It provides maximum performance because it contains low-level code that supports not only the basic set of functions but also its extensions. Naturally, given the OpenGL concept, creating such a driver is an extremely complex and time-consuming process; this is one of the reasons professional 3D accelerators cost more than gaming ones.

Constructing realistic images involves both physical and psychological processes. Light, that is, electromagnetic energy, enters the eye after interacting with the environment; there, as a result of physical and chemical reactions, electrical impulses are generated that are perceived by the brain. Perception is an acquired property. The human eye is a very complex system. It has an almost spherical shape with a diameter of about 20 mm. Experiments show that the eye's sensitivity to the brightness of light varies according to a logarithmic law. The range of sensitivity to brightness is extremely wide, on the order of 10^10, but the eye cannot perceive this entire range simultaneously: it responds to a much smaller range of relative brightness values, distributed around the current level of adaptation to illumination.

The speed of adaptation to brightness differs for different parts of the retina, but is nevertheless very high. The eye adjusts to the "average" brightness of the scene being viewed; therefore, an area of constant brightness (intensity) appears brighter against a dark background than against a light one. This phenomenon is called simultaneous contrast.

Another property of the eye relevant to computer graphics is that the edges of a region of constant intensity appear brighter, causing regions of constant intensity to be perceived as having variable intensity. This phenomenon is called the Mach band effect, after the Austrian physicist Ernst Mach, who discovered it. The Mach band effect is observed where the slope of the intensity curve changes abruptly: where the intensity curve is concave the surface appears lighter, and where it is convex, darker (Figure 1.1).

Fig. 1.1. Mach band effect: (a) piecewise linear intensity function; (b) intensity function with a continuous first derivative.

1.1 Simple lighting model.

Light energy incident on a surface can be absorbed, reflected, or transmitted: part of it is absorbed and converted into heat, and part is reflected or transmitted. An object can be seen only if it reflects or transmits light; if an object absorbs all incident light, it is invisible and is called a completely black body. The amount of energy absorbed, reflected, or transmitted depends on the wavelength of the light. When illuminated with white light, in which the intensities of all wavelengths are reduced approximately equally, an object appears gray. If almost all the light is absorbed, the object appears black; if only a small part is absorbed, it appears white. If only certain wavelengths are absorbed, the light coming from the object changes its energy distribution and the object appears colored; the color is determined by which wavelengths the object absorbs and which it reflects.

The properties of reflected light depend on the composition, direction, and shape of the light source and on the orientation and properties of the surface. Light reflected from an object can be diffuse or specular. Diffuse reflection occurs when light appears to penetrate beneath the surface of an object, is absorbed, and is then re-emitted; the position of the observer does not matter here, since diffusely reflected light is scattered evenly in all directions. Specular reflection occurs from the outer surface of an object.

Fig. 1.2. Lambertian diffuse reflection

The surfaces of objects rendered with a simple Lambertian diffuse reflection lighting model (Figure 1.2) appear faded and matte. The source is assumed to be a point source, so objects that direct light does not reach appear black. However, objects in real scenes are also lit by diffuse light reflected from the surroundings, for example from the walls of a room. Scattered light corresponds to a distributed source, and since calculating such sources requires a large computational effort, in computer graphics they are replaced by a scattering (ambient) coefficient.
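With these two terms, the simple lighting model referred to below is commonly written as follows (a standard textbook form consistent with this description; the notation is assumed here):

$$I = I_a k_a + I_l k_d \cos\theta, \qquad 0 \le \theta \le \pi/2,$$

where $I_a$ is the intensity of the scattered (ambient) light, $k_a$ is the ambient coefficient, $I_l$ is the intensity of the point source, $k_d$ is the diffuse reflection coefficient of the surface, and $\theta$ is the angle between the surface normal and the direction to the source.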

Let two objects be given, identically oriented relative to the source but located at different distances from it. If their intensities are found using this formula, they turn out to be the same. This means that when the objects overlap they cannot be distinguished, although the intensity of light is inversely proportional to the square of the distance from the source, and the object farther from it should appear darker. If the light source is assumed to be at infinity, the diffuse term of the lighting model vanishes. In the case of a perspective transformation of a scene, the distance from the center of projection to the object can be taken as the proportionality coefficient for the diffuse term.

But if the center of projection lies close to the object, then the difference in intensity between objects lying at approximately the same distance from the source is excessively large. Experience shows that greater realism can be achieved with linear attenuation; in this case the lighting model takes the form given below (see also Fig. 1.3).
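A common form of the linearly attenuated model (same assumed notation) is:

$$I = I_a k_a + \frac{I_l k_d \cos\theta}{d + K},$$

where $d$ is the distance to the object and $K$ is an arbitrary constant.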

Fig. 1.3. Specular reflection.

If the observation point is assumed to be at infinity, then d is determined by the position of the object closest to the observation point. This means that the nearest object is illuminated with the full intensity of the source, while more distant objects are illuminated with reduced intensity. For colored surfaces, the lighting model is applied to each of the three primary colors.

Thanks to specular reflection, highlights appear on shiny objects. Because specularly reflected light is focused along the reflection vector, the highlights move as the observer moves. Moreover, since light is reflected from the outer surface (except for metals and some solid dyes), the reflected beam retains the properties of the incident one: for example, shining white light on a shiny blue surface produces white highlights, not blue ones.
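In the common Phong formulation (an assumed but standard form), the specular term is added to the attenuated model as:

$$I = I_a k_a + \frac{I_l}{d + K}\left(k_d \cos\theta + k_s \cos^{n}\alpha\right),$$

where $k_s$ is the specular reflection coefficient, $\alpha$ is the angle between the reflection vector and the direction to the observer, and $n$ controls the sharpness of the highlight: the larger $n$ is, the smaller and sharper the highlight.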

Transparency

Basic lighting models and hidden line and surface removal algorithms consider only opaque surfaces and objects. However, there are also transparent objects that transmit light, such as a glass, a vase, a car window, or water. When passing from one medium to another, for example from air to water, a light ray is refracted; this is why a stick protruding from the water appears bent. Refraction is calculated using Snell's law, which states that the incident and refracted rays lie in the same plane and that the angles of incidence and refraction are related by the formula given below.
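Snell's law relates the two angles through the refractive indices of the media:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2,$$

where $n_1$ and $n_2$ are the refractive indices of the first and second medium, and $\theta_1$ and $\theta_2$ are the angles of incidence and refraction, both measured from the surface normal.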

No substance transmits all incident light; some of it is always reflected, as is also shown in Fig. 1.4.

Fig. 1.4. Geometry of refraction.

Just like reflection, transmission can be specular (directional) or diffuse. Directional transmission is characteristic of transparent substances such as glass: if you look at an object through such a substance, then, except at the contour lines of curved surfaces, no distortion occurs. If light is scattered as it passes through a substance, the transmission is diffuse; such substances appear translucent or matte, and an object viewed through them looks blurred or distorted.

Shadows

If the positions of the observer and the light source coincide, shadows are not visible, but they appear as soon as the observer moves to any other point. An image with shadows looks much more realistic; besides, shadows are very important for modeling. For example, an area of particular interest may be invisible because it falls into shadow. In applied areas (construction, spacecraft development, etc.) shadows affect the calculation of incident solar energy, heating, and air conditioning.

Observation shows that a shadow consists of two parts: the penumbra and the full shadow (umbra). The full shadow is the central, dark, sharply defined part, and the penumbra is the lighter part surrounding it. In computer graphics, point sources are usually considered, and they produce only a full shadow. Distributed light sources of finite size create both a full shadow and a penumbra: in the full shadow there is no light at all, while the penumbra is illuminated by part of the distributed source. Because of the high computational cost, usually only the full shadow formed by a point light source is considered. The complexity, and therefore the cost, of the calculations also depends on the location of the source. It is easiest when the source is at infinity and the shadows are determined by orthographic projection. It is more difficult when the source is at a finite distance but outside the field of view; here a perspective projection is needed. The most difficult case is when the source is within the field of view: the space must then be divided into sectors and the shadows sought separately for each sector.

To construct shadows, invisible surfaces essentially have to be removed twice: for the position of each source and for the position of the observer or viewing point, i.e., it is a two-step process. Consider the scene in Fig. 1.5. One source is located at infinity, above, in front, and to the left of the parallelepiped. The observation point lies in front, above, and to the right of the object. In this case, shadows of two kinds are formed: the object's own shadow and a projected shadow. An own shadow is produced when the object itself prevents light from reaching some of its faces, for example the right side of the parallelepiped. Here the algorithm for constructing shadows is similar to the algorithm for removing back faces: faces shaded by their own shadow are the back faces when the observation point is placed at the light source.

Fig. 1.5. Shadows.

If one object prevents light from reaching another, a projected shadow is produced, for example the shadow on the horizontal plane in Fig. 1.5, b. To find such shadows, projections of all back faces onto the scene must be constructed, with the center of projection at the light source. The points of intersection of a projected face with all other planes form polygons, which are marked as shadow polygons and entered into the data structure. To avoid introducing too many polygons, the outline of each object can be projected instead of its individual faces.

After the shadows have been added to the data structure, a view of the scene from a given observation point is constructed as usual. Note that creating different views does not require recalculating the shadows, since they depend only on the position of the source and not on the position of the observer.

Development of algorithms

The founders of computer graphics developed a certain concept: to form a three-dimensional image based on a set of geometric shapes. Typically, triangles are used for this purpose, less often - spheres or paraboloids. Geometric shapes are solid, and the foreground geometry obscures the background geometry. Then the time came for the development of virtual lighting, thanks to which flat shaded areas appeared on virtual objects, giving computer images clear contours and a somewhat man-made appearance.

Henri Gouraud proposed averaging the coloring between vertices to produce a smoother image. This form of shading requires minimal computation and is currently used by most graphics cards, but at the time of its invention in 1971, computers could render only simple scenes this way.

In 1974, Ed Catmull introduced the concept of the Z-buffer: an image is made up of horizontal (X) and vertical (Y) elements, each of which also has a depth. This sped up the removal of hidden surfaces and is now standard in 3D accelerators. Another of Catmull's inventions was wrapping a two-dimensional image around three-dimensional geometry. Projecting a texture onto a surface is the main way of giving a realistic appearance to a three-dimensional object. Initially, objects were uniformly painted one color, so that, for example, creating a brick wall required modeling each brick and the filling between them individually. Today such a wall can be created by assigning a bitmap image of a brick wall to a simple rectangular object. This requires minimal computation and computer resources, not to mention a significant reduction in working time.
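A minimal sketch of the Z-buffer idea described above: each screen position stores the depth of the nearest fragment drawn so far, and a new fragment is written only if it is closer. The buffer sizes and the plot interface are illustrative assumptions.

WIDTH, HEIGHT = 640, 480
FAR = float("inf")

color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Write the fragment at (x, y) only if it is nearer than what is stored."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z          # remember the new nearest depth
        color_buffer[y][x] = color      # and its color

Whatever order the triangles are drawn in, each pixel ends up showing the nearest surface, which is exactly why the technique removes hidden surfaces.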

Bui Tuong Phong improved on Gouraud's shading by interpolating shades across the entire surface of a polygon, not just the areas directly adjacent to the edges. Although rendering in this case is about a hundred times slower than with the previous shading method, the resulting objects acquire the "plastic" look characteristic of early computer animation. Maya uses two variants of Phong shading.

James Blinn combined elements of Phong shading and texture projection to create bump mapping in 1976. If Phong shading can be applied to a surface and a texture map can be projected onto it, why not use shades of gray, interpreted as perturbations of the surface normal directions, to create a relief effect? Lighter shades of gray are perceived as bumps, darker shades as depressions, while the geometry of the object remains unchanged, as its silhouette shows.

Blinn also developed a method of using environment maps to generate reflections. He proposed creating a cubic environment map by rendering six projections from the center of an object. These images are then projected back onto the object, but with fixed coordinates, so that the reflection does not move together with the object; as a result, the object's surface reflects its environment. For the effect to work, there should be no rapid movement of environment objects during the animation. In 1980, Turner Whitted proposed a new rendering technique called ray tracing: tracking the paths of individual light rays from the light source to the camera lens, taking into account their reflection from objects in the scene and their refraction in transparent media. Although this method requires significant computer resources, the image it produces is very realistic and precise.

In the early 80s, when computers began to be used more often in various fields of activity, attempts began to use computer graphics in the entertainment field, including cinema. This required special hardware and super-powerful computers, but a start had been made. By the mid-1980s, SGI began producing high-performance workstations for scientific research and computer graphics.

Alias was founded in Toronto in 1984. The name has two meanings: first, it translates as "pseudonym", because in those days the company's founders had to work part-time; second, the term describes the jagged edges of an image in computer graphics. Initially the company focused on producing software for modeling and developing complex surfaces. Later, Power Animator was created, a powerful and expensive product that many in the industry considered the best available at the time.

Wavefront, founded in Santa Barbara in 1984, takes its name from the term "wave front". The company immediately began developing software for 3D visual effects and producing graphics for Showtime, Bravo, and National Geographic Explorer. The first application created by Wavefront was called Preview. Then, in 1988, the Softimage program was released and quickly gained popularity in the computer graphics market. All the software and hardware used to create animation in the 1980s was specialized and very expensive, and by the end of the decade only a few thousand people in the world worked in visual effects modeling. Almost all of them worked on computers manufactured by Silicon Graphics and used software from Wavefront, Softimage, and others.

Thanks to the advent of personal computers, the number of people creating computer animation began to grow. Software for working with 3D images began to be developed for the IBM PC, Amiga, Macintosh, and even Atari. In 1986, AT&T released the first package for working with animation on personal computers, called TOPAS. It cost $10,000 and ran on computers with an Intel 286 processor and the DOS operating system. These computers made it possible to create animation freely, despite the primitive graphics and the relatively low calculation speed. The following year, another personal computer 3D graphics system, Electric Image, was released for the Apple Macintosh. In 1990, Autodesk began selling 3D Studio, a product created by the Yost Group, an independent team that developed graphics products for Atari. 3D Studio cost only $3,000, which in the eyes of personal computer users made it a worthy competitor to TOPAS. A year later, NewTek's Video Toaster appeared together with the easy-to-use LightWave program; both required Amiga computers. These programs were in great demand and sold thousands of copies. By the early 1990s, creating computer animation had become accessible to a wide range of users, and anyone could experiment with animation and ray tracing effects. Stephen Coy's Vivid program, which reproduces ray tracing effects, can now be downloaded free, as can the Persistence of Vision Raytracer, better known as POVRay. The latter gives children and novice users a wonderful opportunity to get acquainted with the basics of computer graphics.

Films with stunning special effects demonstrate a new stage in the development of computer graphics and visualization. Unfortunately, most users believe that creating impressive animations depends entirely on the power of the computer. This misconception still exists today.

As the market for 3D graphics applications grew and competition increased, many companies combined their technologies. In 1993, Wavefront merged with Thomson Digital Image, whose products used NURBS curve modeling and interactive visualization; these features later became the basis for interactive photorealistic rendering in Maya. In 1994, Microsoft bought Softimage and released a version of the product for Windows NT on Pentium computers. This event can be considered the beginning of the era of three-dimensional graphics programs that are inexpensive and accessible to the average personal computer user. In response, SGI bought and merged Alias and Wavefront in 1995 to prevent a loss of interest in applications that ran exclusively on SGI's dedicated computers. Almost immediately, the new company, Alias|Wavefront, began combining the technologies at its disposal to create an entirely new program. Finally, in 1998, Maya was released for the IRIX operating system on SGI workstations, costing between $15,000 and $30,000. The program was written from scratch and offered a new way of developing animation, with an open application programming interface (API) and enormous extensibility. Despite SGI's original intention to remain the exclusive platform for Maya, a version for Windows NT was released in February 1999. The old pricing scheme was scrapped, and the base Maya package came to cost just $7,500. Maya 2 appeared in April of the same year, and in November Maya 2.5, containing the Paint Effects module. In the summer of 2000, Maya 3 was released, adding the ability to create non-linear animation using the Trax editor. In early 2001, versions of Maya for Linux and Macintosh were announced, and in June Maya 4 began shipping for IRIX and Windows NT/2000.

Maya is a program for creating 3D graphics and animation based on models created by the user in a virtual space, illuminated by virtual light sources, and shown through virtual camera lenses. There are two main versions of the program: Maya Complete (costing $7,500 at the time of writing) and Maya Unlimited ($16,000), which includes some specialized features. Maya runs on PCs under Windows NT/2000, as well as under Linux, IRIX, and even the Macintosh operating system. The program allows you to create photorealistic raster images, similar to those produced by a digital camera. Work on any scene begins with empty space; almost any parameter can be made to change over time, and rendering a set of frames then produces an animated scene.

Maya is superior to many packages currently available on the market for working with 3D animation. The program is used to create effects in a large number of films, has a wide range of applications in the areas listed above, and is considered one of the best in the field of animation, despite being difficult to learn. Currently, Maya's main competitors are LightWave, Softimage XSI, and 3ds max, which cost between $2,000 and $7,000. Programs costing less than $1,000 include trueSpace, Inspire 3D, Cinema 4D, Vgoose, and Animation Master.

Most of these programs work well on personal computers and have versions for various operating systems, such as the Macintosh. It is quite difficult to compare them directly, but in general, the more complex the program, the more complex the animation it allows you to create and the easier it is to model complex objects or processes.

Construction of a three-dimensional image

With the growth of computing power and the availability of memory, and with the advent of high-quality graphic terminals and output devices, a large group of algorithms and software solutions was developed that allow an image representing a three-dimensional scene to be formed on the screen. The first such solutions were intended for architectural and mechanical engineering design problems.

When a three-dimensional image (static or dynamic) is formed, it is constructed within a certain coordinate space called a scene. The scene implies working in a three-dimensional world, which is why this field is called three-dimensional (3-Dimensional, 3D) graphics.

Individual objects composed of geometric solids and patches of complex surfaces (most often so-called B-splines) are placed in the scene. To form the image and perform further operations, the surfaces are divided into triangles, the minimal flat figures, and are subsequently processed precisely as sets of triangles.

At the next stage, the "world" coordinates of the mesh nodes are recalculated, by means of matrix transformations, into "view" coordinates, i.e., coordinates that depend on the point from which the scene is viewed. The position of the viewing point is usually called the camera position.
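In homogeneous coordinates this recalculation is a single matrix product (a standard formulation; the notation here is assumed rather than taken from the text):

$$\begin{pmatrix} x_v \\ y_v \\ z_v \\ 1 \end{pmatrix} = M_{view} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix},$$

where $(x_w, y_w, z_w)$ are the world coordinates of a mesh node, $(x_v, y_v, z_v)$ are its view coordinates, and $M_{view}$ is a 4x4 matrix combining the rotation and translation that place the camera.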

Workspace of the Blender 3D graphics system (example from the site http://www.blender.org)

After the wireframe ("wire mesh") has been formed, the surfaces are painted over: the surfaces of objects are given certain properties. The properties of a surface are determined primarily by its light characteristics: luminosity, reflectance, absorptivity, and scattering ability. This set of characteristics makes it possible to define the material whose surface is being modeled (metal, plastic, glass, etc.). Transparent and translucent materials have a number of additional characteristics.

Typically, invisible surfaces are also cut off during this procedure. There are many methods for performing such culling, but the most popular has become the Z-buffer method, in which an array of numbers indicating "depth" is created: the distance from a point on the screen to the first opaque point. Subsequent surface points are processed only if their depth is smaller, in which case the stored Z value decreases. The power of this method depends directly on the maximum possible distance of a scene point from the screen, i.e., on the number of bits per point in the buffer.

Calculation of a realistic image. Performing these operations produces so-called solid models of objects, but such an image is not yet realistic. To form a realistic image, light sources are placed in the scene and an illumination calculation is performed for every point of the visible surfaces.

To give objects realism, their surfaces are "covered" with a texture: an image (or a procedure that generates one) that determines the nuances of their appearance. The procedure is called "texture mapping". During texture application, stretching and smoothing techniques, i.e., filtering, are applied. For example, anisotropic filtering, mentioned above in the description of video cards, does not depend on the direction of the texture transformation.

After all the parameters have been determined, the image formation procedure must be performed, i.e., the color of the points on the screen must be calculated. This calculation procedure is called rendering. When performing it, the light falling on each point of the model must be determined, taking into account that the light can be reflected and that the surface may block other areas from the source, and so on.

There are two main methods used to calculate illumination. The first is the method of backward ray tracing. In this method, the trajectories of the rays that ultimately hit the screen pixels are traced in reverse, from the screen into the scene. The calculation is carried out separately for each color channel, since light of different wavelengths behaves differently on different surfaces.
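A heavily simplified sketch of backward ray tracing in this spirit is shown below. For each pixel, a ray is cast from the camera into a scene of spheres, the nearest intersection is found, and a Lambertian term is evaluated per color channel; shadows, reflections, and refraction are omitted. All names and the scene layout are illustrative assumptions.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def trace(origin, direction, spheres, light_dir):
    """origin, direction: the ray; spheres: (center, radius, color) triples;
    light_dir: unit vector from the surface toward the light (assumed)."""
    nearest, hit_t = None, math.inf
    for center, radius, color in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        a = dot(direction, direction)
        b = 2 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4 * a * c                  # ray-sphere discriminant
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / (2 * a)  # nearer intersection
            if 0 < t < hit_t:
                nearest, hit_t = (center, radius, color), t
    if nearest is None:
        return (0, 0, 0)                          # ray escaped: background
    center, radius, color = nearest
    point = [o + hit_t * d for o, d in zip(origin, direction)]
    normal = [(p - c) / radius for p, c in zip(point, center)]
    diffuse = max(0.0, dot(normal, light_dir))    # Lambertian cosine term
    return tuple(ch * diffuse for ch in color)    # per-channel shading

For each screen pixel, origin is the camera position and direction points through that pixel on the image plane; repeating the call per pixel yields the frame.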

The second method, the radiosity method, involves calculating the integral luminosity of all the areas that fall into the frame and the exchange of light between them.

The resulting image takes into account the specified characteristics of the camera, i.e., of the viewer.

Thus, as a result of a large number of calculations, it becomes possible to create images that are difficult to distinguish from photographs. To reduce the number of calculations, they try to reduce the number of objects and, where possible, replace the calculation with photography; for example, when forming the background of an image.

Solid model and the final result of the model calculation
(example from the site http://www.blender.org)

Animation and virtual reality

The next step in the development of 3D realistic graphics technologies was the possibility of animation - movement and frame-by-frame changes in the scene. Initially, only supercomputers could handle such a volume of calculations, and they were used to create the first three-dimensional animation videos.

Later, hardware designed specifically for these calculations and for image formation was developed: 3D accelerators. This made it possible to perform such image formation, in a simplified form, in real time, which is what modern computer games use. In fact, even ordinary video cards now include such tools and are a kind of special-purpose mini-computer.

When creating games, shooting films, developing simulators, and in tasks of modeling and designing various objects, forming a realistic image has another significant aspect: modeling not just the movement and changes of objects, but their behavior, in accordance with the physical principles of the surrounding world.

This direction, taking into account the use of all kinds of hardware for transmitting the influences of the outside world and increasing the effect of presence, is called virtual reality.

To implement such realism, special methods are created for calculating parameters and transforming objects - changes in the transparency of water due to its movement, calculation of the behavior and appearance of fire, explosions, collisions of objects, etc. Such calculations are quite complex, and a number of methods have been proposed for their implementation in modern programs.

One of them is the processing and use of shaders: procedures that change the illumination (or the exact position) of key points according to some algorithm. Such processing makes it possible to create effects such as a "glowing cloud" or an "explosion", to increase the realism of complex objects, and so on.

Interfaces for working with the "physical" component of image formation have appeared and are being standardized, which makes it possible to increase the speed and accuracy of such calculations, and therefore the realism of the created model of the world.

3D graphics is one of the most spectacular and commercially successful areas of information technology development, and it is often cited as one of the main drivers of hardware development. Three-dimensional graphics tools are actively used in architecture, mechanical engineering, scientific work, filmmaking, computer games, and teaching.

Examples of software products

Maya, 3DStudio, Blender

The topic is very attractive to students of any age and comes up at all stages of studying a computer science course. Its attractiveness for students is explained by the large creative component of the practical work, the visual results, and the broad applied focus of the topic. Knowledge and skills in this area are required in almost all sectors of human activity.

In basic school, two types of graphics are considered: raster and vector. Questions of distinguishing one type from the other are discussed and, consequently, their advantages and disadvantages. The areas of application of these types of graphics allow introducing the names of specific software products for processing each type. Materials on the topics of raster graphics, color models, and vector graphics are therefore in greater demand in primary school. In high school, this topic is supplemented by the features of scientific graphics and the possibilities of three-dimensional graphics, so the following topics become relevant: photorealistic images, modeling of the physical world, and compression and storage of graphic and streaming data.

Most of the time is spent on practical work: preparing and processing graphic images using raster and vector graphics editors. In basic school, these are usually Adobe Photoshop, CorelDraw, and/or Macromedia Flash. The difference between studying particular software packages in basic and in high school shows up not so much in the content as in the forms of work. In basic school, these are practical (laboratory) exercises through which students master the software product. In high school, the main form of work becomes an individual workshop or project, where the main component is the content of the task at hand, and the software products used to solve it remain only a tool.

Exam tickets for primary and secondary school contain questions related both to the theoretical foundations of computer graphics and to practical skills in processing graphic images. Such parts of the topic as calculating the information volume of graphic images and the features of graphics encoding are present in the control measurement materials of the unified state exam.

You are probably reading this article on the screen of a computer monitor or mobile device: a display that has real dimensions, height and width. But when you watch, for example, the cartoon Toy Story, or play a game like Tomb Raider, you see a three-dimensional world. One of the most amazing things about a 3D world is that the world you see could be the world we live in, the world we will live in tomorrow, or a world that lives only in the minds of a movie's or game's creators. And all these worlds can appear on a single screen, which is at least intriguing.
How does a computer trick our eyes into thinking that, while looking at a flat screen, we see the depth of the picture being presented? How do game developers make us see real characters moving around a real landscape? Today I will tell you about the visual tricks graphic designers use and how it is all engineered so that it seems so simple to us. In fact, it is not simple at all, and to find out what 3D graphics is, read on: there you will find a fascinating story which, I am sure, you will immerse yourself in with unprecedented pleasure.

What makes an image three-dimensional?

An image that has, or appears to have, height, width, and depth is three-dimensional (3D). A picture that has height and width but no depth is two-dimensional (2D). Where do you find two-dimensional images? Almost everywhere. Think of the ordinary symbol on a restroom door indicating a stall for one gender or the other. The symbols are designed so that you can recognize them at a glance, which is why they use only the most basic shapes. A more detailed symbol might tell you what kind of clothes the little figure on the door is wearing, or the color of their hair, as in the symbol on a women's restroom door. This is one of the main differences between the ways 3D and 2D graphics are used: 2D graphics are simple and memorable, while 3D graphics use more detail and pack significantly more information into a seemingly ordinary object.

For example, a triangle has three lines and three angles: all that is needed to say what a triangle consists of and what it represents. Now look at the triangle from another point of view: a pyramid is a three-dimensional structure with four triangular sides. Note that it already has six lines and four corners: that is what the pyramid consists of. See how an ordinary object can become three-dimensional and contain much more information, which is what is needed to tell the story of a triangle or a pyramid.

For hundreds of years, artists have used visual tricks that can make a flat 2D image seem like a window into the real 3D world. You can see a similar effect in an ordinary photograph scanned and viewed on a computer monitor: objects in the photograph appear smaller when they are farther away; objects close to the camera lens are in focus, and everything behind them is blurred; colors tend to be less vibrant when the subject is farther away. When we talk about 3D graphics on computers today, however, we are talking about images that move.

What is 3D graphics?

For many of us, games on a personal computer, a mobile device, or a dedicated gaming system are the most striking example of, and the most common way to encounter, three-dimensional graphics. All these computer-generated games and cool movies must go through three basic steps to create and present realistic 3D scenes:

  1. Creating a virtual 3D world
  2. Determining which part of the world will be shown on the screen
  3. Determining what a pixel on the screen will look like so that the full image appears as realistic as possible
Creating a virtual 3D world
The virtual 3D world is, of course, not the same as the real world. Creating a virtual 3D world is complex work: a computer visualization of a world similar to the real one, built with a large number of tools and implying extremely high detail. Take a very small part of the real world: your hand and the desk beneath it. Your hand has specific qualities that determine how it can move and how it looks. The finger joints bend toward the palm, not away from it. If you hit the desk, nothing happens to it: the desk is solid, and your hand cannot pass through it. You can verify that these statements are true by looking at nature, but in a virtual three-dimensional world things are completely different: there is no nature there, and no natural things like your hand. Objects in the virtual world are entirely synthetic: the only properties they have are those given to them by software. Programmers use special tools and design 3D virtual worlds with extreme care to ensure that everything behaves in a certain way at all times.

What part of the virtual world is shown on the screen?
At any given moment, the screen shows only a tiny part of the virtual 3D world created for a computer game. What is shown is determined by the way the world is defined combined with where you choose to go and what you choose to look at. No matter where you go (forward or backward, up or down, left or right), the virtual 3D world around you determines what you see from that position. And what you see has to make sense from one scene to the next: if you look at an object from the same distance, regardless of direction, it should appear the same size. Every object must look and move in such a way that you believe it has the same mass as the real object and is as hard or soft as the real object, and so on.

The programmers who write computer games put enormous effort into designing 3D virtual worlds so that you can wander around in them without encountering anything that makes you think, "That could not happen in this world!" The last thing you want to see is two solid objects passing right through each other: a stark reminder that everything you are seeing is a sham. The third step involves at least as many calculations as the other two and must also occur in real time.

Lighting and perspective

When you enter a room, you turn on the light. You probably do not spend much time wondering how it actually works: how light leaves the lamp and travels around the room. But people working with 3D graphics have to think about it, because all the surfaces, surrounding frames, and other such things need to be lit. One method, ray tracing, traces the paths that light rays take as they leave a light bulb, bounce off mirrors, walls, and other reflective surfaces, and finally land on objects with varying intensities and from different angles. This is difficult enough for a single bulb, and most rooms use several light sources: lamps, ceiling fixtures (chandeliers), floor lamps, windows, candles, and so on.

Lighting plays a key role in two effects that give objects the appearance of weight and solidity: shading and shadows. The first effect, shading, occurs where more light falls on an object from one side than from the other. Shading gives a subject a great deal of naturalism: it is what makes the folds of a blanket look deep and soft and high cheekbones look striking. These differences in light intensity reinforce the overall illusion that an object has depth as well as height and width. The illusion of mass comes from the second effect, the shadow.

Solid objects cast shadows when light falls on them. You can see this by observing the shadow that a sundial or a tree casts on a sidewalk. We are accustomed to seeing real objects and people cast shadows. In 3D, a shadow again reinforces the illusion, creating the effect of being in the real world rather than looking at a screen of mathematically generated shapes.

Perspective
Perspective is one word that can mean many things, but it actually describes a simple effect everyone has seen. If you stand at the side of a long, straight road and look into the distance, the two sides of the road appear to converge at a point on the horizon. Also, if there are trees along the road, the more distant trees look smaller than the ones near you. The trees appear to converge toward the same point on the horizon to which the road leads, although in reality they do not. When all the objects in a scene appear to converge toward a single point in the distance, that is perspective. There are many variations of the effect, but most 3D graphics use the single point of view just described.
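In the idealized pinhole model (a standard formulation, with notation assumed here), perspective amounts to dividing by depth when projecting a scene point onto the screen:

$$x' = \frac{f\,x}{z}, \qquad y' = \frac{f\,y}{z},$$

where $(x, y, z)$ are the camera-space coordinates of the point, $z$ is its distance along the viewing axis, $f$ is the distance to the projection plane, and $(x', y')$ are the screen coordinates. As $z$ grows, $x'$ and $y'$ shrink toward the vanishing point, which is exactly the convergence described above.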

Depth of field


Another optical effect successfully used to create three-dimensional graphics is depth of field. Returning to the example with the trees, there is another interesting thing: if you focus on the trees close to you, the trees farther away appear out of focus. Film directors and computer animators use depth of field for two purposes. The first is to enhance the illusion of depth in the scene being viewed. The second is to focus the viewer's attention on the subjects or actors considered most important. To draw your attention to the film's heroine, for example, a "shallow depth of field" may be used, where only the actress is in focus. A scene designed to impress with its entirety will instead use a "deep depth of field" to keep as many objects as possible in focus and thus visible to the viewer.

Anti-aliasing

Another technique that relies on tricking the eye is anti-aliasing. Digital graphics systems are very good at creating clean vertical and horizontal lines. But diagonal lines appear quite often in the real world, and on them the computer produces lines more reminiscent of staircases (you know what a staircase looks like when you examine an image in detail). So, to trick your eye into seeing a smooth line or curve, the computer can add shades of the line's color to the rows of pixels surrounding it. With these "gray" pixels, the computer effectively deceives your eyes into thinking there are no jagged steps. This process of adding extra colored pixels to trick the eye is called anti-aliasing, and it is one of the techniques used to craft realistic 3D computer graphics. Another challenging task for a computer is creating 3D animation, an example of which will be presented in the next section.
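A toy sketch of this "gray pixel" trick: a pixel's shade is taken from how much of the pixel an ideal line covers, so the staircase softens into a gradient. Estimating coverage by supersampling each pixel on a 4x4 grid is an illustrative choice, not the only one.

def coverage(px, py, line_y_at):
    """Fraction of pixel (px, py) lying below the line y = line_y_at(x)."""
    hits, samples = 0, 4
    for sy in range(samples):
        for sx in range(samples):
            x = px + (sx + 0.5) / samples     # sample point inside the pixel
            y = py + (sy + 0.5) / samples
            if y > line_y_at(x):              # below the ideal line
                hits += 1
    return hits / (samples * samples)

# A pixel 40% covered by the line gets a 40%-dark gray instead of pure
# black or white, e.g. for the line y = 0.35x + 3:
shade = round(255 * (1 - coverage(10, 7, lambda x: 0.35 * x + 3)))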

Real examples

When all the tricks described above are used together to create a stunningly realistic scene, the result justifies the effort. In the latest games and movies, machine-generated objects are combined with photographic backgrounds to strengthen the illusion. You can see the amazing results by comparing a photograph with a computer-generated scene.

The photo above shows a typical office with a sidewalk leading to its entrance. In one of the following photographs, a plain ball was placed on the sidewalk and the scene was photographed. The third image was produced with a computer graphics program, which created a ball that does not actually exist in the photo. Can you tell whether there are any significant differences between the two pictures? I think not.

Creating animation and live action appearances

So far we have looked at tools that make any digital image appear more realistic, whether the image is a still or part of an animated sequence. If it is an animated sequence, programmers and designers use still more visual tricks to make it look like "live action" rather than computer-generated images.

How many frames per second?
When you go to see a blockbuster at the local cinema, the sequence of images called frames runs at 24 frames per second. Since our retinas retain an image slightly longer than 1/24 of a second, most people's eyes blend the frames into one continuous image of movement and action.

If that did not make sense, look at it this way: each frame of a movie is a photograph taken at a shutter speed (exposure) of 1/24 of a second. That is why, if you look at a single frame of a movie about racing, some of the racing cars are "blurred": they were moving at high speed while the camera's shutter was open. This blurring of fast-moving things is what we are used to seeing, and it is part of what makes an image look real to us on a screen.


However, digital 3D images are not photographs, so no blurring occurs when a subject moves in a frame. To make images more realistic, blur must be added explicitly by programmers. Some designers believe that more than 30 frames per second are needed to "overcome" this lack of natural blur, which is why games have pushed on to the next level: 60 frames per second. While this allows each individual image to be rendered in great detail and shows moving objects in smaller increments, it significantly increases the number of frames for a given animated action sequence. There are also certain kinds of imagery where exact computer rendering must be sacrificed for the sake of realism; this applies to both moving and stationary objects, but that is a different story.

Let's wrap up


Computer graphics continues to amaze the world by creating and generating a wide variety of truly realistic moving and still objects and scenes. From 80 columns and 25 lines of monochrome text, graphics have come a long way, and the result is clear: millions of people play games and run all kinds of simulations with today's technology. New 3D processors will also make their presence felt: thanks to them, we will be able literally to explore other worlds and experience things we never dared to try in real life. Finally, back to the example with the ball: how was that scene created? The answer is simple: the image contains a computer-generated ball. It is not easy to say which of the two is genuine, is it? Our world is amazing, and we must live up to it. I hope you found this interesting and have learned another piece of curious information.

3D modeling and visualization are needed when manufacturing products or their packaging, as well as when creating product prototypes and 3D animation.

Thus, 3D modeling and visualization services are provided when:

  • an assessment of the physical and technical features of a product is needed before it is created at full size, in its final material and configuration;
  • a 3D model of a future interior needs to be created.

In such cases, you will definitely have to resort to the services of specialists in the field of 3D modeling and visualization.

3D models are an integral component of high-quality presentations and technical documentation, as well as the basis for creating a product prototype. The peculiarity of our company is the ability to carry out a full cycle of work to create a realistic 3D object: from modeling to prototyping. Since all the work can be carried out as a single package, this significantly reduces the time and cost of searching for contractors and drawing up new technical specifications.

If we are talking about a product, we will help you release a trial series and set up further production, small-scale or industrial scale.

Definition of the concepts “3D modeling” and “visualization”

3D graphics, or 3D modeling, is computer graphics that combines the techniques and tools necessary to create three-dimensional objects in a technical space.

Techniques here mean the methods of forming a three-dimensional graphic object: calculating its parameters, drawing a "skeleton" or a non-detailed three-dimensional form, extruding, extending and cutting parts, and so on.

The tools are professional 3D modeling programs: first of all SolidWorks, Pro/ENGINEER and 3ds Max, as well as some other programs for volumetric visualization of objects and space.

3D visualization (rendering) is the creation of a two-dimensional raster image based on the constructed 3D model. At its core, it is the most realistic possible image of a three-dimensional graphic object.

Applications of 3D modeling:

  • Advertising and Marketing

Three-dimensional graphics are indispensable for presenting a future product. Before production begins, the object is drawn and a 3D model of it is created. Then, based on the 3D model, rapid prototyping technologies (3D printing, milling, silicone mold casting, etc.) are used to create a realistic prototype (sample) of the future product.

After rendering (3D visualization), the resulting image can be used when developing packaging design or when creating outdoor advertising, POS materials and exhibition stand design.

  • Urban planning

Three-dimensional graphics make it possible to model urban architecture and landscapes with maximum realism at minimal cost. Visualization of building architecture and landscape design allows investors and architects to experience the effect of presence in the designed space. This makes it possible to assess the merits of a project objectively and eliminate its shortcomings.

  • Industry

Modern production cannot be imagined without pre-production modeling of products. With the advent of 3D technologies, manufacturers have the opportunity to significantly save materials and reduce financial costs for engineering design. Using 3D modeling, graphic designers create three-dimensional images of parts and objects, which can later be used to create molds and prototypes of the object.

  • Computer games

3D technology has been used in the creation of computer games for more than ten years. In professional programs, experienced specialists manually draw three-dimensional landscapes, models of characters, animate created 3D objects and characters, and also create concept art (concept designs).

  • Cinema

The entire modern film industry is focused on cinema in 3D format. For such filming, special cameras are used that can shoot in 3D format. In addition, with the help of 3D graphics, individual objects and full-fledged landscapes are created for the film industry.

  • Architecture and interior design

3D modeling technology in architecture has long proven its worth. Today, creating a three-dimensional model of a building is an indispensable design attribute. Based on the 3D model, a building prototype can be created: either a rough prototype reproducing only the general outlines of the building, or a detailed prefabricated model of the future structure.

As for interior design, using 3D modeling technology, the customer can see what his home or office space will look like after renovation.

  • Animation

Using 3D graphics, you can create an animated character, "make" it move, and, by designing complex animation scenes, create a full-fledged animated video.

Stages of developing a 3D model

The development of a 3D model is carried out in several stages:

1. Modeling or creating model geometry

We are talking about creating a three-dimensional geometric model, without taking into account the physical properties of the object. The following techniques are used:

  • extrusion;
  • modifiers;
  • polygonal modeling;
  • rotation (lathing) - see the sketch after this list.
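As a hedged illustration of the rotation (lathe) technique, the following Python sketch revolves a 2D profile around the vertical axis to produce the vertex rings of a solid of revolution (the profile and function names are invented for the example):

```python
import math

def lathe(profile, segments=16):
    """Revolve a 2D profile (list of (radius, height) pairs) around the
    vertical axis, producing a ring of 3D vertices per profile point."""
    vertices = []
    for r, h in profile:
        for i in range(segments):
            a = 2 * math.pi * i / segments
            vertices.append((r * math.cos(a), h, r * math.sin(a)))
    return vertices

# A crude vase profile: (radius, height) pairs from bottom to top.
vase = [(0.5, 0.0), (0.9, 0.5), (0.6, 1.0), (0.3, 1.5)]
verts = lathe(vase)
print(len(verts))  # 4 profile points x 16 segments = 64 vertices
```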

2. Texturing an object

The level of realism of the future model depends directly on the choice of materials when creating textures. Professional programs for working with three-dimensional graphics impose practically no limits on the possibilities for creating realistic images.
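At its simplest, texturing maps each surface point to UV coordinates and looks up a color in the texture image. Below is a minimal nearest-neighbour sampling sketch (Python with NumPy; the checkerboard texture is a toy example):

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a texel for UV coordinates in [0, 1].

    `texture` is an HxWx3 array; u runs along the width, v along the height."""
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

# Toy 2x2 checkerboard texture.
checker = np.array([[[1, 1, 1], [0, 0, 0]],
                    [[0, 0, 0], [1, 1, 1]]], dtype=float)
print(sample_texture(checker, 0.25, 0.25))  # white texel
print(sample_texture(checker, 0.75, 0.25))  # black texel
```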

3. Setting up light and observation point

One of the most difficult stages in creating a 3D model. The realistic perception of the image depends directly on the choice of light tone, brightness level, and the sharpness and depth of shadows. In addition, an observation point must be selected for the object. This can be a bird's-eye view, or the space can be scaled to achieve the effect of presence in it by placing the viewpoint at human eye level.
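As a small illustration of how light placement affects the image, here is a sketch of Lambertian (diffuse) shading, one of the simplest lighting models: surface brightness falls off with the cosine of the angle between the surface normal and the light direction (all values are invented for the example):

```python
import numpy as np

def lambert(normal, light_dir, light_color, albedo):
    """Diffuse (Lambertian) shading: brightness falls off with the cosine
    of the angle between the surface normal and the light direction."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_color * max(np.dot(n, l), 0.0)

normal = np.array([0.0, 1.0, 0.0])   # surface facing straight up
light = np.array([0.0, 1.0, 1.0])    # light from above and behind
print(lambert(normal, light, np.ones(3), np.array([0.8, 0.2, 0.2])))
```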

4. 3D visualization or rendering

The final stage of 3D modeling. It consists of fine-tuning the display settings of the 3D model, that is, adding graphic special effects such as glare, fog, shine, etc. In the case of video rendering, the exact parameters of the 3D animation of characters, details, landscapes, etc. are determined (timing of color changes, glow, and so on).

At the same stage, the visualization settings are finalized: the required number of frames per second and the format of the final video are selected (for example, DivX, AVI, Cinepak, Indeo, MPEG-1, MPEG-2, MPEG-4, WMV, etc.). If a two-dimensional raster image is required, its format and resolution are chosen instead, mainly JPEG, TIFF or RAW.

5. Post-production

Processing the rendered images and videos using media editors: Adobe Photoshop, Adobe Premiere Pro (or Final Cut Pro / Sony Vegas), GarageBand, iMovie, Adobe After Effects, Adobe Illustrator, Samplitude, Sound Forge, WaveLab, etc.

Post-production involves giving media files original visual effects whose purpose is to capture the mind of a potential consumer: to impress, arouse interest, and be remembered for a long time!

3D modeling in foundry

In foundry production, 3D modeling is gradually becoming an indispensable technological component of the product creation process. If we are talking about casting into metal molds, then 3D models of such molds are created using 3D modeling technologies, as well as 3D prototyping.

Casting in silicone molds is gaining no less popularity today. In this case, 3D modeling and visualization help create a prototype of the object, on the basis of which a mold is made from silicone or another material (wood, polyurethane, aluminum, etc.).

3D visualization methods (rendering)

1. Rasterization.

One of the simplest rendering methods. It does not take additional visual effects into account (for example, how the color and shadow of an object vary relative to the observation point).
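As a hedged sketch of the core of rasterization, the following Python fragment fills a triangle by testing each pixel center against the triangle's three edges (a standard edge-function test; the raster size and vertices are invented for the example):

```python
import numpy as np

def edge(a, b, p):
    """Signed-area test: which side of the edge a->b the point p lies on."""
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

def rasterize_triangle(v0, v1, v2, width=16, height=16):
    """Fill a triangle by testing each pixel centre against all three edges."""
    img = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            # Inside if all edge tests agree in sign (handles either winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                img[y, x] = 1.0
    return img

img = rasterize_triangle((1, 1), (14, 2), (7, 14))
print("\n".join("".join("#" if v else "." for v in row) for row in img))
```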

2. Raycasting.

The 3D model is viewed from a certain predetermined point: human height, a bird's-eye view, and so on. Rays are sent from the observation point into the scene; where they hit determines the light and shade of the object as it appears in the usual 2D form.
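The geometric core of ray casting is the ray/surface intersection test. Below is a minimal sketch (Python; the scene is a single sphere invented for the example) that solves the ray-sphere quadratic and returns the distance to the nearest hit:

```python
import math

def cast_ray(origin, direction, center, radius):
    """Return the distance to the nearest ray/sphere intersection, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2*a)    # nearer of the two roots
    return t if t > 0 else None

# Camera at the origin looking down +z at a unit sphere 5 units away.
print(cast_ray((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```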

3. Ray tracing.

With this rendering method, when a ray hits a surface it is split into three components: reflected, shadow and refracted. Together these form the color of the pixel, and the realism of the image depends directly on the recursion depth, that is, on how many times the rays are allowed to split.
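To show the splitting idea, here is a toy recursive tracer (Python with NumPy; for brevity it combines only a shadowed local term and a reflected term, omitting refraction, and the scene is a single hard-coded sphere):

```python
import numpy as np

def hit_sphere(o, d, c=np.array([0., 0., 5.]), r=1.0):
    """Nearest positive intersection of the ray o + t*d with a fixed sphere."""
    oc = o - c
    b = 2 * np.dot(oc, d)
    disc = b * b - 4 * (np.dot(oc, oc) - r * r)
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2
    return t if t > 1e-4 else None        # small offset avoids self-hits

LIGHT = np.array([1., 1., -1.]) / np.sqrt(3)

def trace(o, d, depth=0, max_depth=3):
    """Split the ray at each hit into a local (shadowed) part and a
    reflected part; the recursion depth trades realism against cost."""
    t = hit_sphere(o, d)
    if t is None or depth >= max_depth:
        return np.array([0.2, 0.3, 0.5])             # sky colour
    p = o + t * d
    n = p - np.array([0., 0., 5.])                   # unit-sphere normal
    in_shadow = hit_sphere(p, LIGHT) is not None     # shadow ray toward the light
    local = np.array([0.9, 0.4, 0.1]) * (0.0 if in_shadow else max(np.dot(n, LIGHT), 0))
    reflected = trace(p, d - 2 * np.dot(d, n) * n, depth + 1)
    return 0.7 * local + 0.3 * reflected             # material weights

print(trace(np.zeros(3), np.array([0., 0., 1.])))
```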

4. Path tracing.

One of the most complex 3D visualization methods. With this rendering method, the propagation of light rays follows the physical laws of light transport as closely as possible, which is what ensures the high realism of the final image. It is worth noting, however, that the method is extremely resource-intensive.
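The statistical heart of path tracing is Monte Carlo integration: averaging the light arriving along many random ray directions. The toy sketch below (Python; the "sky" environment and the sampling scheme are invented for the example, and it is nowhere near a full path tracer) estimates the diffuse light reaching an upward-facing point:

```python
import math, random

def sample_up_hemisphere():
    """Rejection-sample a random unit direction in the upper hemisphere
    (around the +y surface normal assumed in this toy example)."""
    while True:
        v = [random.uniform(-1, 1) for _ in range(3)]
        n2 = sum(c * c for c in v)
        if 0 < n2 <= 1 and v[1] > 0:       # inside the unit ball, above the surface
            n = math.sqrt(n2)
            return [c / n for c in v]

def sky(direction):
    """Hypothetical environment light: brightest straight overhead."""
    return max(direction[1], 0.0)

def shade(samples=2000):
    """Monte Carlo estimate of light reaching an upward-facing diffuse point:
    average the sky light over random directions, weighted by the cosine term."""
    total = 0.0
    for _ in range(samples):
        d = sample_up_hemisphere()
        total += sky(d) * d[1]             # d[1] = cos(angle to the normal)
    return total / samples

print(round(shade(), 3))   # noisy estimate; more samples -> less noise
```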

Our company will provide you with a full range of services in the field of 3D modeling and visualization. We have all the technical capabilities to create 3D models of varying complexity, and extensive experience in 3D visualization and modeling, which you can verify for yourself by studying our portfolio or, on request, our other work not yet presented on the site.

Brand agency KOLORO will provide you with services for producing a trial series of your product or its small-scale production. To do this, our specialists will create the most realistic possible 3D model of the object you need (packaging, logo, character, 3D sample of any product, casting mold, etc.), on the basis of which a prototype of the product will be created. The cost of our work depends directly on the complexity of the 3D modeling object and is discussed individually.