There are two common ways to measure a 3D model's complexity: the polygon count and the vertex count. A game character can range from around 400 polygons up to 40,000+ polygons, and mobile games generally use far fewer polygons per character than PC games.
Polygons are still useful in modelling: meshes built mostly of four-sided polygons (quads) work well with tools like edge-loop selection and transforms, which speed up the work. When a model is imported into a game engine, its polygons are automatically converted into triangles, but different tools can produce different triangle layouts. When this happens, artists should check the model carefully to make sure the polygons were converted correctly.
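The quad-to-triangle conversion described above can be sketched in a few lines. This is an illustrative example, not any particular engine's importer: the vertex indices and the choice of diagonal are assumptions (real tools may split along either diagonal).

```python
# Minimal sketch: splitting one quad face into two triangles on import.
# Faces are tuples of vertex indices; the a-c diagonal split is an assumption.

def triangulate_quad(quad):
    """Split one 4-vertex face (a, b, c, d) into two triangles."""
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]  # split along the a-c diagonal

print(triangulate_quad((0, 1, 2, 3)))  # → [(0, 1, 2), (0, 2, 3)]
```

Note that every quad becomes exactly two triangles, which is why a "polygon count" quoted in quads roughly doubles when quoted in triangles.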
The vertex count matters more for performance and memory, but it is more common for artists to use the triangle count when measuring performance. This works well enough as long as all of the triangles are connected to each other: one triangle has 3 vertices, two connected triangles have 4, and so on.
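The "1 triangle has 3 vertices, 2 have 4" pattern holds for a connected strip of triangles, where each new triangle reuses two vertices from the previous one. A small sketch of that relationship (the function name is illustrative):

```python
# Sketch: vertex count of n triangles joined in a connected strip.
# Each new triangle after the first adds only one new vertex.

def strip_vertex_count(n_triangles):
    """Vertices needed for n triangles sharing edges in a strip."""
    if n_triangles == 0:
        return 0
    return n_triangles + 2

print(strip_vertex_count(1))  # → 3
print(strip_vertex_count(2))  # → 4
print(strip_vertex_count(10))  # → 12
```

By contrast, 10 fully disconnected triangles would need 30 vertices, which is why connectivity keeps the triangle count a reasonable proxy for the vertex count.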
A change in smoothing, shading, or material between triangles is treated as a physical break in the surface of the model, so the vertices along the break must be duplicated before the model can be sent to the graphics card in renderable pieces. Too many such changes lead to a much larger vertex count and can slow performance.
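This duplication can be illustrated with a toy vertex layout. The representation below is an assumption for the sketch: each renderable vertex is a `(position, normal, material)` tuple, and any difference in normal or material prevents two vertices at the same position from being merged.

```python
# Sketch: why hard edges and material changes inflate the GPU vertex count.
# A renderable vertex is modelled as (position, normal, material); identical
# tuples can be shared, any difference forces a duplicate.

def count_render_vertices(triangles):
    """triangles: list of 3-vertex faces, each vertex a (pos, normal, mat) tuple."""
    unique = set()
    for tri in triangles:
        for vert in tri:
            unique.add(vert)
    return len(unique)

n_up, n_side = (0, 0, 1), (1, 0, 0)
smooth = [  # two triangles sharing an edge, same normal and material
    [((0, 0, 0), n_up, "m"), ((1, 0, 0), n_up, "m"), ((0, 1, 0), n_up, "m")],
    [((1, 0, 0), n_up, "m"), ((1, 1, 0), n_up, "m"), ((0, 1, 0), n_up, "m")],
]
hard = [  # same positions, but the second face uses a different normal
    [((0, 0, 0), n_up, "m"), ((1, 0, 0), n_up, "m"), ((0, 1, 0), n_up, "m")],
    [((1, 0, 0), n_side, "m"), ((1, 1, 0), n_side, "m"), ((0, 1, 0), n_side, "m")],
]
print(count_render_vertices(smooth))  # → 4 (shared-edge vertices reused)
print(count_render_vertices(hard))    # → 6 (differing normals force duplicates)
```

The same doubling happens at every UV seam or material boundary, which is why a model's GPU vertex count is often noticeably higher than the count shown in the modelling package.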
Rendering Time
Rendering is the final step in creating the 2D image or animation; it is comparable to photographing or filming a completed set in the real world. There are multiple methods of rendering, and they are often specialized. For polygon-based scenes, a non-photorealistic wireframe render might be used; other approaches include scanline rendering and ray tracing.
The time it takes to render something can vary from a few seconds to a few days.
Rendering for video games and animation is done in real time, at roughly 20 to 120 frames per second. The goal of real-time rendering is to show as much information as the human eye can process in a fraction of a second, and to achieve the highest possible photo-realism at an acceptable minimum rendering speed, usually 24 fps (the minimum needed to create the illusion of movement on screen). Certain tricks can be used to make the final image more convincing to the human eye; rendering software can add effects such as lens flares, motion blur, or depth of field.
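Those frame rates translate directly into a per-frame time budget, which is the number renderers actually work against. A quick sketch (the function name is illustrative):

```python
# Sketch: the time budget one frame gets at common real-time frame rates.

def frame_budget_ms(fps):
    """Milliseconds available to render a single frame at the given rate."""
    return 1000.0 / fps

for fps in (24, 30, 60, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
# 24 fps -> 41.67 ms, 60 fps -> 16.67 ms, 120 fps -> 8.33 ms
```

At 120 fps everything in the scene must be simulated, shaded, and drawn in about 8 ms, which is why real-time renderers lean on the approximations mentioned above rather than physically exact simulation.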
When photo-realism is the end goal, techniques such as ray tracing are the most basic way to achieve it. Other techniques can be layered on top, such as particle systems, volumetric sampling, and light ripples.
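At the heart of ray tracing is a visibility test: for each pixel, does a ray from the camera hit anything in the scene? A minimal sketch of the classic ray-sphere intersection test, solved via the quadratic formula (all names and the tuple-based vector representation are illustrative assumptions):

```python
import math

# Minimal sketch of the core test in a ray tracer: does a ray hit a sphere,
# and if so, how far along the ray? Vectors are plain (x, y, z) tuples.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest hit distance along the ray, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c           # discriminant of |origin + t*dir - center|^2 = r^2
    if disc < 0:
        return None                    # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two intersections
    return t if t >= 0 else None

# A ray from the origin pointing down -z, toward a unit sphere at z = -5:
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # → 4.0
```

A full ray tracer repeats this test per pixel against every object, then spawns further rays for shadows and reflections, which is exactly where the expense described below comes from.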
The process of rendering can be very expensive because of the complexity of the processes being simulated. Computer processing power has increased greatly over the years, allowing for more realistic rendering; film studios often use 'render farms' to generate images more quickly, and falling hardware costs make it easier for people to produce 3D animation at home.
