Unit 66 - 3D Modelling
Friday, 19 June 2015
Tuesday, 28 April 2015
HA7 - Task 6: Constraints
Polygon Count & File Size
There are two common ways to measure a 3D model's complexity, and by extension its file size: the polygon count and the vertex count. A game character can range from around 400 polygons up to 40,000 or more, and a character in a mobile game will typically have far fewer polygons than one in a PC game.
When someone talks about a 'poly count' in a game, they are usually actually talking about the 'triangle count'. Almost all games use triangles instead of polygons, because most modern graphics hardware is built to render triangles quickly. Modelling applications show the polygon count of an object, but this can be misleading, because the object's triangle count will be much higher.
Polygons are still useful; models that are made up mostly of 4-sided polygons work well with tools like 'Edge-loop Selection' and transform, which speed up modelling. When an object is put into a game engine, the polygons are automatically converted into triangles, but some tools can create different layouts for the triangles. When this is done, artists should carefully check the object to make sure the polygons were converted correctly.
The vertex count is more important when considering performance and memory, but it is more common for artists to use the triangle count when measuring performance. This doesn't matter too much as long as all of the triangles are connected to each other: one triangle has 3 vertices, two triangles sharing an edge have 4, and so on.
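The relationship between triangle count and vertex count can be sketched in a few lines of Python. This is a minimal illustration, assuming the best case where every triangle shares an edge with the previous one (a 'triangle strip'), versus the worst case where no triangles share any vertices:

```python
def strip_vertex_count(triangles: int) -> int:
    """Vertices needed when every triangle shares an edge
    with the previous one (a connected 'triangle strip')."""
    if triangles <= 0:
        return 0
    return triangles + 2

def separate_vertex_count(triangles: int) -> int:
    """Vertices needed when no triangles share any vertices."""
    return triangles * 3

# One connected triangle uses 3 vertices, two use 4, and so on;
# fully disconnected triangles cost 3 vertices each.
print(strip_vertex_count(1), strip_vertex_count(2))  # 3 4
print(separate_vertex_count(2))                      # 6
```

This is why well-connected meshes are cheaper: the same triangle count costs far fewer vertices when the triangles share edges.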
Changes in smoothing, shading or materials across triangles get treated as physical breaks in the surface of the model, so the vertices along those breaks need to be duplicated before the model can be sent to the graphics card in renderable pieces. Too many such changes lead to a much larger vertex count, which can slow performance.
Rendering Time
The final process in creating a 2D image or animation is rendering. It is comparable to taking photos of, or filming, a completed setup in the real world. There are multiple methods of rendering, and they are often specialized: a non-realistic wireframe render might be used for a polygon-based scene, while other techniques include scanline rendering and ray tracing.
The time it takes to render something can vary from a few seconds to a few days, depending on the complexity of the scene.
Rendering for video games is done in real time, typically at between 20 and 120 frames per second. The goal of real-time rendering is to show as much information as the human eye can process in a fraction of a second, and to achieve the highest possible photo-realism at an acceptable minimum rendering speed; 24fps is usually taken as the minimum needed to create the illusion of movement on the screen. Various tricks can be used to make the final image more tolerable to the human eye, and rendering software can add effects such as lens flares, motion blur and depth of field.
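The frame rates above translate directly into a time budget for rendering each frame. A minimal sketch of that arithmetic:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / fps

# Higher frame rates leave less time per frame for the renderer.
for fps in (24, 30, 60, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

At 24fps a renderer has about 41.7 ms per frame; at 120fps, only about 8.3 ms — which is why real-time rendering relies so heavily on approximations.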
Non-real-time rendering is used for non-interactive media such as film and TV, where scenes can have much higher image quality than real-time rendering allows. Rendering a single frame can take anywhere from a fraction of a second to days. Rendered frames are stored on hard disks and can then be transferred to other media, such as motion picture film or an optical disc.
When photo-realism is the end goal, techniques such as ray tracing are the most basic starting point. Other techniques include particle systems, volumetric sampling and light ripples.
The process of rendering can be computationally expensive because of the complexity of the processes being simulated. The processing power of computers has increased a lot over the years, which allows for more realistic rendering. Film studios usually have 'render farms' so they can generate images more quickly, and falling hardware costs make it easier for people to create 3D animations at home.
HA7 - Task 5: 3D Development Software
3D Studio Max
This is a piece of 3D computer graphics software that is used to create 3D images, models or animations. It was developed by Autodesk Media/Entertainment. This software is usually used by video game developers, TV studios and architectural studios. It can also be used for movie effects or pre-visualization. The latest version now includes shaders, particle systems, normal map creation/rendering, a customizable UI and a scripting language.
Maya
(Autodesk) Maya is a piece of 3D computer graphics software that was originally developed by Alias Systems Corporation, and runs on Windows, Mac and Linux. It can be used to create interactive 3D apps, like video games, animated films or TV.
Lightwave
Lightwave is a software package used for rendering 3D images, whether animated or completely static. It includes a rendering engine that supports features such as realistic reflection and refraction. The modelling component supports both polygonal modelling and subdivision surfaces, and the animation component has features for forward and inverse kinematics, as well as particle systems.
Blender
Blender is a piece of completely free and open source 3D computer graphics software that is used for creating animated films, interactive 3D apps or video games. Blender can be used for 3D modelling, UV unwrapping, texturing, skinning, particle simulation, camera tracking, rendering and video editing. Included with the software is a built-in game engine.
Cinema 4D
Cinema 4D is an application for modelling, animating and rendering, and was developed by MAXON Computer GmbH. It can be used for polygonal/subdividing modelling, animation, rendering, texturing or lighting.
There are multiple versions of Cinema 4D: 'Prime', the base version; 'Broadcast', which adds extra motion-graphics features; 'Visualize', which is aimed at architectural design; and 'Studio', which includes everything.
ZBrush
ZBrush is a sculpting tool that combines 3D modelling, painting and texturing. It uses 'pixol' technology which stores information for lighting, material, colour etc. for all the objects on the screen. It is used as a digital sculpting tool, and can create high-res models to be used in movies, animations or video games.
Sketchup
Sketchup is a piece of 3D modelling software that is used for architectural, mechanical, film or video game design. It has both a free and a 'professional' version.
File Formats
Every 3D application lets the user save their work in a 'proprietary' file format, and then export it into open formats.
Proprietary formats are a mode of presenting data, and are the intellectual property of whoever owns the format itself.
An Open/Free format is one that is either not owned as intellectual property, or not recognised as such. Proprietary formats can be either open (published) or closed (trade secrets); Open/Free formats are always open.
Examples:
Proprietary: 3D Studio Max (.3ds, .max), Maya (.mb)
Open: .obj, .dae
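One reason .obj is a popular open format is that it is plain text and easy to generate or parse. Below is a minimal sketch, using a hypothetical `write_obj` helper, that emits a single triangle in OBJ syntax ('v' lines list vertex positions; 'f' lines reference them by 1-based index):

```python
def write_obj(vertices, faces) -> str:
    """Serialise vertices and index-based faces as minimal OBJ text."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    # OBJ face indices are 1-based, so shift our 0-based indices up.
    lines += ["f " + " ".join(str(i + 1) for i in face) for face in faces]
    return "\n".join(lines)

triangle = write_obj(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    faces=[(0, 1, 2)],
)
print(triangle)
# v 0 0 0
# v 1 0 0
# v 0 1 0
# f 1 2 3
```

Real OBJ files also support normals, texture co-ordinates and materials, but the structure stays this simple.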
HA7 - Task 4: Mesh Construction
Polygon Modelling
Polygon meshes are collections of faces, edges and vertices that define the shape of an object in 3D graphics modelling. The faces are usually triangles or quadrilaterals, though other types of polygons are possible.
Primitive Modelling
A common method of creating polygonal meshes is to connect multiple primitives together; primitives are predefined meshes provided by the modelling software, and commonly include cubes, spheres, pyramids and cylinders, along with 2D primitives like squares and triangles.
Box Modelling
Box modelling is a popular method of creating meshes, and uses only 2 simple tools:
Subdivide - Splits faces and edges into smaller pieces by adding new vertices. A square face, for example, would be subdivided into 4 smaller squares by this tool.
Extrude - Applied to a face, this creates a new face of the same size and shape, connected to each of the edges of the original face. Extruding a square face would therefore create a cube.
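The two tools above can be sketched in code. This is a simplified illustration, not how any particular modelling package implements them: subdivision is shown only as a face count, and extrusion as duplicating a face's vertices and offsetting them (the side faces that bridge old and new edges are implied):

```python
# Subdivide: splitting one quad into four (tracking counts only here).
def subdivide_quad_count(quads: int) -> int:
    return quads * 4

# Extrude: duplicate a face's vertices and offset them along a direction;
# a real tool would also bridge old and new edges with side faces.
def extrude_face(face_vertices, direction):
    dx, dy, dz = direction
    return [(x + dx, y + dy, z + dz) for x, y, z in face_vertices]

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
top = extrude_face(square, (0, 0, 1))  # new face 1 unit above the original
cube_vertices = square + top           # 8 vertices: the corners of a cube
print(subdivide_quad_count(1), len(cube_vertices))  # 4 8
```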
Another method of creating meshes is inflation/extrusion modelling. A 2D shape is created that traces the outline of an object in an image; a second image from a different angle is then used, and the 2D shape is extruded while following the second outline. This method is common for creating objects such as heads; artists often model half of the head, then duplicate, mirror and join the two halves.
Sketch Modelling
This method uses a user-friendly interface to quickly sketch models with lower detail than other methods produce.
3D Scanners
3D scanners are used to make high-detail meshes of real-life objects. Scanners are very expensive, and are mostly used by professionals.
HA7 - Task 3: Geometric Theory
Cartesian Co-ordinate System
The Cartesian Co-ordinate System was invented in the 17th century by René Descartes, and it revolutionized maths by providing the first proper link between Euclidean geometry and algebra.
Geometric Theory & Polygons
The basic object used in modelling is the vertex, a point in 3D space. Two vertices connected by a line form an edge, and three connected vertices form a triangle, the simplest polygon that can be made in Euclidean space. Triangles can be combined into more complex shapes, such as four-sided 'quads' like squares.
A group of polygons connected by shared vertices is called a 'mesh', also known as a wireframe model.
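A mesh of this kind is usually stored as a list of shared vertices plus faces that index into that list. The sketch below is a minimal illustration of that idea (not any particular application's format), including deriving the edges from the faces:

```python
# A quad built from two triangles that share vertices 0 and 2.
vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]
faces = [(0, 1, 2), (0, 2, 3)]

# Edges can be derived from the faces; a shared edge is stored once.
edges = set()
for face in faces:
    for i in range(len(face)):
        a, b = face[i], face[(i + 1) % len(face)]
        edges.add((min(a, b), max(a, b)))

print(len(vertices), len(edges), len(faces))  # 4 5 2
```

Storing faces as indices rather than repeated co-ordinates is what lets neighbouring polygons share vertices, which keeps the vertex count down.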
For a mesh to look right, its polygons must not cross over each other, and it should not contain errors such as doubled vertices or edges. For some uses it is also important that the mesh has no holes in it.
Primitives
Within 3D applications, some objects come pre-made and can be used as building blocks for models. The most basic shapes are called common primitives, and range from a basic cube to a pyramid. These shapes are used as the starting points of modelling.
Surfaces
Once polygons are created, they can be made into surfaces, so they can be coloured or textured to create the right look.
Computers draw 2D vector artwork by plotting points on the X and Y axes and joining the points with lines. The shapes can then be filled with colours, and the lines thickened or coloured. 3D programs do the same, but with an additional axis called Z.
HA7 - Task 2: Displaying 3D Polygon Animations
Graphical rendering is handled by the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU): the CPU tells the GPU what to render, for example lighting or shadows.
API
Game engines use Application Programming Interfaces (APIs). An API is a set of routines, protocols and tools for building software applications; a good API provides all the pieces needed for making a program, which makes development easier.
Direct3D
Direct3D is an API that is used to manipulate and display 3D objects. It's developed by Microsoft, and gives programmers a way to develop 3D programs. Almost all PCs support Direct3D.
OpenGL
OpenGL is a 3D graphics language. There are two notable versions: Microsoft OpenGL, developed by Microsoft, and Cosmo OpenGL, developed by Silicon Graphics. The Microsoft version is built into Windows and was made to improve performance; the Cosmo version is a software-only implementation designed specifically for machines without a graphics accelerator.
Graphics Pipeline
The graphics pipeline (also called the rendering pipeline) is the process of creating a 2D raster image from a 3D scene. Once a 3D model has been created, it must be converted into what the monitor will display. APIs such as OpenGL and DirectX/Direct3D expose this pipeline.
Stages of a Graphics Pipeline
3D Geometric primitives
The scene is built from primitives, usually triangles, because a triangle's three vertices always lie on a single plane.
Modelling and Transformation
Each model is transformed from its local co-ordinate system into the shared 3D world co-ordinate system.
Camera Transformation
The scene is then transformed from the world co-ordinate system into the 3D camera co-ordinate system, with the camera at the origin.
Lighting
The scene is then lit according to the lightness of its colours and the reflectivity of each object. For example, a completely white object on a black background would need its lighting adjusted in order to be seen properly.
Projection Transformation
The scene is transformed from 3D co-ordinates into the 2D view of the camera. With a perspective projection, distant objects are made smaller, converging toward the centre of the view.
Clipping
Primitives that fall outside the camera's view will not be visible, and are removed.
Scan Conversion or Rasterization
The image is converted into raster format, made up of pixels. From here individual pixels can be altered, which is a very complex step.
Texturing, Fragment Shading
The individual fragments are given colours based on values interpolated during the rasterization stage.
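The transform stages above can be sketched numerically. This is a deliberately minimal illustration, assuming a camera at the origin looking down +Z and a simple pinhole projection; real pipelines use 4x4 matrices and homogeneous co-ordinates for these steps:

```python
import math

def camera_transform(point, camera_pos):
    """World space -> camera space (here, just a translation)."""
    return tuple(p - c for p, c in zip(point, camera_pos))

def project(point, fov_degrees=90.0, width=640, height=480):
    """Camera space -> 2D screen pixels via a perspective divide."""
    x, y, z = point
    f = 1.0 / math.tan(math.radians(fov_degrees) / 2.0)
    # Dividing by z is what makes distant objects shrink on screen.
    sx = (x * f / z) * (width / 2) + width / 2
    sy = (y * f / z) * (height / 2) + height / 2
    return sx, sy

world_point = (1.0, 0.0, 5.0)
cam = camera_transform(world_point, camera_pos=(0.0, 0.0, 0.0))
print(project(cam))  # roughly (384, 240): right of centre, mid-height
```

Clipping would discard points with z outside the view volume before this divide, and rasterization would then fill the triangles these projected vertices define.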
HA7 - Task 1: Application Of 3D
Video games with 3D graphics became popular in 1992-1993 with the releases of Virtua Racing and Virtua Fighter, but 3D games date back to 3D Monster Maze in 1981.
At this point in time, games began shifting towards 3D. Games like Crash Bandicoot and Super Mario 64 are examples of this. Being full 3D was a big selling point for many of these games, and focus was less on side-scrollers and other 2D games.
The change to 3D made a lot more things possible in video games, and many new games were much more complex than their 2D predecessors. The Legend of Zelda: Ocarina of Time was nothing like the previous games, which were all 2D, and GoldenEye 007 was one of the first examples of a move towards hyper-realistic games.
Nowadays, the majority of triple-A games are 3D, with most 2D games being made by indie developers. Many games these days strive for hyper-realism, and most are expected to have some realism in them.
A recent example of a hyper-realistic game is The Last of Us. In addition to improved graphics, many of the characters' movements were done with motion-capture.
3D in TV/Films
Films these days include a lot of 3D imagery, and one of the first major uses of it was back in 1993, with the release of Jurassic Park: most of the dinosaurs seen in the film were created using 3D CGI. More recent examples include Avatar and Planet of the Apes, both of which feature CGI heavily.
Some films are made up completely of 3D animations, like Toy Story and Monsters Inc.
Many TV shows, including children's shows such as Star Wars: The Clone Wars and Jimmy Neutron, are made using 3D imagery.
3D in Education
Gaia 3D is software that helps people teach and learn. It provides 3D models that can be used in many different subjects: biology, chemistry, geography and more.
Doctors, and others in the medical field, can use 3D models of body parts to easily see, and show to others, particular features or damaged areas.
3D in Engineering
Engineers can use software to create 3D models of the machines they work on, to see how they operate and how they are made. This can also help them spot any mistakes or issues a design might have.
3D in Architecture
Architects can use specialist software to create 3D models of their building plans, making the designs easier for others to understand.
3D in Product Design
3D models can be created to design products, or to show them off before the actual product is made. Some products can even be manufactured from the computer model using a 3D printer.