Course overview
Create polished characters for film and games
The objective of this class is to understand character creation for film and game cinematics in terms of a character's profile and its ultimate purpose in a composition or narrative. Each week we will focus on a different aspect of character construction and explore the technical methods of assembling a final character. We will begin by analyzing and assessing various reference materials prior to initial sculpting. Then the class will dive into the fabrication of hard-surface components, clothing, and other accessories needed to finish and polish a high-poly asset. From there, the emphasis will switch over to surfacing your character using advanced painting and shading techniques, including look development of skin, metal, and various other material shaders prior to outputting final renders. Finally, we will end the course with discussions and demonstrations on color theory, render manipulation, and post-processing.
Character Creation for Film/Cinematics: What You'll Learn
The more you know, the better.
Unleashing your creative potential
Pete is a Los Angeles-based senior character artist with both video game and film experience. His skill set revolves around high- to low-poly modeling, texture painting, and look development. He is currently working at Treyarch on Call of Duty: Black Ops 2. His past work includes Call of Duty: Black Ops, James Bond: Quantum of Solace, Underworld: Evolution, and Night at the Museum. His software of choice includes Maya, Mudbox, ZBrush, and Photoshop.
Character Creation for Film/Cinematics: Student Gallery
Winter Term Registration
Oct 21, 2019 - Feb 3, 2020
Jorge Adnel Martin
Pete is very talented and has really helped me improve over the past weeks. I learned a lot, and he always provided very detailed explanations.
Pete is the best teacher I've ever had. I learned a lot in the Q&A sessions, and his weekly feedback was very useful. There were things I'd never done before this class, and moments of frustration that I was able to work through thanks to his advice.
The instructor is passionate about teaching which helped me better understand the workflow.
Pete is an awesome instructor and really pushes you to improve your skill set. Answered questions immediately in the Q&A sessions and really cares about individual progress!
Companies that hire our students
Benefits
What makes this learning experience unique?
Receive personal, individual feedback on all submitted assignments from some of the industry's best artists.
1+ Year Access
Enjoy over 365 days of full course access. This includes all lectures, feedback, and Live Q&A recordings.
Certificate of Completion
Earn a Certificate of Completion when you complete and turn in 80% of course assignments.
Learn anywhere, anytime, and at your own pace with our online courses.
Speak to an advisor
Need guidance or course recommendations? Let us help!
Show us your skills
Not sure if you have the skills, or want to prove that you do? Show us.
Sculpting and Painting CG Characters
Interview with Diego Rodriguez
Diego Rodriguez did an amazing breakdown of his awesome character study.
My name is Diego Rodriguez. I'm a 3D character artist and I'm currently building my portfolio. I want to share with you a little breakdown of the character I did for CGMA's Character Creation for Film/Cinematics course with Pete Zoppi. I was fascinated by the Mursi tribe, so I decided to base my character on them, as it was a great chance to study organic sculpting, hard-surface modeling, and clothing. I focused on the upper body because I wanted to have enough time during the course to complete a final render in Arnold.
Reference and blockout
In my opinion, it's really important to spend a few hours gathering references and planning your workflow. I usually do paintovers to analyze the approaches I could take while modeling; this helps save time later because you will know beforehand the challenges you will face. I avoid human references with makeup or heavy photo editing because they can lead to confusion while sculpting the shapes. For clothes, I try to find sewing patterns similar to the pieces I need to model; it's much easier to use them in Marvelous Designer than to work by trial and error. I use PureRef to group my references into high-resolution images so I can easily navigate through them while working on the character.
One of the first things I learned in the course was how important it is to create a blockout of your character. Even though I did a very quick blockout, it helped me quite a lot in the later stages because I already knew how I wanted to position the assets.
Sculpting the body
I tend to start with a base mesh in ZBrush. I always try to keep the poly count as low as possible because I'm constantly changing forms, and it's very easy to do so with the Move brush. For sculpting, I use ClayBuildup, ClayTubes, and the alternative smooth brush at low intensity, which lets me create subtle organic shapes like muscles and bony landmarks. I also use the basic material so I can change my lights and adjust them to match the references.
Sometimes I like to break the shapes a little by using Surface Noise, baking everything on a layer so I can change the opacity later. I keep it very subtle, just enough to create an organic feel.
During the first classes, I learned to test my early sculpts with a skin shader to see how they work, as shapes may be softened in the render and it's very easy to modify the sculpt at this stage.
Texturing the skin
I used TexturingXYZ maps for the skin; it's incredible how much detail you can get from them. I learned how to project them in Mudbox and how to use "Sculpt using maps" to create different layers for the secondary and tertiary details. This way it's very easy to modify the intensity through layer opacity or with the amplify, mask, and smooth brushes.
After the displacement was ready I used a combination of polarized maps and hand painting to create the base color for the skin. I also used the displacement information to darken the pores.
I created the war paint in Substance Painter. Even though it's not the best option for UDIMs, it's very easy to create this kind of texture there. I exported a high subdivision level from the sculpt in FBX format into Painter and baked the maps so I was able to use smart masks and materials. After that, I created a paint texture with some color variation, and within this group I used a black mask with fill and paint layers to mask the zones I wanted the paint to be on. You can export this mask to use later in the shaders.
With all the maps ready, I created two shaders: one for the skin and another for the paint. I used a mix shader with the mask I exported from Substance Painter to blend the two.
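Conceptually, a mask-driven mix shader is just a per-pixel linear interpolation between two shader outputs. Here is a minimal numpy sketch of that idea (the pixel values are made up for illustration; a renderer does this per shading sample, not on baked arrays):

```python
import numpy as np

# Hypothetical 2x2-pixel example: per-pixel linear blend of two shader
# outputs driven by a mask, which is what a mix shader does with the
# black-and-white mask exported from Substance Painter.
skin  = np.array([[[0.8, 0.6, 0.5], [0.8, 0.6, 0.5]],
                  [[0.8, 0.6, 0.5], [0.8, 0.6, 0.5]]])
paint = np.array([[[0.9, 0.9, 0.9], [0.9, 0.9, 0.9]],
                  [[0.9, 0.9, 0.9], [0.9, 0.9, 0.9]]])
mask  = np.array([[0.0, 1.0],
                  [0.5, 0.25]])          # white = paint, black = skin

# Broadcast the single-channel mask over the RGB channels and blend.
blended = skin * (1.0 - mask[..., None]) + paint * mask[..., None]
```

Gray mask values give a partial mix, which is what produces the soft transition at the edges of the painted areas.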
Creating the skin was very challenging for me, and thanks to the feedback Pete gave me I was able to achieve a nice result. It took several attempts until I was happy with the pore detailing, and I also ran some tests comparing the default subsurface with the new random walk setting.
Modeling the assets
At this stage, it's very important to have good references, as they can save you a lot of work. For example, I found a tusk cross-section which was very useful for creating the horns. I traced the section in Maya using the Curve tool and used Loft to create the model. The best part of this workflow is that you can use the construction history to generate horns with different shapes very quickly. Finally, I used a bend deformer to create the final curved shape of the horn.
I chose to use curves for the ropes. I positioned the curve in Maya and extruded a plane along it. It's important that the geometry has an edge loop running through the center. Then I exported the mesh into ZBrush, where I created two polygroups so I could use Frame Mesh to select the central edge loop. Finally, I created the rope using an IMM brush.
To break the edges I did an XGen pass with small and thin hairs using noise modifiers.
For complex models like the AK, I always try to find references from different angles to use in Maya. I start with basic geometry like cubes or cylinders and then use Multi-Cut, Extrude, Bridge, etc., to create the details. I went pretty high-poly because I already knew it was going to be close to the camera in the final shot.
Texturing the assets
I exported the high-resolution meshes into Substance Painter using the FBX format and baked the maps I needed to start using smart masks. You have to be careful using UDIMs in this software, so it's important to be organized with the UVs. I decided to group them based on materials.
I really enjoyed this stage because it’s very fun to create clean materials and start adding dirt, scratches, edge wear, etc. I learned that it’s important to think about how the objects were made and the abuse they went through so I could achieve more realism. I also added white paint to some of the assets because the body and the hands of my character were covered in paint.
Substance Source has really nice materials you can use as a base for your own textures. For example, I’ve used the rifle stock material in the AK.
Lighting and render
I spent a lot of time trying different lighting setups to see how everything was working. I usually try to avoid front lights as much as possible as they tend to create plain shapes. On the other hand, side lights emphasize the shapes of your model and can create more interesting compositions.
I edited the final render in Photoshop using Camera Raw, where I overexposed the image a little bit using the highlights and reduced the saturation to get a more realistic result. I also added a little Gaussian blur and noise to avoid the crystal-clear details you get in a 3D render.
I enjoyed the process of creating this character as I discovered new techniques I’ll definitely use in my workflow from now on. I hope you enjoyed this breakdown, don’t hesitate to contact me if you have any questions. You can get in touch with me through my website, Instagram (@artofdiego), or Facebook (diegorodriguez3d).
My experience with CGMA.
I think the best part of this course was getting feedback from a professional artist with years of experience like Pete. Even though the weekly content was great, I got the most out of the Q&A live sessions where he did some great demonstrations in real time and answered all the questions we got. I can’t recommend it highly enough.
How to Bring X-23 to 3D
Interview with Mingshun Zhu
Mingshun Zhu talked about how he created his 3D character of X-23, a lovely take on a beloved version of a movie character based on comic heroes.
My name is Mingshun Zhu. I am a recent graduate from Michigan State University, and I am working as a 3D character artist.
I started doing 3D modeling back in 2014, and I am mostly self-taught. My obsession with photorealism in storytelling led me toward film production. Five months ago, I enrolled in a course on CGMA called Character Creation for Film and Cinematics by Pete Zoppi, and I'd like to talk about the things I learned from this course.
I decided to create a character that is simple in design, so I would have extra time to learn and digest. X-23 from the movie Logan was perfect for a short-term study because of her ideal complexity.
I started by gathering as many references as I could. I then filtered out most of the images and kept the ones that best demonstrated her features.
Instead of going straight into hi-poly sculpting or clothes simulation, I took Pete's advice to get the base mesh done first. Making a low-poly base mesh helps define the scale and topology of the model for both referencing and sculpting purposes.
I continued with face sculpting in ZBrush. I prefer to achieve correct facial anatomy before I get into likeness sculpting. Here are a few tips I learned from Pete: while sculpting the face, don't push too hard and don't commit too fast; subtle changes make big differences; keep refining and iterating on the face from different angles and spots; and avoid adding too much expression to the face. Also, be very conscious of age, gender, and weight to get the facial features right. For example, little girls tend to have rounder chins and jaws and wider-set eyes.
Clothes are generated using Marvelous Designer (MD). I used the base mesh I created earlier to get a rough layout of panels.
Be thoughtful about the relationship between folds and animation. Depending on the pipeline, inappropriate folds might cause deformation issues in animation.
After the simulation, you can either retopologize the mesh using the ZRemesher function in ZBrush or manually resurface the mesh. In my case, I used ZRemesher to get the base topology done and then built on top of it using ZModeler.
Before you start texturing, it might be helpful to think about the type of shot the character will be used in. A close-up shot may require more detail on the character than a medium shot does.
I used various scans from TexturingXYZ and Surface Mimic for skin and cloth projection in Mudbox. Photo scans grant you fast and good results, but there might be no transition between sculpted and projected details. Thus, you may need to add extra wrinkles and folds to tie everything together.
The skin shader is a combination of the V-Ray Fast SSS2 skin shader and a reflective material. The additional reflective material helps control the oiliness and wetness of the skin.
The cloth shader is very straightforward but with a little twist. At a glancing angle, cloth looks brighter and softer than its base color. I can fake the desired look by blending the base material with a smooth edge material. A samplerInfo node determines which part of the cloth is facing the camera and which part of the mesh is facing away.
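The facing ratio driving that blend is just the dot product of the surface normal and the view direction. A small sketch of the idea (function name and colors are illustrative, not part of any Maya API):

```python
import numpy as np

def cloth_color(normal, view_dir, base, edge):
    """Blend a base cloth color toward a brighter 'edge' color at glancing
    angles, mimicking a samplerInfo facing-ratio driven mix (sketch only)."""
    n = np.asarray(normal, dtype=float)
    v = np.asarray(view_dir, dtype=float)
    n = n / np.linalg.norm(n)
    v = v / np.linalg.norm(v)
    facing = abs(float(np.dot(n, v)))   # 1 = facing camera, 0 = glancing
    return facing * np.asarray(base) + (1.0 - facing) * np.asarray(edge)
```

A surface pointing straight at the camera returns the base color; a surface seen edge-on returns the brighter edge color, which is the soft sheen the interview describes.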
Hair was done using XGen in Maya. Reference is key. I set up the hair in three layers. The first layer establishes the style and shape of the hair. The second layer has less volume and a higher noise value. The third layer has only a few strands of hair, which I manually placed in the desired spots. The second and third layers help make the hair look natural and lively, but they are incredibly challenging to get right.
I hope this article is helpful to you. I am glad to take the course because it helped me to speed up the learning process. If you are interested in learning more about character creation for films, you can consider taking the Character Creation for Film and Cinematics by Pete Zoppi on CGMA.
Mingshun Zhu, CG Artist
Intro to Character Creation for Cinematics
Interview with Kamal Eldin
Kamal Eldin talked about his amazing character and showed step-by-step clothing, hard-surface, and prop workflows in Marvelous Designer, Maya, and ZBrush.
Hello. My name is Kamal Eldin, I’m a 3D artist from Egypt and I’ve been working in the visual effects and animation industry for around 9 years. Lately, I’ve been involved more in Real-time production and VR/AR. But my heart is always with photorealism, wherever it is.
Character creation is something that’s almost an inside calling for me personally. I love to tell stories and characters are a powerful vessel to tell them. You can tell a story through your character’s hair, clothes, the weathering or the smoothness of her/his boots… that’s probably the reason why we are all doing it, I guess.
Start of the Project Templar
I saw the concept art for this character and immediately fell under its spell. It was designed by Thomas Dubief, whose designs are unique. He called this one "Templar", and you can easily tell it's a post-apocalyptic theme. It was amazing to see how a single image can tell a very detailed and nuanced story, not only about the character it features but about the entire world and the era that character lives in.
It was clear that this world is very bizarre and abnormal, yet somehow mildly familiar. Through the dust and weathering, you can see that some features are still recognizable… and you can see how sharp a twist this world has taken and how its transformation left its imprint on the character's face, helmet, armor, boots, and of course her outlandish primary weapon.
One unique feature of this design is the contrast, or juxtaposition, between the decay in the image and the innocent, almost pristine look of the petite girl who has fitted herself into such an otherworldly outfit.
And this was a unique motive for me to do this character, as it’s a deviation from the overly veteran, super sexy female warrior visual style that is too common in VFX and games.
Based on this reading of the design, I realized that the challenge here would be authoring the materials and textures for the character and her props.
The most important element was the ambiguity of certain parts of the concept. I had to make a choice: either eliminate that ambiguity altogether by strictly defining the surface materials, or carry some of it into the final image. I preferred the latter. It challenges the senses more, makes your eyes linger on the piece, and thus lets you see her story imprinted on her. Staying true to the spirit of the concept was a plus for me as well.
Being an old vertex pusher, I prefer to begin the creation process with a rough poly block in Maya.
Here I get to work out the stance, proportions, and silhouette, and quickly get an early, rough idea of how the entire piece will look as a whole.
In this stage, I solve the relative placement of elements, the composition, the relative sizes of props, how many pieces my outfit will be, what can be modeled symmetrically in its early stages and what can't, and which pieces will be contiguous and which will be split. I also split or combine meshes based on the type of material or the number of elements sharing the same displacement map.
After the blocking stage is finished, I take the geo further, dividing it into three categories:
- Pieces that will be modeled in Marvelous Designer: these are left as they are and will be replaced by new topology.
- Pieces that won't be designed in Marvelous: for these, the block geo is developed further in Maya with animation topology, to be detailed later in ZBrush.
- Elements that require no displacement maps: these are roughly modeled in Maya to a finished state.
Then I start developing each piece, according to its category, into the next phase in the relevant software.
Notes on Surface Detailing
I also tend to break down surface detail into 3 main categories:
- Details with the same color as the underlying surface: these I do in ZBrush in the sculpting phase and bake into the displacement map.
- Surface details that are colored differently from the parent surface, for example, cloth stitches. Texturing for these is done in Substance Painter, and I bake a normal map to take advantage of Painter's multi-channel painting: with each stroke, I can paint height along with color and specular value in one shot. This is an advantage over ZBrush, where you'd need a workaround to mask the height details in your sculpt in order to color them differently later.
- Face surface details and diffuse: these are done in Mari, provided I have access to XYZ displacement maps, and I go back to ZBrush for a unified displacement map.
Mari's layer system and adjustments/filters are robust, and its projection tools are far superior to ZBrush's. You can also get along well with Mudbox for projection. I don't prefer ZBrush for this task due to its lacking layer system.
The head is based on one of the 3d.sk image sets I've had for years: their model Kamilia. I picked her because she has the pristine, innocent look I was after. I wanted to get as far as I could from the typical veteran warrior look.
- Starting from a UVed base mesh and doing initial modeling in Maya to grab the basic likeness.
- Switching to ZBrush for more sculpting and getting as close to her likeness as possible.
- Finishing the sculpt and refining the UVs in Maya.
- Switching to Mari, projecting and painting the diffuse, and projecting the XYZ displacement maps.
- Exporting the projected displacement map from Mari (surface XYZ map).
- Exporting the original Displacement map from ZBrush (Form sculpting map).
- Combining the two maps via a shader network in Maya. For that, I recommend reading this article.
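The combination itself is simple: the two displacement signals are layered per pixel, typically an addition with a weight on the fine detail. A numpy sketch of the idea (values and the weight are hypothetical; the actual shader network does this at render time):

```python
import numpy as np

# Form-level displacement from ZBrush plus the fine XYZ surface map,
# combined additively with a tweakable detail weight. Both maps are
# 32-bit floating point with zero as the neutral (mid) level.
form_disp   = np.array([[0.10, -0.05],
                        [0.00,  0.20]])   # broad sculpted forms
surface_xyz = np.array([[0.01,  0.02],
                        [-0.01, 0.00]])   # pore-level surface detail
detail_weight = 0.5                       # hypothetical dial

combined = form_disp + detail_weight * surface_xyz
```

Keeping the maps separate until this step means the pore intensity can be dialed up or down without re-exporting the sculpt.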
Here are the combined maps in ZBrush, brought in only for preview:
The Suit: Marvelous Workflow
The clothing was done in Marvelous, using the base female body as an avatar. The girl is petite, but the suit is a bit bulky, so:
- First, I used some of Marvelous Designer's stock patterns as an underlying outfit and froze that, then designed the suit patterns on top.
- I prefer exporting a quadrangulated mesh from Marvelous; this saves me a step later, since I don't have to ZRemesh in ZBrush and reproject.
- I process the exported Hi Poly mesh in ZBrush to define and maintain my cloth Seams and Edge Creases.
- Retopologized the mesh in Maya via Quad Draw.
- Created UVs for the suit and divided them into 2 UDIMs.
- Then projected the Hipoly details on the mesh in ZBrush.
- In the end, a manual detail pass was added on top in ZBrush.
- Now the mesh is ready for 32-bit displacement map extraction.
The Gloves & Boots
- These were poly modeled in Maya then detailed in ZBrush.
- The left and right gloves share a displacement map sequence across 2 UDIM tiles; the same goes for the boots.
- 32-bit Displacement maps extracted in ZBrush.
If you are confused: the helmet is actually a torn soccer ball. This is when you really get to appreciate the ingenuity of this design.
There is a nice video tutorial out there, to its authors' credit, showing how to model a soccer ball; I used pretty much the same technique with a little twist:
- For the floating flaps of the ball, I used Bend and Twist deformers in Maya to introduce some wear effects.
- ZBrush was used for detailing. This is when I realized that doing stitches in ZBrush is not a good option for me.
- One 32-bit displacement map was extracted.
Don’t do stitching in ZBrush!
Most of the stitching was left to be done in Substance Painter. This resulted in two things:
- It saved me from over-subdividing the mesh (you would probably have to go to 20+ million polys per subtool to get well-defined stitches, which is a non-starter for me and my machine)
- It gave me easy control over the stitches' color and material properties
Hard Surface & Props Workflow
The Weapon: Maya Workflow
- The weapon handle is curvy, so I used NURBS curves and the birail and loft tools to preserve the smoothness of the surface. However, if I were doing this again today, I'd turn to MoI3D for this.
- The rest of the Weapon was poly modeled.
The Wicker Shield: Maya Workflow
One of the main elements in the concept is this wicker shield. It was done by drawing curves and generating Paint Effects tubes on them, which resulted in:
- Seamless control over the size
- Easy control over the twisting angle of all the wicker strands in one go via the Paint Effects properties
- Ready-made UVs for free
You can dissect the wicker shield into three components; for each, you create curves and then generate Paint Effects tubes on them:
- The outer beam > this I started from a torus.
- The radial ribs > create a curve for each radial rib and generate Paint Effects tubes for all the curves.
- The ring weave strands > each weave consists of 2 circular twisting strands; one strand goes on top of the radial ribs in a sine-wave manner and the other does the same but as the inverse of that sine wave. Here's how:
- First, I created all the strands flowing along the same sine wave over and beneath the radial ribs: basically one curve, duplicated and offset, with tubes generated on them. Keep the pivot centered at the shield center.
- You can make all the tubes share one stroke via the paint effects menu >> Curve Utilities >> Attach Brush to Curves.
- Adjust the global scale, twist angle, flatness, and brush width in the Paint Effects stroke attributes to adjust the shape of the strands.
- Then duplicate those tubes (not the curves) with "duplicate input graph" on in Duplicate Special, and just offset them by translating them a bit away from their originals.
- Now, if you invert the vertical scale of the newly duplicated tubes, you will have strands flowing in the opposite sine wave around the radial ribs.
- Now, you can manually cut curves and control their CVs to create tears and trims on the edges of the shield.
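The geometry behind the weave can be sketched as points on a circle whose height oscillates as a sine of the angle, with the paired strand using the inverted wave. A small Python sketch under those assumptions (parameter names are illustrative; in Maya this shape comes from the curves and tube twist, not from a script):

```python
import math

def weave_points(radius, ribs, samples, amplitude, invert=False):
    """Sample one circular weave strand: it rises above and dips below the
    radial ribs following a sine wave; the paired strand uses the inverse
    wave (invert=True), so the two alternate over/under each rib."""
    sign = -1.0 if invert else 1.0
    pts = []
    for i in range(samples):
        theta = 2.0 * math.pi * i / samples
        x = radius * math.cos(theta)
        y = radius * math.sin(theta)
        z = sign * amplitude * math.sin(ribs * theta)  # over/under the ribs
        pts.append((x, y, z))
    return pts
```

Duplicating and vertically mirroring the first strand, as the steps above describe, is exactly the `invert=True` case: the z offsets flip sign so the copy crosses the ribs on the opposite side.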
Grenades, Belt, Shells
All poly modeled in Maya. This is standard stuff, normal vertex pushing. I’m sort of a topology nerd.
Character Sculpt with Assets in ZBrush
Texturing in Substance Painter
My texturing package of choice is Substance Painter for non-skin elements. It helps a lot in saving time and you can easily see a live preview of the final image.
Substance works great for surface height details that vary in color from the parent surface. As mentioned before, this character has multiple UDIMs, and since you can't paint seamlessly across UDIMs in Painter, you have to choose wisely from the beginning where to divide your UDIMs. But it's not that much of a hassle.
However, there is one note:
I get that Painter is procedural, and hence its power, but you have to be really careful not to fall into the generic texture look that many Painter users produce. The end results carry a sort of visual imprint that makes it easy to recognize a generic texture and tell: oh, that's Substance Painter work.
To overcome this, let us do one thing – “Paint”:
- Put a paint layer on top of your stack and paint your additional details manually using various brushes and alphas.
- Use stencils, lots of stencils, either to paint color or to edit the procedural masks that Painter gives you.
Texture Export to V-Ray
I don’t use the presets, I rather prefer to export the maps and their masks separately and reassemble them manually in my shader network in Maya. This gives me maximum control over the look development process without the need to jump back and forth to Substance.
Sculpting with Light
The story is plain and simple; for presentation I don't use HDRIs that much.
Start with a simple 3-point light scenario:
- A key light: your high-intensity primary light; it defines your major light vs. dark areas.
- Add a fill light to slightly bring light to the dark side.
- A Rim light from behind the character to separate it from the background.
- Add more lights to bring more brightness or specular accents to preferred areas.
To reach a successful light scheme, you should be aware that the final look is driven by the following parameters simultaneously:
- Light direction: move the light to define where the shadow falls and what shape it takes on the object.
- Light size: affects the intensity of the light and the softness of the shadow and the size of specular reflection. Small lights produce sharper highlights which may help define skin detail more.
- Light distance: affects the intensity and size of the shadow and its softness.
- Light intensity
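The light-size and light-distance effects on shadow softness come down to similar triangles: the penumbra grows with the light's size and with how far the receiving surface sits behind the occluder. A sketch of that geometry (a simplification, not any renderer's internal formula):

```python
def penumbra_width(light_size, dist_to_occluder, dist_to_receiver):
    """Approximate soft-shadow penumbra width from similar triangles.
    A bigger light, or a receiver further behind the occluder, yields a
    wider (softer) shadow edge. Distances are measured from the light."""
    return light_size * (dist_to_receiver - dist_to_occluder) / dist_to_occluder
```

Doubling the light size doubles the penumbra width, which matches the rule of thumb above: small lights give hard shadows and sharp highlights, large lights give soft ones.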
Rendering the Character in V-Ray
Memory management can prove to be a real issue especially for a personal project at home with modest resources.
Due to the number of maps and their sizes, your RAM may overflow and the rendering process can literally freeze as a result. The solution is tiled, mipmapped TIFF (.tx) or tiled OpenEXR textures.
This is a process where you convert your textures into mipmapped grids of small tiles: instead of fully loading a 300 MB map into memory, the map is divided into small tiles, and each tile is loaded into memory only on demand while rendering a portion of the image, then unloaded afterwards, freeing memory for other tasks.
You can use the nMakeTx tool to mipmap your textures by converting them into .tx files; it works for V-Ray as well.
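To see what the mipmapping half of this precomputes, here is a minimal numpy sketch that builds a mip pyramid by 2x2 average pooling (grayscale, power-of-two sizes assumed; real .tx files additionally store each level in tiles so the renderer can stream only what a bucket needs):

```python
import numpy as np

def build_mip_chain(tex):
    """Build a mip pyramid by 2x2 average pooling. Tools like maketx
    precompute such a chain and store it tiled, so a renderer can load
    only the resolution level and tiles it actually needs."""
    mips = [tex]
    while tex.shape[0] > 1 and tex.shape[1] > 1:
        h, w = tex.shape[0] // 2, tex.shape[1] // 2
        # Group pixels into 2x2 blocks and average each block.
        tex = tex.reshape(h, 2, w, 2).mean(axis=(1, 3))
        mips.append(tex)
    return mips
```

An 8x8 map yields levels of 8, 4, 2, and 1 pixel per side; a distant or out-of-focus object can be shaded from a tiny level instead of the full map.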
Luckily for Arnold users in Maya, this is just a checkbox you tick on the file node.
Eliminating White Speckles
During your render, you might encounter very bright dots that are stubborn to eliminate.
- Raising your samples is a start but not a total solution.
- Reducing your max ray intensity setting will help a lot, but it may not resolve all the points, and it can cut away some brightness values.
- Surprisingly, I found that the color mapping method used plays a great part in resolving white points. I chose HSV exponential or intensity exponential, as they work better at preserving hue and saturation. Voilà: all the white points are gone!
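A rough sketch of why an HSV-style exponential mapping preserves hue and saturation: it compresses only the value channel, so a superbright pixel is tamed without bleaching its color toward white. This is an illustration of the idea only, not V-Ray's exact implementation:

```python
import colorsys
import math

def hsv_exponential(rgb):
    """Sketch of 'HSV exponential'-style color mapping: compress only the
    value channel with 1 - exp(-v), leaving hue and saturation untouched."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, 1.0 - math.exp(-v))
```

For example, an HDR hot pixel like (10, 0, 0) maps to just under (1, 0, 0): still pure red, instead of clipping or washing out toward white as a plain per-channel clamp would do.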
Final Image and Post Effects
I consider myself really lucky to have found people like Peter Zoppi, Satoshi Arakawa, Christian Bull, and Chris Nichols at CGMA, who helped me a lot to develop the skills and knowledge I wanted. Their artistic and technical insight was deep and well-structured. Probably the most precious thing I learned with them is how to plan and analyze a project like this, manage my resources, design my workflow, and select the right tools to bring a project or task to the finish. Learning Mari with Chris Nichols was also totally priceless. In the end, getting comfortable with the wide toolset and techniques required today to make a realistic character was a milestone for me.
Creating Captain Blackbeard with ZBrush and V-Ray
Interview with Joaquin Cossio
Joaquin Cossio did a detailed breakdown of the Captain Blackbeard character he created as part of an online workshop.
Hi folks. First, thanks for stopping by! My name is Joaquin Cossio and I've been working as a 3D generalist for around 4 years. Recently I've been working as a 3D character artist for cinematics and games. I grew up in a little town on the outskirts of Montevideo, Uruguay, a little country in South America. Once I graduated from high school, I wasn't sure about my future, but when I discovered the 3D world I decided to start my career at a school called Bios. In my third year at Bios, I got my first professional job as a freelance 3D generalist for several local production houses. Working in a professional pipeline helped me learn so much more. Over those years I discovered a tool called ZBrush, which blew my mind and changed the way I did art. Two years later, I got my first project as a character artist at NIKO Post & Films, where I had the opportunity to work on an amazing project for "NBA – Green Energy Team". There I improved my character creation skills, especially in face modeling. Currently, I work as a freelance character artist for Plus Infinity Studios, a small indie game company located in Colorado. I got the job there after finishing an online course at CGMA, thanks to Pete Zoppi, who recommended me even though there were other amazing artists.
Once I decided to focus all my attention on character creation, I knew that I had to improve my skills. After searching the internet for a while, I found this cool course by Pete Zoppi (Character Creation for Film/Cinematics) at CGMA. It got my attention right away, not only because he is an amazing artist but also because I thought it was a great platform for studying. My main goal was to bring together all my knowledge from previous years in a single character. I also found the course to be a good way to learn about the pipeline of big cinematic companies.
The first step, and I think the most important one (at least at the beginning of any project), is to gather references. Usually, I spend one or two days on this stage, depending on the project. Finding the good ones is a tough task, so I use and recommend Pinterest, because you can save everything to boards and come back to them any time. This way you don't need to fill your hard drive with junk. After a specific concept or idea catches my attention, I start to break it down into sections. Making your own references is important as well: if you have a chance to gather references from real life, they will be more useful than a Google search, for sure.
I used Black Sails as the main reference, along with 3D references from other artists who inspire me.
Once you are happy with your references, it’s time to get your hands dirty. The first step is the blockout, where you block in the whole shape of the character without thinking about details. The idea is to get a good shape to start working with. At this point, we can even use base meshes to speed up the process, but be sure to think about how they will be used later.
This is probably the most important part of a character, so I spend most of my time working on it. I will try to explain most of the workflow I use to speed up the process and end up with a very detailed face.
First, I usually start with a base mesh; you can use your own or even download one from the Internet. It’s very important that the mesh contains the main loops and a relatively low poly count so you can work more efficiently (you can download my own base mesh for free on my website).
Second, think mainly about primary forms. I recommend spending more time on this part because it gives you the main shape of the face. Use symmetry at this point to accelerate the process, try to follow references all the time, and use anatomy books as a good source of inspiration.
Third, start adding some asymmetry to your character, mainly at the points of attention like the eyes, nose, mouth, and ears; this will make your character look more natural. What’s very important at this point is using layers. They allow you to go back in case you want to change something and, at the same time, let you use morphs to fix parts you want to modify.
Fourth, once you’re happy, you’re ready to sculpt secondary forms. The good thing is that you don’t need to sculpt them by hand, because you have Texturing XYZ displacement maps. So now we need to export our model at a pretty high resolution to the software you’ll use to project the textures. In my case it’s Mari, but you can use Mudbox as well. The advantage of these maps is that they give you a very detailed result pretty quickly. I recommend you check the tutorials from Texturing XYZ to get a better understanding of how these maps work and how to prepare them before projection.
Once the map is prepared, we can import our model and start to project the displacement. I recommend working at the highest resolution your PC can handle. I used 4K maps, but it’s better to use resolutions of 8K or above; this way you can be sure to preserve the smallest details of the map.
Once finished, we export the three channels separately. These contain the secondary, tertiary, and micro details, which we can then apply as displacement information. In this case I used ZBrush and, as I said before, it’s very important to use layers with levels of detail, as shown in these pictures:
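Conceptually, the channel split and the layered recombination can be sketched in a few lines of Python (a toy illustration; the channel-to-detail mapping and the intensity weights are my assumptions, not official Texturing XYZ values):

```python
# Toy sketch: split a packed RGB detail map into three layers and
# recombine them as one displacement value per pixel. Pixel values
# are floats in 0..1, with 0.5 meaning "no displacement" (mid gray).

def split_channels(rgb_pixels):
    """Split packed RGB pixels into secondary/tertiary/micro detail lists."""
    secondary = [p[0] for p in rgb_pixels]  # R: secondary forms
    tertiary = [p[1] for p in rgb_pixels]   # G: tertiary detail
    micro = [p[2] for p in rgb_pixels]      # B: micro/pore detail
    return secondary, tertiary, micro

def combine_displacement(secondary, tertiary, micro, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of the three detail layers, centered around mid gray,
    mimicking per-layer intensity sliders."""
    return [
        weights[0] * (s - 0.5) + weights[1] * (t - 0.5) + weights[2] * (m - 0.5)
        for s, t, m in zip(secondary, tertiary, micro)
    ]
```

Each layer keeps its own weight, which is exactly why keeping the channels on separate ZBrush layers is so useful: you can dial each level of detail up or down independently.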
Next, we’ll see the character with all the maps applied and some other extra details that are specific to this character.
Detailing the assets
For clothing, I used Marvelous Designer. Adding this amazing tool to my workflow changed it drastically: now I can start with a pretty solid base and later, in ZBrush, only need to add secondary and tertiary forms. Speaking of that, let me outline a very short tutorial on how I export my mesh from MD to ZBrush. 1. First, select what you want to export. Just to keep things organized, it’s better to do it separately. 2. Make sure your particle distance is low; around 5 to 10 is OK. 3. Go to File > Export > OBJ (Selected) and save it. 4. Once the export window shows up, select “Multiple Objects” and “Thin” with “Unified UV Coordinates” active, and select “cm (DAZ Studio)” for the scale. Note: the scale depends on the 3D software you are using; I always use the same setting for 3ds Max.
Once in ZBrush, import your model. As you can see, everything looks really ugly, but don’t worry, we’ll fix it in a few steps. 1. Make sure your model has polygroups. If it doesn’t, you can create them very easily using Auto Groups in the Polygroups panel; then go to the Split panel and select “Groups Split”. 2. Go to the Geometry panel and press ZRemesher, leaving everything at default. 3. In the same panel go to “EdgeLoop” > Panel Loops; set 2 in the edge loop slider, Thickness 0.01, Polish 0, Bevel 0, Elevation -100, and hit the Panel Loop button. Boom, you have a beautiful shell. Now just add a couple of subdivisions and repeat the same process for the rest.
Once that’s done, we are ready to add all the extra details. I used alphas from Surface Mimic for the micro details.
For the hat, on the other hand, I used a classical workflow. I created a simple model using polygons and basic forms in 3ds Max, trying to follow the friendliest topology possible and always keeping in mind that I’d apply a smooth modifier later. Once the base is completed, we proceed to create UVs, add some extra details in ZBrush if needed, and export it to Substance Painter or Quixel. In the texturing process, you can play with different color variations, dirt, scratches, etc. These tools can handle a ton of detail, so we can take advantage of that to speed up the process. I used the same workflow for the rest of the assets.
When we’re creating a skin shader, it’s very important to think in layers. Although new tools let us do it almost automatically, knowing the process lets us understand the concept in a simple way. The challenge is always to achieve a realistic but pleasant look; realistic doesn’t always look nice, so I recommend using different photographic lighting techniques to get more interesting results.
Basically, we only need to control the three main values of the material. “SS density” defines the amount of scatter our model gets from ambient light (low values add more scatter); “SSS mix” controls the mix between the diffuse and SSS materials, with values close to 1 defining a material with more scatter; and “SSS1 color” defines the scatter color, which changes depending on the skin tones. Playing with these three values, we’ll achieve very interesting results.
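As a mental model, the blend the mix slider performs can be sketched in a few lines of Python (a conceptual toy, not the renderer’s actual math; the parameter names just mirror the sliders above):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def skin_mix(diffuse_color, sss1_color, sss_mix):
    """sss_mix close to 0 -> mostly diffuse; close to 1 -> mostly scatter.
    (SS density would further scale how deep the scatter reaches.)"""
    return lerp(diffuse_color, sss1_color, sss_mix)
```

Thinking of the sliders as interpolation weights makes it easier to predict what a small change will do before waiting on a render.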
We can see how the shader is created by applying the diffuse color in the corresponding channel and using a reddish variation of the skin color in the “SS2 color” slot. Adding more skin layers gives us more control over the final result, but the idea here is to keep it as simple as possible.
Hair is hands down one of the hardest parts of any character to work on. It’s always been a step I would sometimes decide to skip due to lack of knowledge. But in this case, I decided the hair would be a very important part of the character because it’s a distinctive feature of his image.
I used Ornatrix for 3ds Max, which is not too easy to use, but once you get the hang of it, it becomes simpler. Even so, the hair took more than a week of work before I liked the result and the character was easy to pose. I used layers to design the hair, and when it came to the beard, I created two separate hair systems: one for the main form and the other for “flyaways”, which let me get a more realistic result that was easier to control. In the same way, I created different layers of modifiers that allowed me to control clumping, frizz, curling, details, and shape.
The best way of presenting your character is to create a small story behind it, a reason why your character was created. It shows that the character has its place in space and time and makes people feel interested in our piece of art. It doesn’t have to be too complex, but it has to have a message.
It took me about a month to create this little story for the final render.
Lighting and Rendering
The lighting also plays a very important role in the scene, since it defines the composition. In this case, I used a basic three-point technique: one warm main light (key light), one cold light (fill light), and a rim light, also cold (back light). It’s important to play around with the color values in post-production to draw more attention to the effect.
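To make the setup concrete, here is a toy Lambert-style sketch of how the three lights add up on a surface point (the directions, colors, and intensities are illustrative values I picked, not the ones from the actual scene):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def shade(normal, lights):
    """Sum simple Lambert contributions; lights are
    (direction_to_light, rgb_color, intensity) triples."""
    n = normalize(normal)
    out = [0.0, 0.0, 0.0]
    for direction, color, intensity in lights:
        lambert = max(0.0, dot(n, normalize(direction)))
        for i in range(3):
            out[i] += color[i] * intensity * lambert
    return tuple(out)

three_point = [
    ((1, 1, 1), (1.0, 0.9, 0.7), 1.0),   # warm key light
    ((-1, 0, 1), (0.6, 0.7, 1.0), 0.4),  # cool fill light
    ((0, 1, -1), (0.6, 0.7, 1.0), 0.6),  # cool rim/back light
]
```

Note how the rim light sits behind the subject: it contributes nothing to front-facing normals and instead outlines the silhouette, which is what separates the character from the background.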
When it comes to posing a character, I recommend using a simple rig because it lets you get different pose variations without wasting time. I used CAT, a built-in rigging system in 3ds Max that’s very powerful and simple, and there are plugins that let you create skeletons automatically. I created the poses from references that show the situation the character is in.
Without a doubt, I’ve learned many new techniques during the course. I came to understand many processes that I used to skip because I lacked the knowledge. I learned that using high-resolution maps lets you work on the smallest details and gives a big boost to your work, at the same time letting you achieve incredible results quickly. I also learned new ways to create shaders, build compositions, and illuminate a scene properly. I want to point out the incredible attention to detail that Pete Zoppi has; thanks to that, he makes your work evolve during the course.
I recommend this course to people who want to improve their workflow and, of course, learn a ton of new tools.
Realistic Pharah Fanart
Interview with Serguei Krikalev
I’m Serguei Krikalev from Rio de Janeiro, Brazil, and I’m 30 years old. I work as a Character, Texture, and LookDev Artist. My main projects are in the advertising segment in Brazil at IMGTV, as well as for 3Dar from Argentina and Ephere Inc. Plus, I love to do personal projects.
Character modeling has always been my passion, so I needed to learn something more professional, see how everything works in a production environment, and how great studios work in that area. Peter Zoppi is a great instructor, and he prepared me in exactly what I needed, such as building characters for efficient deformation, UV mapping and its nuances, texturing, practical rigging for character presentation, and more. It was simply sensational.
From the beginning, I wanted to do an appealing character. I started doing research and found the Pharah interpretation by the amazing artist Yi Sui, and it was love at first sight; her look caught my attention. I built a simple reference table (RefTable), as shown in the picture below.
In this work, I used a base mesh to save time and only made the needed adjustments. The concept itself already has nice facial features, so I only needed to add my own touches based on a mix of other references. In the end, I always make a ZBrush render to see what I need to fix or improve and to finish my blockout step.
The eye construction is relatively simple: I used two geometries, one for the iris/pupil and the other for the sclera, just like most artists do.
The main difference is perhaps in the texture and shader process. All my textures are made in Mari, at least for the organic pieces (I’m still integrating Substance into my pipeline), and with the big help of the Texturing XYZ maps I got amazing details. I made my albedo/SSS color, displacement, and normal maps and exported them to Maya to do the LookDev. An important thing: RenderMan is my choice for the SSS shader. It’s a bit complex to use at first, but after some time you’ll figure it out and never stop using it. A little tip: please consider using the Non-Exponential mode in PxrSurface. For the blending between the sclera and cornea, I used the PxrLayer shader with a ramp to get the desired shape of the cornea. Everything has a simple approach and a lot of trial and error.
Hair is always a challenge, but Ornatrix allows me to create it very easily. Actually, I’m an Ornatrix contributor and beta tester for Ephere. The non-destructive, intuitive system driven by layers allows people with a minimum of knowledge to get awesome results in a very short time, much like working with layers in Photoshop. Ephere has many new things under the hood. Worth a try.
Every artwork needs some planning, and the hair part is no exception. I start by making some notes about the CG hair I want to bring to life, putting down things like frizz, clumps, the hair shapes I’ll need, and other related observations.
With the initial step done, I create a scalp geo with UVs, ready to use as the base for my hair.
After that, I positioned the hair guides to fit the concept.
Here is one of my first render tests after adjusting my guides a bit.
As you can see, hair takes time, so references and patience are the key elements here. Generally, I take a few days to finish hair, and the results are worth the time. Just remember: everything begins with good planning and collecting some references.
In 2018, before I took the course with Peter Zoppi, I finished another class at CGMA, Texturing for Film/Cinematics with Chris Nichols, where I learned amazing texturing techniques. So, when I came to the character course, I had a good background in this area, and Peter pushed my skills even further.
My skin textures are made in Mari. In general, my first channel is the fine displacement, because with it I can generate maps like cavity and specular and use it as a channel mask for the albedo texture. I make simple projections for that and do some render tests to find possible problems and fix them.
When I’m OK with the fine displacement, I move on to my albedo maps using the same approach: projections and render tests. In this case, I mixed a simple color map done in ZBrush with my base projection in Mari to see how everything was going.
Then come more adjustments for the skin tone and some maps that do not depend on the albedo.
This is the part of the project where you need a little patience and observation. After some feedback, I saw that I needed to bring the model a little closer to the original character from the game without losing the features of the concept. So, again, I collected more references for the original Pharah to compare some of the features and obtained the following result.
Marvelous Designer was a big help, as always. The workflow is well known: first, we build the mesh in Marvelous, export it to ZBrush or Maya for retopology, and transfer the attributes in Maya. The main trick here was thinking about the UVs: keep the UV directions in mind, as it makes your life easier when handling the textures.
Here is my first pass after the retopo, plus some render tests with the material visualizer. Take a look at the borders and note that I still needed some work there.
For texturing this piece, I used Substance Painter. Here are my first texture pass and the render test; I made some annotations on the render test, too.
And below is my final piece with the modeling fixed and the texturing finished. You can see I added some wrinkles made in ZBrush to give the clothes a more realistic feel.
RenderMan is my renderer of choice. In version 22, RenderMan was optimized, so it is now extremely comfortable to use. Each version gets more and more user-friendly.
Generally, all my render tests are made at low quality. This way I can tweak some settings and see better what is happening with the samples and antialiasing. Most of my shaders are layered, and this is really helpful: some lobes, like specular and glossiness, can be controlled with more freedom. Of course, good maps like those shown above in the skin section help a lot, too. There is no secret in RenderMan; everything is well documented.
The skin is composed of basically three shaders layered with PxrLayerSurface: one for the skin itself, one for the makeup (this allowed me to control the color and glossiness), and the other for the tattoo. I always try to keep everything as simple as possible. Simple but effective. In one of the workshops from Disney, I learned: “Work smarter, not harder.”
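The idea of the layer stack can be sketched as a simple bottom-up blend (a conceptual toy in Python, not RenderMan code; the masks stand in for the per-layer masks that drive the blend):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def layer_stack(base, layers):
    """Blend layers over a base color bottom-up; each layer is an
    (rgb_color, mask) pair with mask in 0..1, like stacking the
    makeup and tattoo layers over the skin shader."""
    result = base
    for color, mask in layers:
        result = lerp(result, color, mask)
    return result
```

A mask of 0 leaves a layer invisible, so the makeup and tattoo only show up where their masks are painted, while the base skin remains untouched everywhere else.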
Finally, here’s my lighting test:
Guys, I hope this was useful for some of you. See you next time!
Feel free to contact me for more details at: