Part 2 of my blendshape tips posts – This is from a very useful tutorial from Jennifer Conley on the proper workflow to create blendshapes using only a portion of the geometry. For example, when your character is made up of just one piece of geometry, you might want to separate the head from the body and create blendshapes for the head only.
Check out the tutorial for a good explanation on the workflow and the reason you have to do it that way.
To summarize the steps:
* Before you start, it would be good to duplicate the geometry and keep one copy as a backup
1) Select the edge loop where you want to separate the geometry, go to Edit Mesh > Detach Components. Then Separate the geometry.
2) Select the head geometry and duplicate it (this will be your blendshape target). You can duplicate multiple copies from it, but remember to keep one copy untouched so that you can always duplicate more from it when needed.
3) Select the head, then the body (*NOTE: the order is very important here*) and combine the geometry. Merge the vertices (where it was previously separated) and soften the edges so that it looks the same as before. Then delete history.
4) Select your blendshape targets, then the combined geometry, and create the blendshape with ‘Check topology‘ off and deformation order set to ‘Front of Chain‘.
And you are done :)
*Remember to check that your blendshapes are working correctly before you go on to sculpting them. This method is a bit trickier, and things can go wrong easily if you miss a step or two.
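The steps above can be sketched in MEL. This is a hedged sketch, not the tutorial's own script: the object names (body_geo, head_geo, head_target, combined_geo) and the merge threshold are assumptions you would substitute with your own.

```mel
// 1) After detaching the edge loop components, separate the mesh:
polySeparate -ch 0 "body_geo";
// 2) Duplicate the head piece to use as a blendshape target:
duplicate -n "head_target" "head_geo";
// 3) Select the head FIRST, then the body, and combine.
//    Merge the seam vertices and soften the edges, then delete history:
select -r "head_geo" "body_geo";
polyUnite -n "combined_geo";
polyMergeVertex -d 0.001 "combined_geo";  // small threshold closes the seam
polySoftEdge -a 180 "combined_geo";
delete -ch "combined_geo";
// 4) Create the blendshape with topology checking off,
//    at the front of the deformation chain:
blendShape -frontOfChain -topologyCheck 0 "head_target" "combined_geo";
```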
Here’s an extremely useful tutorial from Steven Roselle on how to go about baking topology or geometry changes to all your blendshape targets. This is useful when you already have all your blendshapes created and set up, but realised you need to make some changes to the base model.
In summary, topology changes refer to when you need to add edge loops, delete edges, etc. After you have made the changes to the base model, select the base model and go to Edit Deformers > Blendshape > Bake Topology to Targets.
Geometry changes refer to when you need to pull and push vertices to shape the geometry differently. After you have made the changes to the base model, select a vertex and perform a Transform Component action (hold Shift, right-click, and you will see Transform Component in the drop-down menu). You don’t have to move the vertex; this just creates a ‘Poly Move Vertex‘ node in the inputs, which tricks the software into thinking that there are topology changes. Right-click on the model, go to All Inputs, and re-order the inputs so that ‘Blend Shape’ is at the bottom, with ‘Tweak’ above it, and ‘Poly Move Vertex’ at the top.
Once you are done, again go to Edit Deformers > Blendshape > Bake Topology to Targets.
If you need to make both topology and geometry changes, you can perform all the changes together and just follow the second method (for geometry changes). You might end up with a lot more nodes in the inputs, but just make sure ‘Blend Shape’ is at the bottom, with ‘Tweak’ above it, and ‘Poly Move Vertex’ at the top.
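The ‘Poly Move Vertex’ trick can also be done from a script. A hedged MEL sketch, assuming the base mesh is named base_geo (the vertex index doesn't matter):

```mel
// Selecting a vertex and calling polyMoveVertex without actually
// moving it adds a 'Poly Move Vertex' node to the inputs -- the same
// thing the Transform Component action does interactively:
select -r "base_geo.vtx[0]";
polyMoveVertex;
// Re-order the inputs in the All Inputs dialog as described above,
// then run Edit Deformers > Blendshape > Bake Topology to Targets.
```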
Just a quick one. If your normal maps appear red instead of the usual purplish blue in Unity, go to Build Settings > Player Settings for PC, Mac & Linux and uncheck ‘Use Direct3D 11‘. That should fix the issue.
I have been trying my hand at Unity, mainly to check if my FBX animation exported from Maya reads properly in Unity. Once in a while I get what looks like a double transformation in the scaling of my geometry mesh, particularly for areas skinned to joints that have spline IK applied (e.g. when using a stretchy spline IK for the spine, the body stretches or compresses twice as much, or more, compared to in Maya).
A check in Unity’s console reveals an FBX import warning that “Scale Compensation is not supported by Unity”, and that it “might result in scale imported incorrectly”, with a list of the affected joints. The suggested solution is to “try disabling Scale Compensation in your file”.
To do that in Maya, select the affected joint(s), go to the Attribute Editor, and uncheck ‘Segment Scale Compensate’. That actually recreates in Maya the double-scaling problem I had in Unity, and to solve it I had to modify my stretchy spline IK expression.
So instead of the usual expression like this:
float $spineScale = curveInfo1.arcLength/10.607;
jnt_spine0.scaleX = jnt_spine1.scaleX = jnt_spine2.scaleX = jnt_spine3.scaleX = jnt_spine4.scaleX = jnt_spine5.scaleX = jnt_spine6.scaleX = $spineScale;
I just need this:
float $spineScale = curveInfo1.arcLength/10.607;
jnt_spine0.scaleX = $spineScale;
This is because ‘Segment Scale Compensate’ (checked by default) tells the joints down a chain to ignore the scale changes of their parent joint(s). When it is unchecked, scaling the top joint in the chain applies the same scaling change to each joint down the chain, even though you don’t see the change in the scale values in the channel box. This is how Unity reads the scaling of the joints, which is why my geometry ended up scaling a whole lot more when using the first expression (the scale changes compounded down the chain of joints).
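If you have many joints to fix, the checkbox can be toggled in bulk from MEL. A small sketch that disables the attribute on every joint in the scene (assuming you want it off everywhere, which may not suit every rig):

```mel
// Turn off Segment Scale Compensate on all joints before FBX export:
string $joints[] = `ls -type "joint"`;
for ($jnt in $joints) {
    setAttr ($jnt + ".segmentScaleCompensate") 0;
}
```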
I hope I’m making sense here haha.
I have been having some issues when using Maya’s Camera & Aim with the Vray Physical Camera enabled. The exposure of my render seems to change as the camera moves nearer to or further from the target. The problem is not always obvious, so sometimes I would just animate the intensity of the lights to compensate for the change. I only really noticed it recently when working with a huge scene: the exposure difference was so huge that it just didn’t make sense.
After some digging around, it seems it’s the camera’s ‘Center of Interest‘ that is causing the problem. When you are using Camera & Aim, the center of interest changes with the camera’s and camera aim’s position. I think this is connected to the focal length of the physical camera, which affects the exposure (I’m not sure how and why though). So in the end I just broke the connections for the center of interest, and that seems to solve the problem (it doesn’t look like it affects my animation).
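Breaking the connection can also be scripted. A hedged MEL sketch, assuming the camera shape is named cameraShape1 (substitute your own; this mirrors the channel box's Break Connections action):

```mel
// Find and break any incoming connection on centerOfInterest:
string $conns[] = `listConnections -plugs true -source true
                   -destination false "cameraShape1.centerOfInterest"`;
for ($src in $conns) {
    disconnectAttr $src "cameraShape1.centerOfInterest";
}
```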
I saw a post on CGSociety mentioning that enabling ‘Specify focus’ under the physical camera settings also helps. So that’s an alternative should the above method give you problems.
I can’t believe I have been missing out on this useful technique all this while.
This is most useful when you are using a Layered Texture (e.g. to plug into the color node of your MR / Vray shader), and would like the texture file nodes of each layer to be affected by a different UV layout.
Here’s a tutorial by Parker Davidson on Lesterbanks (a place for great 3D resources!) that gives a good explanation of how to use multiple UV sets together with the layered texture.
In summary, to create multiple UV sets for a single object,
– Go to the UV texture editor, under the Polygons menu > Copy UVs to UV set > Copy into New UV set (or just create an empty UV set)
– Name your UV sets, and you can switch among them under the UV Sets menu
– You can freely edit the UVs of each set or reapply certain UV mappings without affecting the other sets.
To link a texture file node to a specific UV set:
– Go to Relationship Editor > UV Linking > Texture-Centric
– Select the file node on the left, select the corresponding UV set you would like to use on the right
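Both steps can also be done in MEL. A sketch under assumed names (the object pPlane1, the file node detailFile, and the uvSet[1] index are all placeholders for your own scene):

```mel
// Copy the current UVs into a new, independently editable set:
polyUVSet -copy -newUVSet "detailUVs" "pPlane1";
// Make it the current set for editing in the UV Texture Editor:
polyUVSet -currentUVSet -uvSet "detailUVs" "pPlane1";
// Link the file texture node to that UV set (the script equivalent
// of the Relationship Editor's Texture-Centric UV linking):
uvLink -uvSet "pPlane1.uvSet[1].uvSetName" -texture "detailFile";
```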
How is this useful?
Imagine that you are adding details to just a small corner of your object. Your texture resolution has to be really big for those details to be sharp. However, if you use a layered texture to add the details on top of your base color / texture, then with multiple UV sets you can scale up one set without affecting the rest, and thus use a much smaller texture resolution. You also gain the freedom to move the details around by shifting the UVs in their own UV set.
Cinema 4D’s mograph tools are really powerful, and I was looking at its rigid body simulation capabilities (using the mograph tools), which seem like a very good alternative to Maya’s native rigid body dynamics; the latter can be slow to calculate, or produce some crazy results, when you try to simulate a large group of objects in one go.
Thus I was researching ways to bring C4D’s mograph animation into Maya, and I was pleasantly surprised to find that it can be done really easily with the new Alembic format support in C4D R14. All you need to do is export your mograph cloner in the Alembic format in C4D, and then from Maya’s Animation menu, go to Pipeline Cache > Alembic Cache > Import Alembic, select your file and you are done!
If you wish to have further control in Maya, such as the ability to modify the keyframes of each cloned object (for effects such as bullet time or simply to move or hide a particular clone), one method will be to bake the mograph animation in C4D first before exporting it in the Alembic format.
Here’s a useful tutorial I’ve found on how to bake your mograph animation in C4D: http://www.youtube.com/watch?gl=SG&hl=en-GB&v=b-CnSGjkp68. It also shows the process of setting up a rigid body fracture animation using mograph, then baking the animation so that you will have editable keyframes for each individual cloner.
After you import the Alembic file into Maya, bake animation for all the objects and you will have editable keyframes in Maya too!
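The import and bake can be scripted too. A hedged MEL sketch, where the file path, frame range, and the clone_* naming pattern are all assumptions to be replaced with your own:

```mel
// Import the Alembic file exported from C4D:
AbcImport -mode import "mograph.abc";
// Select the imported clone transforms (pattern assumed) and bake
// their animation into editable keyframes:
select -r "clone_*";
bakeResults -simulation true -t "1:120" -sampleBy 1;
```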
This method of baking animation in C4D is more of a workaround, and thus a little inefficient. I would be happy if anyone can share a better method :)