Studio 3 Development Blog 12

Today I want to cover what I plan to focus on now that studio 3 is complete.

In studio 2 I focused on organic sculpting techniques and began the transition to PBR shaders. In studio 3 I focused on hard surface modelling, lighting and continued to increase my knowledge of PBR shaders.

Now that I have moved on from studio I want to start pursuing what I am really interested in, which is character and creature creation. Over studios 2 and 3 I made sure to choose topics that I could consolidate and put to use later down the track.

Later down the track is now!

Moving forward I want to really focus on honing my organic and hard surface modelling techniques, particularly with regard to sculpting characters and creatures.

A key part of unifying the organic and hard surface pipelines is making the transition from Autodesk's Mudbox to Pixologic's ZBrush, as it is the industry-standard sculpting program and has much more powerful tools for hard surface sculpting.

To help progress my knowledge in these areas, and because ZBrush has a very steep initial learning curve (due to a very unintuitive UI), I am using self-directed learning techniques, including working through as many tutorials on the topic as I can find.

Once I have worked out how to use ZBrush effectively, the next step is simply practice.

Below are samples of the tutorials I have been using.

So, to wrap up: the key focus going forward, combining what I have learned in studio 3 and my prior studies, is transitioning from Mudbox to ZBrush and incorporating both organic and hard surface techniques to create appealing characters and creatures for use in either animation or games. Applying and combining these techniques should really help to raise the overall level of polish and appeal in my work.

Thank you and till next time,

James Day – 1002467

Studio 3 Development Blog 11

Film Shot in Relation to my Final Product

The shot I chose as inspiration for my World Builders project shot was the one below, from a cutscene in Diablo 3.

Film Deconstruction Scene

Below is a frame from the shot I produced as part of my world builders project.

James_v2.0045.png

While the atmosphere of the two shots is quite different, I tried to give mine a similar feel through the choice of camera angle and focal points.

The key similarity I tried to work on was the camera angle. The low angle looking up at the focal point helps give it a sense of grandeur and power. In the reference shot the angle is quite acute, really emphasising these qualities and portraying the might and opulence of heaven; I used a less extreme angle with the intention of underplaying the superiority of the race.

I did this because, while in the Diablo universe angels tend to be portrayed as more powerful than their demonic counterparts (comparing the average angel against the average demon), in the Darksiders novel that my World Builders project is based on they are simply another race. Their architecture is grand and their doctrines are law and order, but they are not inherently more divine than any other race in the universe, and not nearly the most potent.

With this in mind I played down the majesty gained from the camera angle and built their sense of grandeur into the building materials and architecture instead. This was a combination of the reference image and the rich textual description given in the novel.

I thought that incorporating elements of a more binary portrayal of heaven into one that underplays its sense of divinity would help anchor it to what the target market would already believe heaven to be like, and so make it more believable for the audience.

Moreover, the reference shot has both a primary and a secondary focal point: the primary is the gates and the secondary is the character in the foreground. If you combine the character in the foreground with the pillar in the middle ground, you can see that they form framing elements that help draw your eye to the primary focus.

In my world builders shot I tried to draw inspiration from this, not by adding a secondary focal point but by starkly framing the main focal point with statuary.

A little more about world builders and what could be improved on

In the post mortem blog post I did regarding the world builders project I highlighted some things that did not go as well as they could have. Here I would like to reiterate some of those points with the benefit of more hindsight and in a more analytical fashion than I did in the post mortem, this time with the bigger picture and solutions at the forefront of the discussion rather than the problems themselves.

I think the biggest breakdown in the production process of our project was communication. While we did have some face to face time and had other communication tools at our disposal, which did get used, I don’t think it was as much of a priority during the production phase of the project as it should have been.

As leader of the group, I think this comes down to me not establishing communication as the priority it needed to be. Our group structure didn't help either: deciding to take on several environments was ambitious, but it broke our group into smaller sub-groups that had less need to communicate their progress and struggles with the rest of the team, and so mostly only did so among themselves.

To help counter this in future I think that having more shared assets or having everyone work on part of each of the environments would have helped prevent this breakdown in communication. Keeping everyone involved in the entire production might have helped to promote a more productive and communicative environment rather than splitting our talents and resources.

In the case that a split team was necessary, perhaps weekly progress updates and meetings that thoroughly explored how each member of the team was performing and how they were coping with their workload would have helped keep unity within the team.

Implementing such meetings might also have helped avert one of the biggest troubles our group had, and so might have vastly improved the quality of our deliverables.

A more production specific example of something that could have been improved within the project was the creation of the angel structures. When creating these assets I tried to incorporate a modular approach as I had very little time to complete them. I created wall panels, corner pieces, stairs and a roof piece that could fit together and create some variety within the cityscape.

While I think that a modular approach was the right approach for the task, I do not have a great deal of experience with modular workflows and so I think that halving the time spent on the assets to put some time into research and development of my skills before beginning the task would have been a wise use of my time.

With the benefit of hindsight, I would have liked to have created a greater number of smaller pieces that could have been arranged in different ways to give a more varied and dynamic look to the environment, rather than the very flat-packed feel that I think it ended up with.

The key takeaway here is definitely to incorporate more R&D into the production pipeline, both to ensure that my skill set is up to the task and to ensure that the time I spend producing assets is used efficiently and the assets are up to standard.

I know it has been a long one but thank you for trudging through it,

As always, till next time,

James Day – 1002467

Studio 3 Development Blog 10

Welcome back to another instalment of my development series. Today we are going to look at some research into Hard Surface Modelling and Topology.

I want to cover the two together because topology is one of those things that changes depending on what you are doing with the model: if it has to deform, the mesh has to be different from that of a static object. So combining topology with a specific purpose will help to refine the topic for us.

The first thing to remember is to keep in mind the overall goal of the model. Is it going to be used in a first person game? A third person game? Film? Is it something that the audience is going to be focusing on, or something that will only be seen in passing for a fraction of a second?

This is important because it affects what you need to do with the model. If it is for a first person game, it needs to be functional and detailed in all the areas that will be seen the majority of the time, and the UV space can be weighted based on how far each part sits from the camera to ensure a consistent level of detail across the model. If it is for a third person game, you can keep the level of detail the same over the entire model, but it still needs to function, so all the appropriate pieces need to be separate and set up in a way that they can be effectively animated. If it is a showpiece that doesn't need to be animated, you can forgo some details and some parts so long as it looks the way it should.
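To put a rough number on that first point, below is a minimal back-of-the-envelope sketch (my own illustration, assuming a simple pinhole camera and a 1080p target rather than any particular engine's LOD system) of how you might estimate how much texture resolution an asset actually warrants at a given camera distance:

```python
import math

def screen_coverage_px(object_size_m, distance_m, vertical_fov_deg, screen_height_px):
    """Approximate on-screen height, in pixels, of an object at a given distance."""
    # Height of the visible world at that distance for a simple pinhole camera.
    visible_height_m = 2.0 * distance_m * math.tan(math.radians(vertical_fov_deg) / 2.0)
    return object_size_m / visible_height_m * screen_height_px

def suggested_texture_size(object_size_m, distance_m,
                           vertical_fov_deg=60.0, screen_height_px=1080):
    """Round the pixel coverage up to the next power-of-two texture resolution."""
    coverage = screen_coverage_px(object_size_m, distance_m,
                                  vertical_fov_deg, screen_height_px)
    return 2 ** max(1, math.ceil(math.log2(max(coverage, 1.0))))

# Hypothetical example: a 0.5 m weapon held 0.6 m from a first person camera,
# versus the same asset seen from 10 m away in a third person view.
print(suggested_texture_size(0.5, 0.6))   # close up: a ~1024 px map is justified
print(suggested_texture_size(0.5, 10.0))  # at range: ~64 px would already cover it
```

The exact numbers matter less than the ratio: the same asset can justify a texture more than ten times larger when it lives right in front of a first person camera.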

The same sort of principles apply to topology. If the surface doesn't need to deform, its topology is much less important than you might first think. With no deformation you don't have to be as careful with quads, triangles and n-gons; provided the model functions as intended and looks the way you want it to, it is working as intended.

Organic modelling doesn't have this freedom, but freedom always comes with a cost: you have to pay much more attention to some of the smaller things that can sometimes get overlooked, such as smoothing groups, supporting edges and the intersections of materials and parts.

This is not to say that smoothing groups, supporting edges and the intersection of materials and parts are less important in hard surface modelling. Sometimes they can be the difference between a successful model and an unsuccessful one, a good model and a great model.

Initial, Initial_close

Here you can see the original model and the level of detail I intended for it. The topology is a bit off and there are some errors in the mesh, but on the whole it's not too bad. Without the errors in the mesh it would have been serviceable for its intended purpose.

However, with the errors that it had, I had to rebuild the mesh. Naturally, this made me search for more efficient methods than traditional retopology tools, which brought me to the ZBrush DynaMesh workflow.

It was not as successful for me as it could have been, but it was an eye-opening and horizon-broadening experience. Here are the results I got from the process.

Retop_experiment, Retop_experiment_close

From a distance it looks alright, but once you get close you can see where the errors in my execution of the methodology have let me down. With some more research and practice with the operations, much cleaner and sharper topology can be achieved, as well as a more efficient mesh.

I have seen excellent results with this method, so it is something I have decided to put aside for further research and practice at a later date.

So for this project, with the help of hindsight, I decided the best way to fix the problem was to rebuild the mesh without the errors and in a more animation-friendly form, so that if I needed to demonstrate the usefulness of the model it could be done. The result is below.

Final

That is not to say this is a perfect model. It is functional and nearly error-free; however, I just do not currently possess the knowledge of gun anatomy to produce a truly functional and ideally constructed model of one.

I achieved a visually pleasing result and it was able to perform its intended function.

I utilised the Quixel Suite and 3DS Max 2017 for the final render.

Scene_Good_Maps_26DB_1HR

The materials need a little polishing but overall I am happy with the result.

Hopefully this can be a useful starting point for further research.

As always, Thank you and till next time.

James Day – 1002467

References and Resources

Game Assets – ProBoolean Dynamesh Workflow – Part 1. (2016). YouTube. Retrieved 8 August 2016, from https://www.youtube.com/watch?v=mj3qPPmk16M

Grenade Tutorial – Part 1 – Modeling & UV Layout – 3Ds Max 2016. (2016). YouTube. Retrieved 8 August 2016, from https://www.youtube.com/watch?v=Y83FLL6TqF0

Gun modeling for FPP games. (2000). Piratportfolio.com. Retrieved 8 August 2016, from http://piratportfolio.com/fpp_eng/

Proboolean + Dynamesh hardsurface workflow tutorial. (2011). polycount. Retrieved 8 August 2016, from http://polycount.com/discussion/168610/proboolean-dynamesh-hardsurface-workflow-tutorial

Tor Frick. (2016). YouTube. Retrieved 8 August 2016, from https://www.youtube.com/user/Askguden

Studio 3 Development Blog 9

Welcome back to another instalment of my studio development blog. Today is going to be a big one: we are going to go through a post mortem dissection of the World Builders Project that I have just completed.

Today we are going to scour every aspect of the project from inception to completion so that we can isolate what went well and what did not and how we can best learn from the project as a whole.

Forewarning: it's going to be a long one.

A brief brief of the project: the goal, in teams, was to create individual shots of an environment built from the descriptions within a book.

Our group, after much deliberation, settled on the book Darksiders: The Abomination Vault by Ari Marmell. We chose to create several environments from the book with the intention of telling a little of the story through our sequence of shots.

This is our final product.

First, I want to take a look at how our team and myself did in regards to teamwork, engagement, processes and the outcome of the project.

Starting with teamwork and engagement, as they go hand in hand: as a team, for the most part, I think we worked well together. I was the project leader, as the book we chose was my suggestion. This isn't a role I have assumed from day one before, so there were a few teething issues, though nothing serious. I did the majority of the documentation, including the entire project plan, which was thorough and included an easy-to-follow schedule that didn't get followed, and clear milestones that were not met very often.

I think one of the key problems our team had was not following the project plan strictly. It laid out deadlines that, if met, would have made our end result much nicer, especially given the extra time we had at the end of the project to work on it further.

This was in part because I struggle with asserting myself, even when I have the authority to do so. Being more strict and stern in my leadership role is something that I think would have helped greatly.

While acknowledging my failure as a leader in that respect, the team also didn't take the initiative on that front to make sure they were self-motivated enough to follow the plan without immediate and constant guidance.

In future projects where I am leader, I will be more strict in making sure that the project plan gets followed and milestones are met. This way there will be no problems in ensuring that the project gets delivered to the required quality, on time and without fail. Moreover, in the event of an extension, the team is then able to create something even better again.

This leads me on to group engagement, an area where our team had serious issues. On our team there were six people, and we decided to split into smaller groups for each environment. This worked well for getting a variety of areas done with a variety of subject matter, and it gave each member the opportunity to work on something a little closer to what they wanted to do. This was done, in part, to give the team more ownership of their work, to help motivate them to actually want to do the work rather than just needing to.

This was both a good and a bad thing. It worked to an extent, but without everyone working on the same thing it became much less necessary to keep everyone apprised of where we were up to, so communication suffered, and there were also fewer overt opportunities to ensure everyone was staying on track with their workload.

In one part of the group there was a particular member who did not contribute a single asset to their environment and only produced one quick piece of concept art for their shot, because it was required for the class. They still came in once to collect the finished project files so they could do their required shot without having done any of the work behind it, and that shot then wasn't even made available to the group to be included in the sequence until an hour and a half before the showing of said sequence.

This was a huge blow to the project. Not only was the project then missing an important establishing shot for the world, but the assets that they had 2-3 weeks to produce to the required quality were instead rushed in 2 days once we realised they were not going to get done unless we did them ourselves.

While I took steps to make sure everyone did what they had to do, by not taking decisive action earlier our team lost weeks' worth of time in which the assets could have been redistributed. This was again in part my problem as leader, as I didn't want to hinder a classmate's chance of passing the course because they were struggling. However, when it became apparent that they were just not engaging with the project, I approached my tutors and made the decision to redistribute their assets.

Although the substitute assets were not of the highest quality due to time constraints, they at least got done, allowing two other members to complete their work.

With regard to the processes undertaken to complete the project, I think our team did quite well. We had an effective pre-production phase that included several pieces of concept art, thorough research into our ideas and the result we wanted to achieve, and a very in-depth project plan which, although not followed to the letter, gave us a very good starting point for what work needed to be done and when.

This was adapted over the course of the project as circumstances changed and the deliverable date was moved.

All in all I think the processes our group used were effective and helped us to achieve the outcome that we did.

Although it is not quite to the standard I would have liked, I think our final product was quite good. It could have been better if some of the aforementioned circumstances had not arisen, or had been handled better when they did, but all in all I am satisfied with the result.

Our final deliverable was a 1920 x 1080, H.264-compressed video. Each member produced one of the shots in the video and I compiled them all and did the editing, meeting the brief's requirements.

We posted our video to YouTube as our chosen publishing platform, based on the research into our target market carried out as part of the project planning document.

In terms of the workflow we used to complete the project: we started with pre-production, where we decided on our idea and researched and concepted ideas for our shots and environments. From there we moved to producing basic assets to set up a dummy scene, or pre-visualisation. After this was complete we began production on the final assets, which were swapped into the previs scene as they were completed until the whole scene was finished. Asset production followed a basic PBR pipeline. After the scene was completed, each member rendered out their shot before all five were compiled into a single sequence and exported in the appropriate format.

All in all the workflow that we used was successful in terms of the completion of the project in a timely and efficient manner.

A list of the tasks that I completed as part of the World Builders Project.

I wrote the project plan document and all accompanying research and parts of the art bible and style guide, contributed to the mood board for the entire project, did several pieces of concept art and development for the environment I was involved in and for my shot, completed the film shot deconstruction, created the pre-vis scene for the white city environment, created the gates, walls, bridge, towers, modular buildings and accessories for the white city scene, and textured and implemented said assets. I compiled the white city (including other people's assets) into the final scene, did the lighting, and rendered out initial versions of my shot and another member's shot. I did three iterations of my initial shot (which gave a better result, though on viewing the final output I could have done better). Finally, I compiled all the shots into one sequence, did the transitions, sourced the music and created the 3D version of the title card.

Also as part of my duties as team leader I liaised with the facilitators (clients) regarding the team’s progress and any issues that we encountered, maintained updated versions of documentation and made sure I was available as much as I could be to offer my team what assistance I could.

As I was leader for this project, I think it is only appropriate that I critically evaluate how I went in this role.

Prior to this, the only time that I had been in any sort of leadership role on an animation project was in studio 2, where I took over when the previous leader stepped down. In that instance the framework for the project was already set up; all I had to do was make sure what needed to be done was getting done and help the team achieve its goals.

This time around, I was leader from the inception of the project. While I am not sure that my personality suits that of a leader, I am certainly able to complete the tasks required of one: documentation, setting up the project frameworks and getting the ball rolling. Where I have noticed that I fall down as a leader is that I am not assertive enough to tell people what needs to be done in a way that makes them understand it NEEDS to be done, not just done whenever they feel like it.

This is something that I have noticed in other areas of my life and so is something that I am working on improving. I think that taking on leadership roles is a great way to better myself in this area and so improve my skills as a leader.

The other area, linked to this, is that I am not strict enough in making sure my team meets their deadlines. I have a tendency to avoid conflict, so I can be a little too soft-spoken and my words don't carry the authority they need to get the idea across. This is also something I am working on, and it will come with more practice and life experience.

All in all, I don't think I failed as a leader in this project, but I do think there is room for significant improvement in this area for me.

I know it has been a long one.

Thank you for sticking it out,

As always,

Thank you and till next time.

James Day – 1002467

Studio 3 Development Blog 8

Welcome back,

Today we are looking into lighting in a 3D computer generated scene. Basically for anything produced in a 3D modelling package.

I decided to look into lighting as part of my specialisation project because it is an area where I am lacking in knowledge and experience, and improving it should lift the overall quality and presentation of my work.

Lighting is both as simple as it sounds and far, far more complicated than it seems at first. On the surface, lighting refers to just that: how you, as the creator, light the scene. Light determines how all of the objects look and what can be seen, even more so now that PBR is the industry standard.

Lighting is far more than just making things visible; it also affects the mood of a scene or shot and what information is conveyed to the viewer. For instance, you could change the mood of a scene from serious and intense to dangerous just by switching from a variant of 3 point lighting to silhouetted lighting.

So what is 3 Point lighting?

3 Point

This is the standard setup for 3 Point lighting.

The Key Light is the main source of light for the subject. It tends to be a white light and the key source of shadow.

The Fill Light is a dimmer light that tends to be warm in colour and shouldn't overpower the key light. Its purpose is to fill in some of the shadows cast by the key light; by revealing more information about the subject, it adds more interest to it.

The Back Light tends to be cool in colour and is sometimes referred to as a Rim Light as its job is to create a sort of silhouette of light around the edge of the subject. This helps to more clearly separate the subject of focus from the background and give more depth to the scene.

3 Point lighting is the most commonly used lighting setup in the industry.
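To make the geometry of this setup a little more concrete, here is a minimal sketch (my own illustration; the placement angles and intensity ratios are typical values I have assumed, not figures taken from the diagram above) that works out where each of the three lights would sit around a subject at the origin, with the camera looking down the -Z axis:

```python
import math

def light_position(azimuth_deg, elevation_deg, distance):
    """Convert an azimuth/elevation pair (relative to the camera axis) into an XYZ position."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.sin(el)
    z = distance * math.cos(el) * math.cos(az)
    return (round(x, 2), round(y, 2), round(z, 2))

# Assumed values: key roughly 45 degrees off the camera axis and raised, warm fill on
# the opposite side at well under the key's intensity, cool back light behind and above.
three_point = {
    "key":  {"pos": light_position(-45, 35, 5.0), "intensity": 1.0, "colour": "neutral white"},
    "fill": {"pos": light_position(40, 15, 5.0),  "intensity": 0.4, "colour": "warm"},
    "back": {"pos": light_position(170, 45, 5.0), "intensity": 0.6, "colour": "cool"},
}

for name, light in three_point.items():
    print(f"{name:>4}: position {light['pos']}, intensity {light['intensity']}, colour {light['colour']}")
```

Plugging positions like these into any 3D package's light objects gives a starting point; the exact angles and ratios then get adjusted by eye for the subject and the mood of the shot.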

Clearly, however, 3 Point lighting is an artificial lighting setup. What if you want to create a realistically lit scene?

Well absolutely, go for it, see how it goes, you might like it. That is always my strategy to begin with: I think getting a scene's lights set up based on the light sources within the scene is always a good place to start before you look at what needs to be enhanced.

Once the scene-based lighting is in place you have a good idea of what the scene conveys. Let's have a look at a quick example that I set up for this purpose using a couple of base meshes from Mudbox and a few simple objects I created.

Scene_Lights.png

This scene is lit by a skylight (simulating the moon casting light over everything) and by the street lamps. There are a few errors I would correct if this were a production: the lamp light would be an omni light set into the lamp, which would then bounce light back down, but I cheated and just used a downward-facing spotlight, which did the job I required of it; and there is no shadow under the car, an oversight in this render.

As you can see, there are two objects that are intended to be seen and interesting: the car and the T-rex. As it stands, while the contrast in materials means you can see them both quite clearly, only the car seems to be of relevance, and that is because it is lit.

Here is another pass of lighting where I added 3 Point lighting to help accentuate the car even further.

Scene_Lights_3P_MS.png

Here it seems to be much more the focus of the shot. The fill light is a little intense for my liking but you can see how much more interest it adds to the object. When adding 3 Point lighting to help a particular subject, I think it is always important to ensure it has as little impact on anything other than the subject in question. If it affects the environment it is going to look artificial rather than just enhancing what is already there.

Next we add a few more enhancing lights to help get across the point of the shot.

Scene_Lights_3P_MSnSS.png

Now both subjects can be easily distinguished from the background, and we, the viewers, know what we are supposed to be looking at.

So much of the animation industry is about making sure a scene clearly conveys the information in it. Everything from the design of the objects, the composition of the scene, the timing and positioning of animation and lighting are all used in conjunction to convey what is happening and where the viewer should be looking.

Through lighting and composition, these frames clearly show that the audience should look at the car and the T-rex; the rest is just the setting, and the two focal objects are where all the action is going to take place.

I look forward to seeing how I can use lighting more dynamically in my scenes to help them read more clearly, and even just in the presentation of models, such as this one.

Cassul_13Hours.png

APA referencing to follow.

Thank you and as always,

Till next time,

James Day – 1002467

References

4 Basic Lighting Setups. (2014). Improve Photography. Retrieved 15 July 2016, from http://improvephotography.com/flash-photography-basics-9/

24 things you need to know about lighting | 3D Artist – Animation, Models, Inspiration & Advice | 3DArtist Magazine. (2016). 3dartistonline.com. Retrieved 15 July 2016, from http://www.3dartistonline.com/news/2015/04/24-things-you-need-to-know-about-lighting/

Applying 3-Point Lighting. (2013). Videomaker.com. Retrieved 15 July 2016, from https://www.videomaker.com/article/c13/12230-applying-3-point-lighting

CG Lighting Tutorial: 10 Tips. (2016). Animation Mentor Blog. Retrieved 15 July 2016, from http://blog.animationmentor.com/cg-lighting-tutorial-10-tips/

Explainer: film lighting. (2014). The Conversation. Retrieved 15 July 2016, from http://theconversation.com/explainer-film-lighting-30658

How To Set Up 3-Point Lighting for Film, Video and Photography. (2016). YouTube. Retrieved 15 July 2016, from https://www.youtube.com/watch?v=w3xYPOiPtE4

Issues with Large 3D Animation Scenes Light and Replication (Tutorial Part 1). (2016). YouTube. Retrieved 15 July 2016, from https://www.youtube.com/watch?v=i93y-MFu05c

Light Source: In the Mood? Creating Mood with Light. (2016). Videomaker.com. Retrieved 15 July 2016, from https://www.videomaker.com/article/c13/7980-light-source-in-the-mood-creating-mood-with-light

Light Source: Lighting for Mood. (2016). Videomaker.com. Retrieved 15 July 2016, from https://www.videomaker.com/article/c13/10216-light-source-lighting-for-mood

Lighting Styles. (2016). Facweb.cs.depaul.edu. Retrieved 15 July 2016, from http://facweb.cs.depaul.edu/sgrais/lighting_styles.htm

Three Point Lighting Tutorial. (2016). 3drender.com. Retrieved 15 July 2016, from http://www.3drender.com/light/3point.html

Three Point Lighting Tutorial for 3d Animators. (2016). YouTube. Retrieved 15 July 2016, from https://www.youtube.com/watch?v=lWwt9f8WMAU

Studio 3 Development Blog 7

Welcome back,

Today we are going to have quite an in-depth look at a specific area of PBR materials. I started looking into PBR more closely when 3DS Max 2017 included a physical shader in its material editor. Its physical material, however, is quite different to that of UE4 and thus quite different to what I am used to. Nevertheless, I thought it would be a great tool for doing beauty renders and presenting my models, rather than resorting to what can be done in real-time engines.

As what I am focusing on this trimester is suited to pre-rendering rather than real time, I thought looking into the physical shader in Max 2017 would be a wise idea. I created a model and followed the general PBR pipeline that I use: I got the standard PBR maps (albedo, roughness, metalness and normal), a standard base for any material, plugged them into their slots on the physical shader and… it looked like plastic…

So I did some research. As with all software, the more powerful it is, the more you need to know to get it to work right. The main contributing factor to the lack of success of my shader was the IOR value (Index of Refraction). It basically controls the refractive nature of the shader and, subsequently, of the object that shader is applied to.

In theory it's a pretty simple concept: the index is a relative value comparing the speed of light in a vacuum to the speed at which light travels through the medium. The index value tends to indicate the density of the object; that is to say, air has an IOR value very close to 1 (1.0003 to be specific), water is 1.333 and glass is around 1.5 to 1.6 depending on the type. These values can easily be found by looking up an IOR table.

Thinking about it a little further, if you wanted a super realistic scene you could take into account lots of other physical parameters that affect the IOR value of an object. Given that IOR is connected to density, you could adjust the IOR value accordingly. It is also worth noting the other things density depends on, such as the pressure or forces the material is under and its temperature (both particularly significant for gases and other compressible substances). For example, the higher the altitude of the scene, the closer the IOR value of air gets to 1, as the density of the air drops with the lowering of atmospheric pressure; or the deeper in the ocean you go, the higher the pressure and thus the density of the water (although water compresses much less than a lot of other substances), and so the IOR value would increase, although only marginally.
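As a rough worked example of the altitude point, here is a small sketch of my own; it assumes that for a gas the quantity (IOR - 1) scales roughly with density, and that the atmosphere is isothermal with a scale height of about 8.5 km:

```python
import math

SEA_LEVEL_IOR = 1.0003   # approximate IOR of air at sea level, as per the table values above
SCALE_HEIGHT_M = 8500.0  # rough atmospheric scale height (isothermal approximation)

def air_ior_at_altitude(altitude_m):
    """Estimate the IOR of air at altitude by scaling (IOR - 1) with the air density ratio."""
    density_ratio = math.exp(-altitude_m / SCALE_HEIGHT_M)
    return 1.0 + (SEA_LEVEL_IOR - 1.0) * density_ratio

for h in (0, 2000, 5000, 8848):
    print(f"{h:>5} m: IOR ~ {air_ior_at_altitude(h):.6f}")
```

Even at the summit of Everest the value only drops to about 1.0001, which is why, outside of extreme realism, air is usually just treated as an IOR of 1.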

Through some research I found that with little more than an IOR value and some base colours you can get some fairly good basic materials. This suggested to me that the IOR value was super powerful and super important, but at the end of the day it was just a single value for the whole material.

What about objects that are made of several sub-materials?

Surely they wouldn't pigeonhole you into using multi-sub-object materials for every model?

It turns out you can control the IOR values for separate parts of the model through the use of an IOR map. Since IOR is only a single value, it logically follows that a greyscale map works to control it across the material.

3DS Max’s documentation says that when using a map to control IOR the material always interpolates the IOR value between 1 and the value set by the material.

In the physical shader the IOR values you can enter range from 0.1 to 50; regardless of what you enter, the value never goes outside that range. Additionally, as the map-based control is greyscale, the corresponding values are bounded by the 0-255 range of RGB. We also know that the map interpolates between the IOR value set in the material and 1. Assuming a linear interpolation, it follows that it shouldn't be too hard to formulate a way to predict the active IOR value from the set IOR and the RGB value in the map.

A linear graph has the form y = ax + b, where 'a' and 'b' are constants. In our case y (what we want) is the active IOR value and x is our RGB value. We can safely say that our constant 'b' is 1, as that is the minimum value for IOR when the set value is above 1 and the maximum value when the set value is between 0 and 1. So currently we are looking at something like IOR_actual = a * RGB + 1. As the actual IOR value we want is based on the set IOR value of the material, we know that our constant 'a' must be that same scalar.

Of course we have neglected that the RGB value there is also a relative RGB value and so should be actualRGB/maximumRGB or RGB/255.

So from all of that we get that the IOR value we want should be IOR_actual = IOR_set * (RGB/255) + 1. To compensate for the fact that IOR_actual cannot exceed IOR_set, we can modify the equation to IOR_actual = IOR_set * (RGB/255) + (1 - (RGB/255)).

This keeps the result bounded between 1 and the set IOR.

Some simple tests, assuming IOR_set = 10:

White RGB is 255:

IOR_actual = 10*(255/255) + (1 - (255/255)) = 10

Black RGB is 0:

IOR_actual = 10*(0/255) + (1 - (0/255)) = 1

So the extremes hold true.
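Purely as a sanity check on the formula, here is a minimal sketch of my own (it assumes the linear interpolation derived above rather than anything confirmed by the 3DS Max documentation) that tabulates the predicted effective IOR for the same set IOR and map values used in the sphere grid further below:

```python
def effective_ior(set_ior, rgb):
    """Predicted IOR when a greyscale map value interpolates between 1 and the set IOR."""
    t = rgb / 255.0
    return set_ior * t + (1.0 - t)

set_iors = [50, 25, 10, 5, 2, 1]         # values entered in the material
rgb_values = [255, 192, 128, 64, 32, 0]  # greyscale values painted into the IOR map

print("set IOR " + "".join(f"{rgb:>8}" for rgb in rgb_values))
for ior in set_iors:
    row = "".join(f"{effective_ior(ior, rgb):8.2f}" for rgb in rgb_values)
    print(f"{ior:>7} {row}")

# White (255) should reproduce the set IOR, black (0) should give 1,
# and a set IOR of 1 should give 1 regardless of the map value.
```

If the renders match this table, the linear assumption holds; if not, the interpolation is doing something non-linear under the hood.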

Unfortunately this is all speculation without testing.

So I built an IOR grid out of spheres with lots and lots of materials.

IOR_Grid

On the left, going down, is the base scale: from top to bottom the set IOR values are 50/25/10/5/2/1.

The grid on the right, from left to right, uses map RGB values of 255/192/128/64/32/0.

As you can see, the RGB 255 column matches its set IOR counterpart, the set IOR of 1 row looks the same all the way along its corresponding RGB values, and where the RGB is 0 the effective IOR also appears to be 1.

So far as I can tell, the formula seems to hold true. What I have noticed, though, and what may need further testing, is that adjusting the IOR value with a map alone does not seem to do as much as I would like (it could just be a rendering error on my part, a setting missed somewhere or something like that). I found that I needed to work with both the reflective map and the IOR map to achieve results.

So this is what I have learned and derived from the research I have done into IOR maps in 3DS Max 2017.

Hope it was helpful or insightful, references to follow in APA 6th edition.

As always,

Thank you and till next time,

James Day – 1002467

References

Introducing 3ds Max 2017. (2016). Area by Autodesk. Retrieved 9 July 2016, from https://area.autodesk.com/blogs/the-3ds-max-blog/introducing-3ds-max-2017

3ds Max 2014 tutorial – V-Ray material IOR maps (and color map experimenting in general). (2016).YouTube. Retrieved 9 July 2016, from https://www.youtube.com/watch?v=k93ECl5bPZQ

CGSociety Forums. (2008). Retrieved 9 July 2016, from http://forums.cgsociety.org/archive/index.php?t-662851.html

CGSociety Forums. (2007). Retrieved 9 July 2016, from http://forums.cgsociety.org/archive/index.php?t-513458.html

Material IOR Value reference. (2016). Blenderartists.org. Retrieved 9 July 2016, from https://blenderartists.org/forum/showthread.php?71202-Material-IOR-Value-reference

Pixel and Poly – Design Focused Creative Services. (2016). Pixelandpoly.com. Retrieved 9 July 2016, from http://www.pixelandpoly.com/ior.html

Refraction Map | 3ds Max | Autodesk Knowledge Network. (2016). Knowledge.autodesk.com. Retrieved 9 July 2016, from https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2017/ENU/3DSMax/files/GUID-CCD9B76C-9AC6-46E6-8B9C-E367CFC0FDAF-htm.html

To Use ActiveShade Rendering | 3ds Max | Autodesk Knowledge Network. (2016).Knowledge.autodesk.com. Retrieved 9 July 2016, from https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2016/ENU/3DSMax/files/GUID-06FF191B-B740-40C8-BD6A-CE07AF380304-htm.html


Studio 3 Development Blog 6

Hello, hello,

Today I would like to talk about the cross-discipline project that I have been working on recently. A little while ago I jumped on board with a Studio 1 Games team who were creating a board game. Their game is called Renegade, and it is a three player game in which the players move through the board: one player is the renegade and the other two are attempting to catch them. If the renegade reaches the end first, they win; if either of the other two players finishes first, those two win. There are various game mechanics to help balance the two sides in terms of which team wins at what rate.

They required me to create a model of a ship to be used as a playing piece, and gave me some reference images and general goals for its look. I was not given a size for the ship or any restrictions on poly count or texture map sizes. Their intention was to have the ship 3D printed. From my experience with the process (a friend of mine has a printer and some genius plans for its use), I know that printers have a limited printing area, so a large model would be unsuitable. Furthermore, it is a playing piece, so something similar in scale to a Monopoly piece was in order.

Based on the reference image and descriptions I produced the following model.

Ship_1_Snip

A little later I found out that one of the other animators had withdrawn, so the team needed someone to create another ship for them. Based on a different concept image, and due to time constraints, I took the liberty of modifying the above model into something closer to the third design they had in mind, and this is the result.

Ship_2_Snip

Something much more decayed and broken looking.

So far, since handover, I have heard no ill words from the team regarding the work, and so I am led to believe that I have succeeded in completing their task.

Thank you and as always,

Till next time,

James Day – 1002467