Friday, March 2, 2018

Integrating DCCs into VFX Pipelines: A Generic Approach (I)

Context

Most VFX houses and boutiques tend to develop their pipelines around a core set of engines and digital content creation (DCC) tools. The pipeline grows to deal with file/folder structures, asset tracking across departments (adding to the equation some digital asset management tool, usually web-based) and some sort of data/metadata storage, normally combining serialized formats (JSON, XML, etc.) with relational databases.

This implies a whole bunch of development work, so when building a pipeline it's important, from the technical point of view as well (not only the artistic one), to take into account the programming languages, available APIs, and compiler/interpreter versions. It's a big deal. But once the choice is made, studios normally stick with it for several years, and hence with the elected DCCs. Changing DCCs requires adapting the pipeline to support them, and normally this task is parallelized so it doesn't have an impact on current productions.

Developing for a fixed set of DCCs means one can spend time using their APIs to the fullest, code separate tools and make the effort to integrate them in the most artist-friendly way one is capable of. For example, if you plan to develop a working-files manager for Maya and Nuke, you may develop some core functionality that is common to both, but you won't trouble yourself much in making a single tool talk to both. Instead, because you can afford it, you will most probably embed this core functionality (because you hate to repeat yourself) in different widgets for each application (think of having the tool embedded in a tab).

But the approach has to be different when your plan is to integrate any existing 3D software out there into your pipeline!

It's easy to understand that you cannot afford, at first, to develop the input/output tools for each particular piece of software when a) you are part of a highly specialized and agile but not that numerous team, and b) you don't have all the time in the world. So you need to take a more generic approach.

Generic Approach

How about developing your core pipeline tools as standalone applications instead of having them embedded in a specific app? You are no longer completely bound to the app-specific programming language; you restrict only the atomic actions to the software itself, the rest is handled from your core tools, and you are no longer dependent on each graphics library API and its versions. Imagine you could develop tools that work across the different Maya versions without relying on PySide and PySide2, integrate Cinema 4D (which doesn't have any Qt binding), Blender (which is Python 3), Photoshop and the whole Adobe suite...

In the approach we are currently taking, we are developing our core tools in Python 2.7/PySide (because it is a widely used language in VFX and you can get away with it) and using different kinds of interprocess communication, notably socket connections.
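
To give a taste of the socket route, here is a minimal sketch using Maya, which can open a command port out of the box. The port number and the snippet are arbitrary choices for this example.

# Inside Maya, once per session: open a command port that accepts Python
import maya.cmds as cmds
cmds.commandPort(name=":7001", sourceType="python")

# In the standalone core tool: push a snippet to that Maya instance
import socket

def send_to_maya(snippet, host="localhost", port=7001):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect((host, port))
        sock.send(snippet + "\n")  # Python 2.7: str doubles as bytes
    finally:
        sock.close()

send_to_maya("import maya.cmds as cmds; cmds.polyCube()")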

But all that glitters is not gold... we have to face some difficulties.

Some stones on the road are:

- When talking to apps from the outside you need a way to investigate how each app behaves. Ideally, one would want the DCC to ship with an interpreter separate from the app executable, so that you can feed the interpreter your scripts and execute them all in the same app instance. That is not what you will encounter most of the time: the executable file is the interpreter as well, and different apps can behave differently, even the same app on different operating systems! How do you handle this?

- DCCs come with programming APIs in a varied bouquet of flavors. Blender alone is Python 3, while a big part of the DCCs ship with Python 2 and their most recent versions haven't made the switch yet; the Adobe suite has a customized JavaScript called ExtendScript... one of its kind!

- If you plan on communicating between your tools and the apps, the tools need to know which apps are running; and if this communication is made via sockets, you start to think that some kind of port manager and some sort of handshaking system is needed to control the apps, and even to communicate with different instances of an app without spawning a new process of your tools each time (see the sketch after this list).

- Also, for some apps it is not necessary to be running inside an instance already: you can just launch a standalone process and execute your scripts from there (some packages of the Adobe suite), whereas for others you need to be inside the app. This means your system needs some flexibility to adapt to these features while still staying generic.
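
To give an idea of what that handshaking could look like, here is a minimal sketch of a port registry on the core-tools side. Everything in it (the HELLO message format, the ports, the registry keys) is an assumption for illustration, not our actual protocol.

# Core-tools side: each DCC instance announces itself ("HELLO") on startup,
# so the tools always know which apps/instances are reachable and where.
import json
import socket
import threading

REGISTRY = {}   # (app_name, instance_id) -> command port of that instance
LOCK = threading.Lock()

def handle_hello(conn):
    # Assumed payload: {"app": "maya", "id": 1, "port": 7001}
    data = json.loads(conn.recv(4096))
    with LOCK:
        REGISTRY[(data["app"], data["id"])] = data["port"]
    conn.send("ACK")  # Python 2.7: str doubles as bytes

def serve(port=6000):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("localhost", port))
    server.listen(5)
    while True:                 # blocks; run it in its own thread
        conn, _ = server.accept()
        handle_hello(conn)
        conn.close()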

After all this, it's clear that an emphasis on the division and compartmentalization of each process acting as client and server is vital, as well as handling a clean path for errors and exceptions (nobody wants your tools to freeze or stop working because one process raised an exception and you didn't let the others die... furthermore, the whole operating system can be jeopardized by duplicate "ghost" processes!).

To be continued...




Saturday, September 2, 2017

New Portfolio Website Logo!!!

Some months have passed since my last post. Something unusual, taking into account that my posting rate over the last couple of years has been, on average, one per month...

That doesn't mean I haven't been doing anything; on the contrary, I've been quite busy at work. This summer we put all our efforts into the presentation demo our company was showing at Siggraph 2017, which took place in Los Angeles the first week of August. Needless to say, we were all surprised by the reception our product had among the many industry fellows that showed interest in us. They gave us some suggestions and improvements to make, but the overall balance is pretty good and encouraging, and this means... a lot of work is waiting for us in the coming months!!

So basically, if the summer lasts for two months, July and August, my holidays have barely been two weeks, with the feeling that this short vacation is an "entre-temps" between two very stormy periods.

What happened at Siggraph is just the chick breaking out of its shell. Next comes flying like an eagle!!

Anyway, I always try to dedicate time to doing some art in the forms that I know: modeling in Maya or ZBrush, rigging, VFX... This time I had very pleasant moments playing with Photoshop. Ever since I prepared the template for my website I wasn't happy with the logo I made (a quick sketch in Illustrator), but I never had the time nor the eagerness to improve it until now! :)

I'm not completely satisfied with the look yet, but it is surely an improvement!
Below you can compare both logos.

Image A. Old JICEQ logo

Image B. New JICEQ logo

Thursday, March 2, 2017

Simple VFX Animation Rig

Some day back in my old days, at the beginning of my new 3D life, I was asked to help the riggers and the VFX department with rigging physical properties, like vector forces, with control curves. I was surprised at what an easy task this was and how much of a problem it was for some people. I understand it may have been ages since they left school (yes, this is not even university-level math!), or they probably didn't take a scientific path in high school. Anyway, I'm far from being a math nerd myself, and if you are an artist not very familiar with vectors and matrices you will probably discover how surprisingly easy this is.

What we want is basically to control the direction of a vector, for example the nucleus' gravity, by means of the rotation of a handle/curve control.

So basically this corresponds to rotating a vector by a rotation matrix!

vr = [M] · vo

where "vo" is the original vector direction, [M] is the rotation matrix, and "vr" is the rotated vector.


Basically, you perform this operation with the vectorProduct node and hook its output, in this case, right into the axis unit vector of a vortex field.
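
Here is a minimal sketch of that setup with maya.cmds. The vectorProduct node and its attributes are Maya built-ins; the object names (arrowCtrl, vortexField1) are placeholders for this example.

import maya.cmds as cmds

# vectorProduct in "Vector Matrix Product" mode (operation 3): output = input1 * matrix
vp = cmds.createNode("vectorProduct", name="rotateVector")
cmds.setAttr(vp + ".operation", 3)
cmds.setAttr(vp + ".normalizeOutput", 1)  # keep a unit vector even if the control scales
cmds.setAttr(vp + ".input1X", 0)          # original direction "vo" = (0, 1, 0)
cmds.setAttr(vp + ".input1Y", 1)
cmds.setAttr(vp + ".input1Z", 0)

# The control's world matrix provides the rotation matrix [M]
cmds.connectAttr("arrowCtrl.worldMatrix[0]", vp + ".matrix")

# Hook the rotated vector "vr" into the field's axis
cmds.connectAttr(vp + ".outputX", "vortexField1.axisX")
cmds.connectAttr(vp + ".outputY", "vortexField1.axisY")
cmds.connectAttr(vp + ".outputZ", "vortexField1.axisZ")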

In the outliner you have this marvellous, beautiful arrow that serves as the possible curve control of a hypothetically more complex part of a rig, and which indicates the initial direction of the vector.

And the results are here in a demo video using nParticles and field forces!!
God, that was quick!! I think this is the shortest blog entry I've done so far!!!! And in the middle of a working week!!!!

Hope you enjoyed!

 

Saturday, February 11, 2017

PyQt Agnostic Launcher II

As part of the improvements I have been making to the VFX pipeline we are developing, I wanted to dig deeper into the problem of executing a PySide Maya tool outside the DCC, that is, as a standalone, as well as being able to execute it inside Maya without any changes to the code. I already came up with a first version of the launcher, which you can see at http://jiceq.blogspot.com.es/2016/08/pyqt-agnostic-tool-launcher.html . It basically detects whether a Qt host application is already running; if there is one, we assume it is Maya.


import sys
import contextlib
from PySide import QtGui

@contextlib.contextmanager
def application():
    if not QtGui.qApp:
        # No Qt host application: we are standalone, so create our own
        app = QtGui.QApplication(sys.argv)
        parent = None
        yield parent
        app.exec_()
    else:
        # A QApplication already exists: assume we are embedded in Maya
        parent = get_maya_main_window()
        yield parent


This works fine for the beginnings of a VFX pipeline mostly based on Maya. But as soon as you face the need to integrate other heterogeneous packages (which ship with some version of Python and PyQt/PySide, which is becoming a standard in the industry; see http://www.vfxplatform.com/ ), you will probably want to be able to, at least, run the same GUI embedded in different packages as well as standalone. So the need to distinguish between host apps arises, and this first solution falls short.

One poor solution is to query the operating system for whether the maya.exe/maya.bin or nuke.exe/nuke.bin processes are running, in the following fashion, for example:

  
import platform

def tdfx_is_maya_process_running():
    return tdfx_is_process_running('maya')

def tdfx_is_nuke_process_running():
    return tdfx_is_process_running('nuke')

def tdfx_is_process_running(process_name):
    if platform.system() == 'Windows':
        # specific OS code here
        return is_running
    elif platform.system() == 'Linux':
        # specific OS code here
        return is_running
    return False

And use this instead in the previous if-statement to retrieve the corresponding QApplication main window instance.

This is a very poor solution, if we can even call it a solution. It doesn't work well: you may have an instance of Maya or Nuke running, but want to run your custom script in standalone mode from your preferred IDE. The above functions will both return True (first problem). Second, it will depend on the order of evaluation: if you test "tdfx_is_maya_process_running()" first, then your launcher will attempt to get the Maya main window instance. And third and most important, your launcher won't work because internally it is detecting Maya, and therefore reporting the presence of a QApplication.qApp pointer, when you are in standalone mode and there is actually no qApp pointer at all!

So basically, this approach is not valid. What we really want to query is not the running processes but, more specifically, whether the current script is running embedded in a Qt host application or not, and if so, which one it is.

I googled a little and was surprised that some people had faced this problem and mostly resolved it in their own (not so great and elegant) ways. I just thought there must be some way in Qt to query the host application; I couldn't believe something so basic wasn't taken into account in the framework. After some looking into the documentation... eureka, I found this line:


QtWidgets.QApplication.applicationName()

which returns the name of the host application. In standalone Qt apps, it is a parameter that must be set by the programmer.
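
For a standalone tool, that just means setting it yourself at startup (the name here is whatever you choose):

import sys
from PySide2 import QtWidgets

app = QtWidgets.QApplication(sys.argv)
app.setApplicationName("MyStandaloneTool")  # what applicationName() will return

From there, writing the per-DCC checks is straightforward: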


from PySide2 import QtWidgets

def tdfx_qthostapp_is_maya():
    return tdfx_qthostapp_is('Maya-2017')

def tdfx_qthostapp_is_nuke():
    return tdfx_qthostapp_is('Nuke')

def tdfx_qthostapp_is_none():
    # standalone if no QApplication exists yet (same check as the first version)
    return QtWidgets.QApplication.instance() is None

def tdfx_qthostapp_is(dcc_name):
    hostappname = QtWidgets.QApplication.applicationName()
    return hostappname == dcc_name


Consequently, my new context manager version takes the following form:

@contextlib.contextmanager
def application():
    if tdfx_qthostapp_is_none():
        app = QtGui.QApplication(sys.argv)
        parent = None
        yield parent
        app.exec_()
    elif tdfx_qthostapp_is_maya():
        parent = get_maya_main_window()
        yield parent
    elif tdfx_qthostapp_is_nuke():
        parent = get_nuke_main_window()
        yield parent

This is a step forward towards easing the integration of other Qt-based DCCs in a VFX pipeline and easing the task of the programmer, thus avoiding GUI application-specific code. Nonetheless, there is still some work to do that I will deal with when I have more time: making the GUI code fully portable between PySide2 and PySide (or Qt4 and Qt5). There are already some solutions out there, like the Qt.py module, that intend to abstract the GUI from the Qt4-to-Qt5 big jump in the recent Maya 2017 Python API.
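
With Qt.py, for instance, the GUI imports become binding-agnostic; a minimal sketch:

# Qt.py resolves to whichever binding is available (PySide2, PyQt5, PySide, PyQt4)
from Qt import QtWidgets

app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])
label = QtWidgets.QLabel("Hello from a binding-agnostic GUI")
label.show()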


Sunday, January 29, 2017

Thinking In Design

This week I've started to refactor our pipeline code. We are migrating to Maya 2017, among other things, and I wasn't proud of how the development process was handled during the last 7 months. To understand it a bit: 7 months ago we were in a hurry in all aspects. We needed to produce a teaser in barely 4-5 months of strong, intense workload, because we bet everything on arriving at the AFM with something cool enough to raise funds and produce the desired movie. The working conditions in terms of organization and qualified staff were a disadvantage, something every one of us had to bear with. Nevertheless, there were big pros: we all had passion and were totally committed to the project. I was going to say "luckily, the project worked out really well", but it was thanks to all our efforts and all the muscle we put into it.

Anyway, from the point of view of the pipeline, which is what interests me here, besides the lack of organization we were dealing with a new digital asset management tool with little documentation, so at the beginning we didn't have a precise idea of its capabilities, and the pipeline was being developed at the same time production started... Briefly, I had no time to think properly about a good design. Don't misunderstand me: the code produced at that moment was completely functional, and I have some testimonies claiming the tools were working well. But that step was necessary to explore the needs and the can-dos of the pipeline we were conceiving. Now that we know how some things were done, we can improve them based on something that already works.

A REFACTORING EXAMPLE

As a little example, there is always the need for a class that manages some common parameters with some common methods and functions. The way we did it first was to just define a Singleton class, inherit from it, and start adding parameters and their getters and setters, which in Python can be defined with @property. The manipulation of this data consists, among other things, of storing the values and loading them into memory by means of some kind of persistence system. It could be a database or something as simple as a text file.
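
A minimal sketch of that first approach (the names are illustrative, not our production code):

class Settings(object):  # in reality a Singleton subclass, as described above
    def __init__(self):
        self._username = None

    @property
    def username(self):
        return self._username

    @username.setter
    def username(self, value):
        self._username = value

    def save(self, path):
        # every new parameter forces edits here (and in the matching load method)
        with open(path, 'w') as f:
            f.write('username=%s\n' % self._username)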

But this approach is a really bad design, because each time you define a new parameter you need to change all the methods that input and output the parameters... Really not very scalable!

Another constraint we didn't take into account is that some of the parameters could be classified together. That is, they were related, and it could be interesting to group them; some of them don't mean anything on their own if they are not accompanied by their corresponding mate. For example, a login consists of a username and a password. Having the username does not make sense if you haven't also defined the password. Under the preceding approach every parameter is independent of the others, and there is no trace of those relationships.

Under those conditions I redesigned the system by making heavy use of inheritance. The related parameters can be grouped under a specific class which derives from the group class, called here "Section". This class is the atomic class responsible for managing a group of related parameters. So each time I want to expand with a new group of parameters, I only have to define a derived class that inherits from Section and define the parameter keys. Any other functionality is already present in the base class.

Moreover, I can force from the base class that the derived ones implement a PARAMS (param1, param2, etc.) tuple whose entries are automatically managed. This way I save work, and I ensure nobody misuses the class and everyone understands how it is made. It is the same mechanism as when we enforce the implementation of an abstract method in a class that inherits from an interface by raising a NotImplementedError.
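
A minimal sketch of the idea (class and parameter names are placeholders; the real manager does more, persistence included):

class Section(object):
    PARAMS = None  # derived classes must declare their parameter keys

    def __init__(self):
        if self.PARAMS is None:
            raise NotImplementedError(
                "%s must define a PARAMS tuple" % type(self).__name__)
        self._values = dict.fromkeys(self.PARAMS)

    def get(self, key):
        return self._values[key]

    def set(self, key, value):
        if key not in self.PARAMS:
            raise KeyError(key)
        self._values[key] = value

    def serialize(self):
        # persistence logic lives here once, for every future Section
        return dict(self._values)

class LoginSection(Section):
    PARAMS = ('username', 'password')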

The result is a much easier to use, and therefore extendable, manager. Each time I want to create a new group of parameters I just need to define them in a new concrete Section class, with no more worries than registering the section in the manager's __init__ method. No other changes to the manager!!

Enough talking, here is a UML class diagram exposing the generic final design.





Friday, November 25, 2016

TACTIC Python API Tweak: Hack To Report Copied Byte Amount To Qt Widget

During the development of some Maya tools that used the Southpaw TACTIC Python API, I bumped into the following, at first sight simple, problem: I wanted to give a visual report of the upload progress. Each artist had to check in their work to the asset management system over the internet.

The first version of the tool only reported progress by means of a progress bar, which was visually enough to notify when the upload had finished. This worked fine for multiple tiny files. But soon the Groom & Hair artists, as well as the VFX artists, were generating lots of huge simulation data that needed to be uploaded.

We were working remotely, and uploading an artist's work could easily take a couple of hours. The first approach was to use the HTTP protocol to transfer those huge amounts of files. There we found a bug in the Python API of TACTIC v4.4.04 that limited the file size to 10 MB (10*1024*1024 bytes), which forced us to look in the documentation and upgrade to a newer version of TACTIC that had this bug fixed. But that's another story.

What interests me here is that the TACTIC Python API upload functions don't give any report of the number of bytes uploaded. They only report when an entire file has been checked in, that is, when doing a piecewise check-in.

So we changed the upload method to use TACTIC's handoff dir, which basically consists of replacing the HTTP protocol with a protocol like CIFS or NFS, where you just perform a copy from your local directory to the server's directory, just like you would between two directories on your local filesystem.

That was the first step.

Now, finally using the most powerful transfer method, I only needed to have a look at the API: the "tactic_client_stub.py" module and the "TacticServerStub" class. The piecewise check-in works as explained here.



You can see that the API uses the "shutil.copy" and "shutil.move" methods to upload. I cannot tweak the "shutil" module, since it's a standard one that ships with the Maya Python interpreter. But I can build my own :))!!

My goal is to be able to report the amount of bytes transferred using a Qt widget, so basically I have to simulate signal/slot behaviour from the copy/move methods. It would be nice if I could add a callback inside that method that triggered a Qt signal, wouldn't it?!


A LEAST INTRUSIVE SOLUTION



The shutil module uses a lot of different methods to copy files, considering the metadata, creation and last modification times, user and group owners, the permission bits, etc. It is explained here.

All of them ultimately call the "copyfileobj" method. That's the method I want to tweak.

Now, what kind of function can trigger a Qt signal? What are its requirements?

I remembered that all Qt classes inherit from the QObject class. A quick look at the PyQt documentation explains it:


"The central feature in this model is a very powerful mechanism for seamless object communication called signals and slots"

So basically, the only thing I need is to define a class that inherits from QObject, define a custom signal, and have the callback method emit the signal!! The following is not production code, it is just an example of how it would work.
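
Along those lines, a sketch of the idea could look like the following (the class, signal and file names are mine, for illustration):

import sys
from PySide import QtCore

class CopyReporter(QtCore.QObject):
    # custom signal carrying the number of bytes copied so far
    bytesCopied = QtCore.Signal(int)

def copyfileobj_with_callback(fsrc, fdst, callback, length=16 * 1024):
    # same loop as shutil.copyfileobj, plus a progress callback
    copied = 0
    while True:
        buf = fsrc.read(length)
        if not buf:
            break
        fdst.write(buf)
        copied += len(buf)
        callback(copied)

# usage: the signal's emit method is the callback; in production the slot
# would update the progress widget instead of printing
reporter = CopyReporter()
reporter.bytesCopied.connect(lambda n: sys.stdout.write("%d bytes\r" % n))
with open("src.bin", "rb") as fsrc, open("dst.bin", "wb") as fdst:
    copyfileobj_with_callback(fsrc, fdst, reporter.bytesCopied.emit)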


All that is left is to catch the signal in the proper QWidget. With this information you can compute the time left for the upload to finish, and hence give an estimate based on the connection speed.

This solution is simple, straightforward, and doesn't imply rewriting the TacticServerStub class. Maybe if I find myself in need of tweaking it again, I'll consider writing my own TacticServerStub class.

Comments & Critics Welcome!!

Friday, November 18, 2016

Animatable Pivot - Rolling Cube Demo

INTRODUCTION

During the production of the movie "Deep", the rigging department had to design the rigs for ice cubes that some characters were pushing. In order to achieve this rolling cube, the rig needed to change the rotation pivot dynamically.

I didn't have time to look further into it, so I couldn't be of much help at the time. But since one of my interests is rigging, I decided to dig deeper once I had enough time.

If you do a Google search, the problem of rolling cubes is something most riggers and character TDs have faced at some point. One of the most interesting articles on how it can be done is this one. But I wanted to do it my own way, and in different ways: one using the Node Editor, matrix multiplication and geometric transformations, and the other using constraints. I'll show both here.

The first approach is simple: animate the cube, setting keys on the Rotate Pivot X, Y, Z attributes. If you do that, you will notice it doesn't work. Just when you change the pivot, the cube suffers a translation, due to the fact that the rotation is applied again but with the new pivot. That is, it doesn't remember the rotation you performed with the previous pivot.

So the solution is to calculate the translation difference between pivots and apply it to the top controller.


I started with the outliner configuration you can see above. This configuration is generic: it works for all kinds of meshes and any number of pivots. The cube pCube1 can be substituted by whatever mesh you want. Here, to illustrate better, I have used NURBS surfaces and positioned one in each corner of the cube.

The main transform group has an enum attribute to select the pivot. Once it is chosen, we calculate each pivot's world position from the hierarchy and its local rotate pivot position. This is important because if you simply use the pivot's world rotate position you will cause a cycle in the transformations, as it changes every time you rotate; the local rotate pivot position doesn't. So it becomes necessary to compute the world position by traversing the hierarchy bottom-up.
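
In code terms, whether you wire it with nodes or compute it in a script, that boils down to taking the constant local rotate pivot and folding in the local matrices up the chain. A sketch with the Python API 2.0 (node names assumed):

import maya.cmds as cmds
import maya.api.OpenMaya as om

def pivot_world_position(pivot_node):
    # constant local rotate pivot (does not change when things rotate)
    pos = om.MPoint(cmds.getAttr(pivot_node + ".rotatePivot")[0])
    # fold in each local matrix, walking the hierarchy bottom-up
    node = pivot_node
    while node:
        pos = pos * om.MMatrix(cmds.getAttr(node + ".matrix"))
        parents = cmds.listRelatives(node, parent=True, fullPath=True)
        node = parents[0] if parents else None
    return pos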

Here is the Node Graph.



Another way of doing it, instead of using matrix multiplication, is to use the tools Maya provides, that is, constraints. We constrain, for example, a locator to all the NURBS pivots, and instead of using the Node Editor we manipulate the constraint weights for each of the pivots.

Both alternatives make use of scriptJobs; they are both provided here.
The first one calculates the difference between pivots in an accumulation buffer; a sketch of how such a scriptJob could be wired is shown below.
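
A minimal sketch of that wiring (attribute and node names are placeholders, and it reuses the pivot_world_position helper sketched above; the real script does more bookkeeping):

import maya.cmds as cmds

def on_pivot_changed():
    # fires whenever the enum attribute on the main group changes
    index = cmds.getAttr("mainGrp.pivotSelector")
    p = pivot_world_position("pivotSurface%d" % (index + 1))
    old = cmds.xform("mainGrp", query=True, worldSpace=True, rotatePivot=True)
    # translation difference between pivots, accumulated on the top
    # controller so the mesh does not jump when the pivot switches
    delta = (p.x - old[0], p.y - old[1], p.z - old[2])
    cmds.move(delta[0], delta[1], delta[2], "topCtrl", relative=True)
    cmds.xform("mainGrp", worldSpace=True, rotatePivot=(p.x, p.y, p.z))

job = cmds.scriptJob(attributeChange=["mainGrp.pivotSelector", on_pivot_changed])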



The second one modifies the constraint weights and applies the calculation to the rotate pivot of the transform group.



FURTHER DEVELOPMENT

I just wanted to play a bit with those concepts and figure out how I would tackle the problem myself. Needless to say, it still needs to be organized the way most rigs are, which means being able to set keys on a curve control. This is not a big change, though.

As I previously said, this configuration is generic: it would work for any kind of mesh and any number and distribution of pivots.

Here is the final video file.