INTRODUCTION
The idea of opening my own GitHub repository has been on my mind for some years now. I've been putting it off for a long time: I've always kept a personal backup at home of the scripts and utilities I've developed over the years, but the fear of several hard disks crashing, including my home NAS, finally made up my mind to give GitHub a shot.
Its purpose is primarily personal use; I'm not too keen on spending time uploading unit tests or even adding comments unless the code is fuzzy enough to be hard to understand otherwise.
FIRST PROJECT: SERIALIZER
https://github.com/juan-cristobal-quesada/serializer
In Python we have several built-in modules to serialize objects: json, pickle, etc. The json module serializes only basic types and some built-in data structures, whereas pickle/cPickle attempts to serialize arbitrary custom class objects.
There are several other modules in the Python ecosystem that try to solve different serialization issues. My current implementation relies on cPickle because of its speed, but it keeps the final serialized object lean by limiting the types of variables that can be serialized. This comes in especially handy if you intend to send the serialized object over a network: fine-tuning which objects get serialized and which don't gives more control over the size.
The serializer in this project handles basic types, including lists and dicts, which covers pretty much all the core data of the objects we needed to send, plus a special class called Serializable that any custom class can inherit from in order to be serialized. In the process, the path of the module is appended so that the object can be correctly reconstructed at the endpoint.
The resulting payload is then Base64 encoded so that it is ASCII compliant, which allows it, for example, to be passed to another subprocess as an environment variable.
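To illustrate the idea, here is a minimal sketch (not the code in the repository; the type whitelist and the Serializable hook are simplified for illustration):

import base64

try:
    import cPickle as pickle   # Python 2: fast C implementation
except ImportError:
    import pickle              # Python 3: pickle already uses the C version

ALLOWED_TYPES = (int, float, bool, str, list, dict, type(None))

class Serializable(object):
    # Base class for custom objects that are allowed to travel.
    def __getstate__(self):
        state = dict(self.__dict__)
        # Remember where the class lives so the endpoint can re-import it.
        state['__module_path__'] = type(self).__module__
        return state

def serialize(obj):
    # Restrict what can be serialized to keep the payload small and predictable.
    if not isinstance(obj, ALLOWED_TYPES + (Serializable,)):
        raise TypeError('Type %r is not serializable here' % type(obj).__name__)
    # Base64 makes the pickled bytes ASCII safe (e.g. for environment variables).
    return base64.b64encode(pickle.dumps(obj, protocol=2))

def deserialize(payload):
    return pickle.loads(base64.b64decode(payload))

token = serialize({'shot': 'sq010_sh020', 'frame_range': [1001, 1100]})
print(deserialize(token))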
FURTHER IMPROVEMENTS
- extend the base serialization with a human-readable format such as JSON, notably for debugging purposes.
- add zip compression functionality.
- add encryption functionality so that the serialized object is protected when travelling over the network.
- add support for more built-in types such as OrderedDict and others.
- add unit test cases to showcase the usage.
Friday, May 10, 2019
Saturday, October 6, 2018
Oil Paintings (I)
I just wanted to share one of my first oil painting exercises. This was done a long time ago. I personally like the contrast between the hard blue shadow and the grey-yellowish texture of the background where the spotlight hits more intensely.
Also, I like the tones of the pink glass and the highlights on the vase. Given my limited expertise as a beginner, I was quite satisfied with the results.
In my free time I'm now involved in some anatomy studies, but as soon as I finish I will try another one inspired by the Hudson River School. One of my art teachers introduced me to this movement, and I'm really drawn to those wild, almost fantasy landscapes of the American colonies.
Friday, March 2, 2018
Integrating DCCs into VFX Pipelines: A Generic Approach (I)
Context
Most VFX houses and boutiques tend to develop their pipelines around a core set of engines and digital content creation tools. The pipeline grows to deal with file/folder structures, asset tracking across departments (usually adding a web-based digital asset management tool to the equation), and some sort of data/metadata storage, normally combining serialized formats (JSON, XML, etc.) and relational databases.
This implies a whole bunch of development work, so when building a pipeline it's important, from the technical point of view as well (not only the artistic one), to take into account the programming languages, available APIs, and compiler/interpreter versions. It's a big deal. Once the choice is made, studios normally stick with it for several years, and hence with the chosen DCCs. Changing DCCs requires adapting the pipeline to support them, and normally this task is done in parallel so it doesn't impact current productions.
Developing for a fixed set of DCCs means you can spend time using their APIs to the fullest, code separate tools, and make the effort to integrate them in the most artist-friendly way you can. For example, if you plan to develop a working-files manager for Maya and Nuke, you may develop some core functionality that is common to both, but you won't trouble yourself much with making a single tool talk to both. Instead, because you can afford it, you will most probably wrap that core functionality (because you hate to repeat yourself) in different widgets for each application (think of having the tool embedded in a tab).
But the approach has to be different when your plan is to integrate any existing 3D software out there into your pipeline!
It's easy to understand that you cannot afford, at first, to develop the input/output tools for each particular package when a) you are part of a highly specialized and agile but not very numerous team, and b) you don't have all the time in the world. So you need to take a more generic approach.
Generic Approach
How about developing your core pipeline tools as standalone applications instead of having them embedded in a specific app? You are no longer bound to each app's programming language: only the specific atomic actions are restricted to the software, the rest is handled by your core tools, and you no longer depend on each graphics library API and its versions. Imagine you could develop tools that work across the different Maya versions without relying on PySide and PySide2, and integrate Cinema 4D (which doesn't have any Qt binding), Blender (which is Python 3), Photoshop, and the whole Adobe suite.
In the approach we are currently taking, we develop our core tools in Python 2.7/PySide (because it is widely used in VFX and you can get away with it) and use different kinds of inter-process communication, notably socket connections.
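To give an idea of what this looks like, here is a minimal, illustrative sketch of sending a Python snippet from a standalone tool to a running Maya session over a socket (the port number and helper name are examples, not our actual tools; Maya must have opened a command port beforehand):

# Standalone side: send a Python snippet to a running Maya over a socket.
# Assumes Maya opened a command port first, e.g. by running inside Maya:
#     import maya.cmds as cmds
#     cmds.commandPort(name=':7002', sourceType='python')
import socket

def send_to_maya(source, host='localhost', port=7002):
    # Connect to Maya's command port, send the snippet and return the raw reply.
    sock = socket.create_connection((host, port), timeout=5.0)
    try:
        sock.sendall(source.encode('utf-8'))
        return sock.recv(4096)
    finally:
        sock.close()

if __name__ == '__main__':
    # Ask the running Maya instance to create a locator, from outside Maya.
    print(send_to_maya("import maya.cmds as cmds; cmds.spaceLocator(name='from_standalone')"))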
But all that glitters is not gold... We have to face some difficulties.
Some stones on the road are:
- When talking to apps from outside the app, you need a way to investigate how each app behaves in this regard.
Ideally, one would want the DCC to ship with an interpreter separate from the app executable, so that you can feed the interpreter your scripts and have them all execute in the same app instance. That is not what you will encounter most of the time: the executable file is the interpreter as well, and different apps can behave differently, even the same app on different operating systems! How do you handle this?
- DCCs come with programming APIs in a varied bouquet of flavors. Blender alone is Python 3, while a large part of the DCCs ship with Python 2 and even the most recent versions haven't made the switch yet; the Adobe suite has a customized JavaScript dialect called ExtendScript... one of a kind!
- If you plan on communication between your tools and the apps, the tools need to know which apps are running, and if this communication is made via sockets you start to think that some kind of port manager and some sort of handshaking system is needed to be able to control the apps, and even to communicate with different instances of the same app without launching a separate process for your tools each time (see the sketch after this list)...
- Also, for some apps it is not necessary for an instance to be running already: you can just launch a standalone process and execute your scripts from there (some packages of the Adobe suite), whereas for others you need to be inside the app. This means your system needs some flexibility to adapt to these differences while still staying generic.
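To make the port manager / handshaking idea more concrete, here is a minimal, illustrative sketch (the broker port, message format and function names are assumptions, not our production code): each DCC instance announces itself to a small broker, so standalone tools can look up which apps are running and on which port each one listens.

import json
import socket

BROKER_PORT = 5555   # assumption: a fixed, well-known broker port
REGISTRY = {}        # {(app_name, pid): port_the_instance_listens_on}

def broker():
    # Accept handshake messages like {"app": "Maya-2017", "pid": 123, "port": 7002}.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('localhost', BROKER_PORT))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        try:
            msg = json.loads(conn.recv(4096).decode('utf-8'))
            REGISTRY[(msg['app'], msg['pid'])] = msg['port']
            conn.sendall(b'OK')   # acknowledge the handshake
        finally:
            conn.close()

def register(app, pid, port):
    # Called from inside a DCC instance right after it opens its own socket.
    conn = socket.create_connection(('localhost', BROKER_PORT))
    try:
        conn.sendall(json.dumps({'app': app, 'pid': pid, 'port': port}).encode('utf-8'))
        return conn.recv(16) == b'OK'
    finally:
        conn.close()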
After this, it's clear that an emphasis on separating and compartmentalizing each process acting as client or server is vital, as well as handling a clean path for errors and exceptions (nobody wants your tools to freeze or stop working because one process raised an exception and you didn't let the others die... furthermore, the whole operating system can be jeopardized by duplicate "ghost" processes!).
to be continued.......
Saturday, September 2, 2017
New Portfolio Website Logo!!!
Some months have passed since my last post. That's unusual, taking into account that my posting rate over the last couple of years has been on average one per month...
That doesn't mean I haven't been doing anything; on the contrary, I've been quite busy at work. This summer we put all our efforts into the presentation demo our company was showing at SIGGRAPH 2017, which took place in Los Angeles during the first week of August. Needless to say, we were all surprised by the reception our product had among the industry fellows who showed interest in us. They gave us some suggestions and improvements to make, but the overall balance is pretty good and encouraging, and this means... a lot of work is waiting for us in the coming months!!
So basically, if the summer lasts for two months, July and August, my holidays were barely two weeks, with the feeling that this short vacation was an interlude between two very stormy periods.
What happened at SIGGRAPH is just the chick breaking out of its shell. Next comes flying like an eagle!!
Anyway, I always try to dedicate time to doing some art in the forms that I know. This could be modeling in Maya or ZBrush, rigging, VFX... This time I had a very pleasant time playing with Photoshop. Ever since I prepared the template for my website I wasn't happy with the logo I made (a quick sketch in Illustrator), but I never had the time nor the eagerness to improve it until now! :)
I'm not completely satisfied with the look yet, but it is surely an improvement!
Below you can compare both logos:
Image A. Old JICEQ logo
Image B. New JICEQ logo
Thursday, March 2, 2017
Simple VFX Animation Rig
Back in my old days, at the beginning of my new 3D life, I was asked to help the riggers and the VFX department rig physical properties like vector forces with control curves. I was surprised by how easy a task this was and how much of a problem it was for some people. I understand it may have been ages since they left school (yes, this is not even university-level maths!), or maybe they didn't take a scientific path in high school. Anyway, I'm far from being a math nerd myself, and if you are an artist not very familiar with vectors and matrices, you will probably discover how surprisingly easy this is.
What we want is basically to control the direction of a vector, for example the nucleus gravity direction, by means of the rotation of a handle/curve control.
So basically this corresponds to rotating a vector by a rotation matrix!
vr = [M].vo
being "vo" the original vector direction and "vr" the rotated vector.
Basically, you perform this operation with a vectorProduct node and hook its output, in this case, right into the axis unit vector of a vortex field.
In the outliner you have this marvellous, beautiful arrow that serves as the curve control of a hypothetically more complex part of a rig, and which indicates the initial direction of the vector.
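For completeness, here is a rough maya.cmds sketch of the same setup (node and control names are illustrative; the rig in the video was assembled by hand in the node editor):

import maya.cmds as cmds

# The curve control whose rotation will drive the vector direction.
ctrl = cmds.circle(name='gravity_ctrl', normal=(0, 1, 0))[0]

# vectorProduct in "Vector Matrix Product" mode (operation 3): output = input1 * matrix.
vp = cmds.createNode('vectorProduct', name='gravity_vectorProduct')
cmds.setAttr(vp + '.operation', 3)
cmds.setAttr(vp + '.input1X', 0)   # the original vector "vo"
cmds.setAttr(vp + '.input1Y', 1)
cmds.setAttr(vp + '.input1Z', 0)
cmds.connectAttr(ctrl + '.worldMatrix[0]', vp + '.matrix')   # the rotation matrix [M]

# Feed the rotated vector "vr" into the axis of a vortex field.
vortex = cmds.vortex(name='gravity_vortex')[0]
cmds.connectAttr(vp + '.outputX', vortex + '.axisX')
cmds.connectAttr(vp + '.outputY', vortex + '.axisY')
cmds.connectAttr(vp + '.outputZ', vortex + '.axisZ')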
And the results are here in a demo video using nParticles and field forces!!
God, that was quick!! I think this is the shortest blog entry I've done so far!!!! And in the middle of a working week!!!!
Hope you enjoyed!
Saturday, February 11, 2017
PyQt Agnostic Launcher II
As part of the improvements I have been making to the VFX pipeline we are developing, I wanted to dig deeper into the problem of executing a PySide Maya tool outside the DCC, that is, as a standalone application, as well as being able to execute it inside Maya without making any changes to the code. I already came up with a first version of the launcher, which you can see at http://jiceq.blogspot.com.es/2016/08/pyqt-agnostic-tool-launcher.html . It basically detects whether there is already a Qt host application running, and if there is, we assume it is Maya.
import sys
import contextlib
from PySide import QtGui

@contextlib.contextmanager
def application():
    if not QtGui.qApp:
        # No Qt application exists yet: we are running standalone.
        app = QtGui.QApplication(sys.argv)
        parent = None
        yield parent
        app.exec_()
    else:
        # A QApplication already exists: we assume we are inside Maya.
        parent = get_maya_main_window()
        yield parent
This works fine for the beginnings of a VFX pipeline mostly based on Maya. But as soon as you face the need to integrate other heterogeneous packages (which ship with varying versions of Python and PyQt, which is becoming a standard in the industry; see http://www.vfxplatform.com/ ), you will probably want to be able to, at least, run the same GUI embedded in different packages as well as standalone. So the need to distinguish between host apps arises, and this first solution falls short.
One poor solution is to ask the operating system whether the maya.exe/maya.bin or nuke.exe/nuke.bin processes are running, in the following fashion, for example:
import sys

def tdfx_is_maya_process_running():
    return tdfx_is_process_running('maya')

def tdfx_is_nuke_process_running():
    return tdfx_is_process_running('nuke')

def tdfx_is_process_running(process_name):
    is_running = False
    if sys.platform.startswith('win'):
        # platform-specific process lookup here (e.g. parse tasklist output)
        return is_running
    elif sys.platform.startswith('linux'):
        # platform-specific process lookup here (e.g. pgrep process_name)
        return is_running
    return False
This is a very poor solution, if we can call it a solution at all. It doesn't work well: you may have an instance of Maya or Nuke running while you want to run your custom script in standalone mode from your preferred IDE. First problem: the above functions will both return True. Second, the result will depend on the order of evaluation, so if you test "tdfx_is_maya_process_running()" first, your launcher will attempt to get the Maya main window instance. Third, and most important, your launcher won't work because it is detecting Maya internally and therefore assuming the presence of a QApplication.qApp pointer, when you are actually in standalone mode and there is no qApp pointer at all!
So basically, this approach is not valid. What we really want to query is not which processes are running, but whether the current script is running embedded in a Qt host application or not, and if so, which one.
I googled a little bit and was surprised that some people had faced this problem and mainly resolved it in their own, not so great and elegant, way. I just thought there must be some way in Qt to query the host application; I couldn't believe something so basic wasn't taken into account in the framework. After some digging into the documentation... eureka, I found this method:
QtWidgets.QApplication.applicationName()
which returns the name of the host application. In standalone Qt apps, it is a parameter that must be set by the programmer.
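For instance, in a standalone tool you would set it yourself right after creating the QApplication (the name below is just an example):

import sys
from PySide2 import QtWidgets

app = QtWidgets.QApplication(sys.argv)
app.setApplicationName('MyStandaloneTool')        # illustrative name
print(QtWidgets.QApplication.applicationName())   # -> 'MyStandaloneTool'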
With that, I wrote a few small helper functions to query the Qt host application:
def tdfx_qthostapp_is_maya():
    return tdfx_qthostapp_is('Maya-2017')

def tdfx_qthostapp_is_nuke():
    return tdfx_qthostapp_is('Nuke')

def tdfx_qthostapp_is(dcc_name):
    from PySide2 import QtWidgets
    # Ask Qt for the name of the application hosting the current interpreter.
    hostappname = QtWidgets.QApplication.applicationName()
    return hostappname == dcc_name
Consequently, my new context manager version takes the following form:
@contextlib.contextmanager
def application():
    if tdfx_qthostapp_is_none():
        # No Qt host application detected: run standalone.
        app = QtGui.QApplication(sys.argv)
        parent = None
        yield parent
        app.exec_()
    elif tdfx_qthostapp_is_maya():
        parent = get_maya_main_window()
        yield parent
    elif tdfx_qthostapp_is_nuke():
        parent = get_nuke_main_window()
        yield parent
This is a step towards easing the integration of other PyQt-API-based DCCs into a VFX pipeline and easing the task of the programmer, avoiding application-specific GUI code. Nonetheless, there is still some work left that I will deal with when I have more time: making the GUI code fully portable between PySide2 and PySide (or Qt4 and Qt5). There are already some solutions out there, like the Qt.py module, that intend to abstract away the Qt4-to-Qt5 jump of the recent Maya 2017 Python API.
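As a teaser of that portability work, this is roughly what Qt.py enables (the tiny widget is just an illustration):

import sys
from Qt import QtWidgets   # the third-party Qt.py shim picks PySide, PySide2, PyQt4 or PyQt5

app = QtWidgets.QApplication(sys.argv)
label = QtWidgets.QLabel('Same code under Qt4 and Qt5 bindings')
label.show()
app.exec_()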
Sunday, January 29, 2017
Thinking In Design
This week I've started to refactor our pipeline code. We are migrating to Maya 2017, among other things, and I wasn't proud of how the development process was handled during the last seven months. To put it in context, seven months ago we were in a hurry in every respect. We needed to produce a teaser in barely 4-5 months of strong, intense workload, because we bet everything on reaching the AFM with something cool enough to raise funds and produce the desired movie. The working conditions in terms of organization and qualified staff were a disadvantage, something every one of us had to bear with. Nevertheless, there were big pros: we all had passion and were totally committed to the project. I was going to say "luckily the project worked out really well", but it was thanks to all our efforts and all the muscle we put into it.
Anyway, from the point of view of the pipeline, which is what interests me here, besides the lack of organization we were dealing with a new digital asset management tool with little documentation, so at the beginning we didn't have a precise idea of its capabilities, and the pipeline was being developed at the same time production started... In short, I had no time to think properly about a good design. Don't misunderstand me: the code produced at that moment was completely functional, and I have some testimonies claiming the tools were working well. But that step was necessary to explore the needs and capabilities of the pipeline we were conceiving. Now that we know how some things were done, we can improve them based on something that already works.
A REFACTORING EXAMPLE
As a little example, there is always the need for a class that manages some common parameters with some common methods and functions. The way we did it first was simply to define a Singleton class, inherit from it, and start to add parameters and their getters and setters, which in Python can be defined with @property. The manipulation of this data consists, among other things, of storing the values and loading them into memory by means of some kind of persistence system: it could be a database or something as simple as a text file.
But this approach is really bad design, because each time you define a new parameter you need to change all the methods that read and write the parameters... Really not very scalable!
Another constraint we didn't take into account is that some of the parameters could be classified together. That is, they were related and it could be interesting to group them. Some of them don't mean anything on their own if they are not accompanied by their corresponding mate. For example, a login consists of a username and a password; having the username does not make sense if you haven't also defined the password. Under the preceding approach every parameter is independent from the others, and there is no trace of those relationships.
Under those conditions I redesigned the system, making heavy use of inheritance. The related parameters are grouped under a specific class which derives from the group class, called here "Section". This class is the atomic class responsible for managing a group of related parameters. So each time I want to extend the system with a new group of parameters, I only have to define a derived class that inherits from Section and define the parameter keys; any other functionality is already present in the base class.
Moreover, from the base class I can force the derived ones to implement a PARAMS (param1, param2, etc.) tuple whose entries are managed automatically. This way I save work and also ensure nobody misuses the class and everyone understands how it is built. It is the same mechanism as when we enforce the implementation of an abstract method in the class that inherits from an interface, by raising a NotImplementedError.
The result is a much easier to use and therefore more extendable manager. Each time I want to create a new group of parameters, I just need to define them in a new ConcreteSection class and register the section in the __init__ method. No other changes to the manager!!
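Here is a minimal sketch of the idea (class, parameter and method names are illustrative, not the actual pipeline code):

class Section(object):
    # Atomic class responsible for managing a group of related parameters.
    PARAMS = None   # derived classes must override this tuple

    def __init__(self):
        if self.PARAMS is None:
            # Same spirit as raising NotImplementedError in an abstract method.
            raise NotImplementedError('%s must define a PARAMS tuple' % type(self).__name__)
        self._values = dict.fromkeys(self.PARAMS)

    def set(self, key, value):
        if key not in self.PARAMS:
            raise KeyError('Unknown parameter: %s' % key)
        self._values[key] = value

    def get(self, key):
        return self._values[key]

    def name(self):
        return type(self).__name__.lower()


class LoginSection(Section):
    # Related parameters live together: a username without a password is useless.
    PARAMS = ('username', 'password')


class Manager(object):
    def __init__(self):
        # Registering new sections here is the only change the manager needs.
        self.sections = {s.name(): s for s in (LoginSection(),)}


manager = Manager()
manager.sections['loginsection'].set('username', 'jiceq')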
Enough talking, here is a UML class diagram exposing the generic final design.