Monday, April 18, 2016

FFMPEG and Multiprocessing

We are nearing the end of production here; very few of us are left standing, and with them goes much of our daily joy, because "there is no good or bad company": it's the people who make it up, the people you deal with day after day, that count and make the working environment such a great place.

Anyway, I'm going to talk (as usual) about the latest tool I've had to code at work. Apparently there had been a mismatch of shots between the two studios involved, and from a production point of view they needed the whole movie as playblasts, sequentially, with the screen split in two: on top the latest anim playblast and at the bottom the latest refine, so they could compare them and make sure the external studio was getting the latest version for lighting, etc.

My first thought was to have a look at the Adobe Premiere SDK and see if by any chance it had a Python API I could play with.

After a bit of research I found there was no way to import an XML file with shots and durations and automatically convert it to a final video. Also, the only thing you can do with Premiere is plugin development in C++ at the "filters level", which means it is nowhere near as tweakable as Maya. It was complete overkill for my needs.

Then somehow I started to look for tools on Linux and bumped into FFmpeg. I was surprised and amazed it hadn't been my first choice! Now I would happily recommend it to anyone having to play with video compositing and mixing.

Now I can start to code!

TOOL SKELETON

First, iterate through the anim and refine folders. As there is no conflict and no need to share data, this can easily be parallelized. Each process fills a dictionary where the key looks like "ACT0X_SQ00XX_SH00XX" and the value is the complete filepath of the most recent file.

       
import os
import re
from multiprocessing import Process, Manager

DST_TMP_LAST_ANIM = '/tmp/last_anim'                         # placeholder root
DST_TMP_LAST_CROWD_OR_REFINE = '/tmp/last_crowd_or_refine'   # placeholder root

def return_file_dict(root, queue):
    '''
    iterate through one filesystem branch and fill the dictionary,
    finally put it in the multiprocess-safe queue
    '''
    file_dict = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for filename in filenames:
            match = re.search(r'ACT\d+_SQ\d+_SH\d+', filename)  # hypothetical pattern
            if match:
                filepath = os.path.join(dirpath, filename)
                key = match.group()
                # keep only the most recent file per shot
                if key not in file_dict or \
                   os.path.getmtime(filepath) > os.path.getmtime(file_dict[key]):
                    file_dict[key] = filepath
    queue.put(root)        # root goes first so main() knows which dict follows
    queue.put(file_dict)

def main():
    process_list = []
    queue = Manager().Queue()
    for root in (DST_TMP_LAST_ANIM, DST_TMP_LAST_CROWD_OR_REFINE):
        p = Process(target=return_file_dict, args=(root, queue))
        process_list.append(p)
        p.start()

    for p in process_list:
        p.join()

    # rescue both dictionaries and merge top/bottom with ffmpeg
    results = {}
    while not queue.empty():
        root = queue.get()
        results[root] = queue.get()
    last_anim_dict = results.get(DST_TMP_LAST_ANIM, {})
    last_crowdrefine_dict = results.get(DST_TMP_LAST_CROWD_OR_REFINE, {})
       
 


After that, all I needed was to filter both dictionaries and, for each shot/entry present in both, merge its two playblasts with FFmpeg.
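
A minimal sketch of what that filtering could look like, assuming the two dictionaries built above; the /tmp/merged output folder is a placeholder:

# pair every shot key present in both dictionaries; each pair will feed
# one ffmpeg command further below
for shot_key in sorted(set(last_anim_dict) & set(last_crowdrefine_dict)):
    last_anim_filepath = last_anim_dict[shot_key]
    last_crowdrefine_filepath = last_crowdrefine_dict[shot_key]
    output_filepath = os.path.join('/tmp/merged', shot_key + '.mov')  # placeholder
    # ...append the ffmpeg vstack command shown further below...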

Executed one by one, these commands would each open a gnome-terminal. So the next thought was to pipe all the commands into a single string which would then be executed in one call to subprocess.call.

This was a good idea in the sense that it would require only one call and everything would happen in the same terminal/Linux process. But there was a problem: the operating system limits the number of characters you can pass as a command (the kernel's ARG_MAX covers the arguments plus the environment), so a single giant command string sent through subprocess.call can fail.
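
A quick sanity check of that limit on Linux, via the standard os.sysconf:

import os
# argv plus the environment must fit in ARG_MAX bytes, or exec fails with E2BIG
print(os.sysconf('SC_ARG_MAX'))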

The next logical step was to say: OK, I can't send all the commands as one string, but I can dump the string to a shell script file and execute that shell script from within the subprocess call!

       
import os
import subprocess

FFMPEG_COMMANDS = '/tmp/ffmpeg_commands.sh'  # placeholder path for the generated script

#
# compound all the shell script commands into command_element_string
#

with open(FFMPEG_COMMANDS, 'w') as f:
    f.write(command_element_string)

# a single call: the whole script runs in one terminal/linux process
command = 'sh ' + FFMPEG_COMMANDS
subprocess.call(['gnome-terminal', '-x', 'bash', '-c', command],
                shell=False, env=os.environ.copy())
    
 


FFMPEG SHELL SCRIPT CALLED FROM SUBPROCESS

       
command_element_string += (
    'ffmpeg -y -i ' + last_anim_filepath + ' -i ' + last_crowdrefine_filepath +
    ' -filter_complex "[0:v]scale=w=999:h=540[v0];[1:v]scale=w=999:h=540[v1];'
    '[v0][v1]vstack=inputs=2[v]" -map "[v]" -map $RESULT:a -ac 2 -b:v 4M ' +
    output_filepath + ';\n'
)
    
 


This forces the resolution of each of the two videos we vertically stack to w=999, h=540. We force it because vstack requires its inputs to share the same width; if the resolutions don't match, the conversion fails.
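
If you would rather detect a mismatch up front, ffprobe can report each input's resolution; a small sketch (the filepath is whichever playblast you are checking):

import subprocess
# prints "width,height" of the first video stream, e.g. "999,540"
subprocess.call(['ffprobe', '-v', 'error', '-select_streams', 'v:0',
                 '-show_entries', 'stream=width,height',
                 '-of', 'csv=p=0', last_anim_filepath])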

Another important piece here is the "$RESULT" value, which must be 0 or 1 depending on which input's audio track we choose to embed in the output file.

This can vary, since there were playblasts from anim as well as from refine that were missing audio. $RESULT is the result of running an ffprobe test on each of the two files to ask for audio info.

The only unavoidable case left is when neither of the two files has an audio track, in which case the conversion fails. I could take care of this as well, but I haven't hit that case yet, so most probably I won't treat it.

This is the function embedded in the shell script:

       
# bash helper embedded in the generated script: probe both inputs for an
# audio stream and return 0 (anim) or 1 (refine) as the track to map
command_element_string = '''function ttl_ffprobe()
{
RESULT_ANIM="" ;
RESULT_REFINE="" ;
RESULT_ANIM=$(ffprobe -i $1 -show_streams -select_streams a -loglevel error) ;
RESULT_REFINE=$(ffprobe -i $2 -show_streams -select_streams a -loglevel error) ;
CHANNEL_SELECTION=0 ;
if [ -z "$RESULT_ANIM" ]
then
    CHANNEL_SELECTION=1
fi
return $CHANNEL_SELECTION
}
'''
    
 

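The post doesn't show how the function gets invoked per shot; presumably the generated script calls it right before each ffmpeg line and rescues the return value from bash's $? into $RESULT. A sketch of what that appended snippet could look like:

command_element_string += 'ttl_ffprobe ' + last_anim_filepath + ' ' + last_crowdrefine_filepath + ' ;\n'
command_element_string += 'RESULT=$? ;\n'
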
Once we have every shot's anim and refine playblasts stacked, all that is left is to merge them all into the final sequence/movie, which can easily be done with the "cat" command and a properly chosen container. I refer you to the documentation: https://ffmpeg.org/ffmpeg.html
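
Plain cat only works for a few containers (MPEG-TS, for instance); an alternative that avoids re-encoding for most containers is FFmpeg's concat demuxer. A sketch, where shot_output_dict and the /tmp paths are hypothetical:

shot_files = sorted(shot_output_dict.values())  # hypothetical {shot_key: output_filepath}
with open('/tmp/concat_list.txt', 'w') as f:
    for filepath in shot_files:
        f.write("file '%s'\n" % filepath)

subprocess.call(['ffmpeg', '-f', 'concat', '-safe', '0',
                 '-i', '/tmp/concat_list.txt',
                 '-c', 'copy', '/tmp/full_movie.mov'])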
