How to Get PCP on the Web

For years, I’ve watched our video team do amazing work shooting, editing, and encoding video for the web. I think most production companies would be shocked at how much high quality work our team produces with so few staff, a tight budget, and tighter time constraints.

When I look closely at how they do what they do, I’m impressed and just a little frightened at how many manual steps are involved in getting a video online. Manual steps that take time, attention to detail, and expertise, and that are ripe for mistakes.

I try to automate everything I touch. I can’t help it. It’s the curse of being a programmer.

I’ve looked at our video process a few times over the years, always thinking about how we might eliminate some of those manual steps. Aside from my instinctive cringing at anything that looks repetitive, the work we’re doing in online learning (and on the web in general) involves more and more student-created or student-supplied video, and the options for handling that have not been great. If there were no issues of copyright or privacy, we could just tell students to upload their videos to YouTube and be done with it. Unfortunately, that’s usually not possible, so students end up bringing their videos to us on flash drives or SD cards or floppy disks or punchcards or whatever they have, and our video team collects them, grabs the files, and processes and uploads them on the students’ behalf.

This is a big drain on our resources and takes the team’s time away from the shooting and editing work that actually makes better use of their skills and talents. It’s also inconvenient for students, who have to come to Butler Library during business hours to drop off their files, pick up their drives later, and so on. If they have to upload a video for a class assignment, they’d be much happier if they could just upload it directly themselves at 2am the night before the assignment is due (because we all know that’s how it works). Finally, if every student-supplied video has to come through our video team, it puts a severe limit on how many video-related assignments can realistically happen at once and how large a class can run them.

When I’ve looked at this stuff in the past, I’ve usually run into a wall pretty early on. This video encoding stuff is hard. All the possible input formats, codecs, bitrates, and aspect ratios are a royal pain. The tools that our video team uses to deal with those are generally OS X desktop apps that expect a user to point and click and aren’t that concerned with exposing an API to script. Then the delivery options as far as streaming servers, authentication schemes and podcasting tools, each with their own picky, proprietary interfaces just multiply the complexity.

If all that weren’t enough, integrating anything involving video with our web application world has a big, fat elephant of a problem: video is big. The files are orders of magnitude bigger than the images or text or rows of data in databases that we’re used to dealing with in our web apps. The web servers we run don’t have much disk space available on them (drives for those servers are much more expensive than consumer hard drives) and would fill up immediately if we put videos on them. Encoding jobs on long, high quality videos can take hours. Building web applications, I’m used to worrying about whether a request is taking too many milliseconds to complete. If an application takes more than a second or two to respond, users complain.

The desire to get our video work more tightly integrated with our web applications and to smoothly support user-supplied video content doesn’t go away, though. Some technologies and architectural patterns have come out (which I hope to write about here in the future) that offer solutions to the size-related problems, and we recently realized that enough of those impediments had eroded that it was worth investing some effort in the problem again.

In the last few years, our video process has also improved thanks to a lot of work building on Apple’s Podcast Producer software and its ability to manage custom workflows. Podcast Producer (or “PCP”, as we like to abbreviate it) integrates with most of the other video tools we use, has allowed us to largely automate the encode and upload process in many cases, and can manage distributing the workload across a grid of desktop and lab machines.

PCP, being from Apple, is very OS X desktop specific though, which doesn’t lend itself well to integration with our web applications running on Linux servers. Getting videos into PCP typically requires running OS X and installing and launching a desktop application. There is a web interface, Kino, but it’s pretty locked down, unintuitive, and won’t even load in a lot of browsers.

However, to me, that web interface, limited as it is, was the crack in the armor of the video problem that I’d been waiting for.

However we went about adding video upload support to our web applications, we knew we weren’t going to abandon all the work that had been done with PCP. The workflows that had been developed for it handled our encoding needs and were robust and well debugged. What I needed was a way for our web applications to communicate with PCP.

So I got out my dissecting tools and started figuring out how to talk to Kino from Python.

The result is a little library called Angeldust, which we have released in case anyone else needs to do something similar.

The code is relatively short but it took quite a while to get there. Kino clearly was not intended to be used in this way and fought me every step of the way.

It was undocumented and used some odd HTTP headers for SSL stuff (I don’t remember the exact details now, but it was why it wouldn’t even load in most non-Apple browsers). It used a weird combination of HTTP Basic Auth and cookie-based login sessions. The interface for the site was built not as plain HTML but as an almost completely client-side JavaScript application (similar to Gmail), constructed from obfuscated and minified JavaScript. That made it a pain to figure out which form parameters were being used and when they were being passed back and forth to the backend.
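I won’t reproduce angeldust’s internals here, but if you ever need to talk to something with a similar mix of auth schemes, the general shape is roughly this (a sketch using the requests library; the /login path is a placeholder, not Kino’s real endpoint):

import requests

# Illustrative only: combine HTTP Basic Auth with a cookie-based login
# session. A requests.Session sends the Basic Auth credentials on every
# request and holds on to whatever session cookie the server sets.
session = requests.Session()
session.auth = ("username", "password")
session.verify = False  # Kino's SSL setup was nonstandard

# The login response sets the session cookie; later requests made on this
# Session object send it back automatically.
response = session.get("https://mykinoserver/login")
response.raise_for_status()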

When submitting a video to be processed by a workflow, Kino breaks the operation into two steps: first you select the workflow, then you upload the video with its title and description. These are two separate requests to the backend, with the state stored in the session. That was easy enough to figure out and deal with, but this kind of stateful interface is annoying and fragile since the underlying HTTP is stateless. An application trying to interface with Kino has to make two separate requests to accomplish one action: first set the workflow, then submit the video. This opens it up to race condition bugs in a concurrent environment if one isn’t very careful.
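To make the statefulness concrete, the two-request dance looks roughly like this sketch. The endpoint paths and form field names are invented for illustration; only the shape of the interaction mirrors what Kino does:

import requests

session = requests.Session()
session.auth = ("username", "password")

# Step 1: select the workflow. The server remembers the choice in the
# session rather than in the request that actually carries the video.
# ("/selectWorkflow" and the field names here are hypothetical.)
session.post("https://mykinoserver/selectWorkflow",
             data={"workflow_uuid": "uuid-of-workflow"})

# Step 2: upload the video. The server applies whatever workflow was
# selected last on this session, so if another submission sneaks in
# between these two calls, the wrong workflow gets applied -- the race
# condition mentioned above.
with open("some_video.avi", "rb") as f:
    session.post("https://mykinoserver/upload",
                 data={"title": "title", "description": "description"},
                 files={"file": ("some_video.avi", f)})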

The Kino interface has a few more administrative features, but the only functions we really needed were to get a list of the available PCP workflows, by title and UUID, and to submit a video to one of those workflows. That is the functionality that angeldust exposes.

Using it is fairly straightforward:

from angeldust import PCP

pcp = PCP("https://mykinoserver/url/", "username", "password")
# list the available workflows by title and UUID
for wf in pcp.workflows():
    print "workflow '%s' has UUID %s" % (wf['title'], wf['uuid'])
# submit a video file to one of those workflows
pcp.upload_file(open("some_video.avi", "rb"), "some_video.avi", "uuid-of-workflow", "title", "description")

Angeldust handles everything else for you. It’s careful to stream the video upload in chunks instead of trying to read the entire file into memory first.
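That last detail is easy to get wrong, so here is the general idea in isolation (not angeldust’s actual code, just a sketch of the chunked-read technique):

def read_in_chunks(fileobj, chunk_size=64 * 1024):
    # Yield successive fixed-size chunks so the whole file never has to
    # sit in memory at once.
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Each chunk is written to the outgoing HTTP connection as it is read.
with open("some_video.avi", "rb") as f:
    for chunk in read_in_chunks(f):
        pass  # send chunk over the wire here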

Unfortunately, there are still things that angeldust can’t really do and won’t be able to without some changes to Kino.

First, filename, title, and description are the only metadata fields available. PCP has the ability to deal with more metadata, but Kino actively ignores everything except those few fields. Actually, we’ve found that Kino also loses the original filename as soon as the video is uploaded, before it makes it into the PCP workflows. Aside from preventing us from exploring some more interesting automatic publishing use cases, this makes it hard to even track an uploaded video through its whole lifecycle. Filenames and titles aren’t typically enough to uniquely identify a video, so we end up having to insert unique IDs into those fields in ad-hoc and brittle ways.
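For what it’s worth, that workaround looks roughly like this; the ID format is just a convention we make up per project, not anything angeldust or PCP enforces:

import uuid
from angeldust import PCP

pcp = PCP("https://mykinoserver/url/", "username", "password")

# Embed a generated ID in the filename and the title so the video can be
# matched back up after it comes out the other end of the workflow.
video_id = uuid.uuid4().hex
pcp.upload_file(open("some_video.avi", "rb"),
                "%s.avi" % video_id,
                "uuid-of-workflow",
                "%s - student assignment" % video_id,
                "description")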

Angeldust is the first key piece in getting PCP to integrate with web applications for student-supplied video, and we hope that others will find it as useful as we do.