Hacking Media Manager with the Canvas API

In July of 2015 I had the opportunity to participate in the first ever SDL Hackathon. The rules were relatively simple: Pick an SDL product, do something cool with it.

In recent months I’ve been learning quite a bit about the Canvas API, an interface that’s part of the new HTML5 specifications. So, I decided to see if I could present videos from SDL’s Media Manager tool using the Canvas API. As it turns out, not only can you present videos with the Canvas API, you can also win a hackathon as a result.

What we’ll do here is walk through what was submitted, how it works, and the opportunities it creates.

The submission

If you want to view the full submission, it’s over on GitHub. The submission consisted of three components:

  1. A few HTML pages for demo purposes
  2. A stylesheet with spiffy effects
  3. A jQuery plugin

Why Hack Media Manager?

A perfectly reasonable question is, “Why do you need a media manager plugin?”

You don’t. SDL provides a full-featured player by default.

But that player doesn’t help a designer make videos more… designy. It just shows a video on a page. These days we’re seeing companies like The Boston Consulting Group overlay content on a video that’s wrapped inside of a vertical slider; there are a lot of engaging ways we can present videos.

Now that we can put video on a web page without using Flash, designers and content producers alike are using video as both content and presentation. But SDL’s Media Manager only allows us to present videos as content. Media Manager doesn’t enable content authors to make a video part of the presentation layer or to even change the presentation of an existing video. We want something that gives strong presentation controls over a video.

Presenting Videos

The submission was based entirely on the idea that a video doesn’t have to just be content; we want to be empowered to add effects for the sole reason of improving a design. But a few shiny effects aren’t enough. A content author needs to be able to control how a video is presented. So that led to a few basic requirements for what should be delivered:

  • A non-developer should be able to easily make changes
  • It should provide familiar effects
  • It should provide controls that don’t yet exist

Making it work

Creativity seldom happens in isolation, and the Media Manager plugin is no exception. I was researching HTML5’s Canvas API when I discovered a small demo where Remy Sharp captures a video stream from a web camera and allows a user to change the color of the video stream. The second source of inspiration was an HTML5 Doctor article that also gave some examples of merging canvas and video. Both of these articles inspired me to use Canvas to add some effects to videos.

If you’re unfamiliar with the Canvas API, or only have some basic ideas of how it works, I’d recommend seeing what the Mozilla Developer Network has to say. The best that I can do here is tell you that the Canvas API is a bitmap graphics interface for the browser. This means that we’re able to draw individual pixels into a … canvas.
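
If a concrete snippet helps, here is a tiny, standalone sketch of drawing onto a canvas. It has nothing to do with Media Manager yet; it just shows the basic moves:

    // Grab a <canvas> that's already on the page and get its 2D drawing context.
    var canvas = document.querySelector('canvas');
    var ctx = canvas.getContext('2d');

    // Paint a block of pixels the easy way...
    ctx.fillStyle = '#639';
    ctx.fillRect(10, 10, 50, 50);

    // ...or go pixel-by-pixel with ImageData: one red, fully opaque pixel at (0, 0).
    var pixel = ctx.createImageData(1, 1);
    pixel.data[0] = 255; // red
    pixel.data[3] = 255; // alpha
    ctx.putImageData(pixel, 0, 0);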

The Canvas API opens up a fountain of opportunities that the development community has only begun to tap. Tahzoo is doing better than most, though. We recently used the Canvas API to detect the brightness of images in the browser.
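
That brightness detection is a story for another post, but the gist of the approach looks something like the sketch below. It’s a simplification, and it assumes the image is hosted same-origin so that getImageData() isn’t blocked by the browser:

    // Rough sketch: estimate the average brightness (0 = black, 1 = white) of an image.
    function imageBrightness(img) {
        var canvas = document.createElement('canvas');
        var ctx = canvas.getContext('2d');
        canvas.width = img.naturalWidth;
        canvas.height = img.naturalHeight;
        ctx.drawImage(img, 0, 0);

        var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
        var total = 0;
        for (var i = 0; i < data.length; i += 4) {
            // Weighted average of the red, green, and blue values of each RGBA pixel
            total += (data[i] * 3 + data[i + 1] * 4 + data[i + 2]) >>> 3;
        }
        return total / (data.length / 4) / 255;
    }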

A Bird’s-eye View

Before we get into the code, let’s discuss the high-level functions of the jQuery plugin that’s using canvas. The plugin will need to do a few things:

  1. Grab all of the attributes off of an HTML element
  2. Convert the distribution URL to one that grabs JSON data instead
  3. Get the JSON data
  4. Create a <video> and <source>
  5. Create a <canvas>
  6. When the video starts to play, send the video to the canvas
  7. Do some cool stuff with the canvas

There are some other things in the plugin, such as event handling and some custom event triggers, but we won’t discuss those in this article. For now, we’ll just go through the basics. But rest assured, the whole plugin added up to 358 lines, which is pretty light as far as plugins go.

Grab some attributes

The plugin doesn’t accept input the same way that traditional jQuery plugins would.

Traditional jQuery plugins accept input via parameters passed into a method, like this:

    $('.mySelector').somePlugin({ do: 'this', thenDo: 'that', thisManyTimes: 2 });

This is great for developers, but terrible for content authors, who are seldom acquainted with JavaScript. We want input to get passed into the plugin via something that the content author might actually have control over, such as the HTML. So, the Media Manager plugin accepts parameters like this:

        <figure class="sdlmm" data-sdlmm-url="https://poc5.dist.sdlmedia.com/Distributions/?o=7045941B-0652-49ED-A5A9-240A91636FE5" data-sdlmm-autoplay="true" data-sdlmm-loop="true" data-sdlmm-volume="0.0"  data-sdlmm-type="json">
        </figure>

Over on the jQuery plugin side, we have to look for these data- attributes. And we’ll do that at the top of the plugin, with this:

        return this.each(function() {
            // Merge any JavaScript parameters with data already stored on the element
            this.data = $.extend($(this).data('sdlmm'), params);
            // Fold every data-sdlmm-* attribute into that configuration
            for (var attr, name, i = 0, attrs = this.attributes, l = attrs.length; i < l; i++) {
                attr = attrs.item(i);
                if (attr.nodeName.match('data-sdlmm-')) {
                    name = attr.nodeName.replace('data-sdlmm-', '');
                    this.data[name] = attr.nodeValue;
                }
            }
        });

First, we’ll allow the plugin to accept JavaScript parameters. Then, we’ll loop through all of the attributes on the element that start with data-sdlmm- and add those to the data that we’ve already got.
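
With the configuration living in the markup, initializing the plugin from JavaScript becomes a one-liner. I’m assuming the plugin method is registered as sdlmm() and that parameter keys mirror the data-sdlmm-* suffixes in the example below; adjust to whatever you register:

    // All of the real configuration lives on the <figure> element's data- attributes.
    $('.sdlmm').sdlmm();

    // JavaScript parameters still work, but a data-sdlmm-* attribute on the element
    // overwrites the matching parameter, since the attributes are read last.
    $('.sdlmm').sdlmm({ volume: 0.5 });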

Convert the Distribution URL

The plugin blends functional and object-oriented styles of programming. So, converting the Distribution URL to something useful involves defining a ResourceUrl function and then calling it to store the converted URL on our object.

this.ResourceUrl = function() {
    var url = this.data.url;
    switch (this.data.type) {
        case 'json':
            url = this.data.url.replace('Distributions/?o=', 'json/');
            break;
        case 'embed':
            break;
        default:
            break;
    }
    return url;
};
this.resourceUrl = this.ResourceUrl();

Once we’ve converted the URL, we add this data back onto this, which is our jQuery object.
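
Concretely, for the distribution URL in the markup above, the json case just swaps one path segment for another:

    // Before: https://poc5.dist.sdlmedia.com/Distributions/?o=7045941B-0652-49ED-A5A9-240A91636FE5
    // After:  https://poc5.dist.sdlmedia.com/json/7045941B-0652-49ED-A5A9-240A91636FE5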

Get the JSON Data

Getting the JSON data is a basic Ajax request. We first make the request, like so:

$.ajax({
    url: this.resourceUrl,
    method: "GET",
    dataType: "json"
})

We’ll use the .done() method to do some things with our data. Using .done() also decouples us from the callback:

$.ajax({
    url: this.resourceUrl,
    method: "GET",
    dataType: "json"
})
.done(function(jsonData) {
        // start doing stuff in here
});

Nothing really exciting is happening yet. But, it’s about to.

Create a video

Now that we’ve got data, we can do something with it. I won’t go into a full explanation of Media Manager’s JSON API here, but I will summarize it:

  • Assets are the things we want
  • renditionGroups contain the videos
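
To give a feel for what the .done() handler below is walking through, the relevant slice of the response looks roughly like this. It’s a simplified sketch inferred from how the plugin reads the data, not the full Media Manager payload:

    // Approximate shape only; real responses carry far more fields.
    var exampleResponse = {
        assetContainers: [{
            assets: [{
                metadata: { properties: { /* title, description, and friends */ } },
                renditionGroups: [{
                    name: 'Web',
                    renditions: [
                        { url: 'https://example.cdn/video-720p.mp4' } // placeholder URL
                    ]
                }]
            }]
        }]
    };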

We’ll again use an object-oriented approach to create a video for the renditionGroup whose name contains Web. There’s only ever one:

.done(function(jsonData) {
    var containers = jsonData.assetContainers,
        assets = containers[0].assets[0],
        renditionGroups = assets.renditionGroups,
        video;
    renditionGroups.forEach(function(group) {
        if (group.name.indexOf('Web') !== -1) {
            var video = _this.Video(group.renditions, assets.metadata.properties);
            _this.appendChild(video);
            _this.videoEl = video;

        }
    });
});

When we’re creating a video object from our Video class, we’ll send rendition information and metadata. At the time that the Video is created, we’re making the DOM node, applying any attributes, and binding events:

this.Video = function(renditions, metadata) {
    var video = document.createElement('video');

    renditions.forEach(function(resource) {
        var source = document.createElement('source');
        source.src = resource.url;
        video.appendChild(source);
    });
    _this.setVideoAttributes(video, metadata);
    _this.setVideoUI(video, metadata);
    _this.addVideoEventListeners(video);
    video.autoplay = _this.data.autoplay;
    video.loop = _this.data.loop;


    return video;
};

Create a Canvas

Now that we’ve created a video, we need to create our canvas. We’ll take the same pattern for the video, and apply it right there in the .done() of our Ajax call:

.done(function(jsonData) {
    var containers = jsonData.assetContainers,
        assets = containers[0].assets[0],
        renditionGroups = assets.renditionGroups;
    renditionGroups.forEach(function(group) {
        if (group.name.indexOf('Web') !== -1) {
            var video = _this.Video(group.renditions, assets.metadata.properties),
                canvas;
            _this.appendChild(video);
            _this.videoEl = video;

            canvas = _this.Canvas(video, assets.metadata.properties);
            _this.canvas = canvas;
            _this.ctx = _this.canvas.getContext('2d');

            canvas.height = video.offsetHeight;
            canvas.width = video.offsetWidth;
            _this.appendChild(canvas);
        }
    });
});

It’s not enough just to create a canvas element. In order to use the canvas, we have to create it and then get its drawing context. That’s why you’ll see _this.canvas.getContext('2d'). Canvas also doesn’t like having its size set with CSS, so we must set the dimensions explicitly on the canvas element.
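
A quick illustration of that difference: the width and height properties size the drawing surface itself, while CSS merely stretches whatever has already been drawn.

    // Sizes the actual drawing surface (what drawImage() and getImageData() operate on).
    canvas.width = 640;
    canvas.height = 360;

    // Only scales the finished surface on screen, like stretching an image.
    canvas.style.width = '640px';
    canvas.style.height = '360px';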

Send the Video to the Canvas

So, when do we start sending the video into the canvas? That’s the juicy part, right? Remember way back, when we created the video object, that there was this method in there called _this.addVideoEventListeners(video);? Let’s take a look at that:

this.addVideoEventListeners = function (video) {
    video.addEventListener('play', _this.callbacks.vidPlay, false);
};

When we add event listeners, we’ve got one particular event that’s deceptively important, play. We only want to send the video into the canvas when the video is playing. So, when the video starts to play, we’ll draw the video frame onto the canvas.

vidPlay: function(e) {
    if (_this.ctx !== undefined) {
        // 'this' is the <video> element, so we can use its intrinsic dimensions
        _this.drawCanvas(this, 0, 0, this.videoWidth, this.videoHeight);
    }
}

In this function, this represents the video node itself, so that’s what we’ll send into our drawCanvas() function.

Drawing on the Canvas

Let’s explore the drawCanvas() function, as we’re getting closer to seeing this come together:

this.drawCanvas = function(src, x, y, w, h) {
    if (src === undefined || src.paused || src.ended) return false;
    _this.ctx.drawImage(src, x, y, w, h);
    window.requestAnimationFrame(function() {
        _this.drawCanvas(_this.videoEl, x, y, w, h);
    });
};

We’re sending the video node and some positioning and sizing values into the function. We’ll first make sure the source is actually there and that it isn’t paused or ended. Then we’ll use the .drawImage() function, which is native to the Canvas API. This function accepts either image nodes or video nodes, and this is really where the magic begins.

At this point we’ve drawn a single frame of the video. But, alas, a video is more than just one frame! That’s where .requestAnimationFrame() saves the day. Once the previous frame has been drawn, the browser calls our function again just before its next repaint, which keeps the drawing in sync with the screen instead of running on an arbitrary setInterval() timer.

So, we’ve grabbed a video, sent it into drawCanvas(), we’ve fired drawImage(), and we’re going to draw another one, just as soon as we get it.
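
Stripped of the plugin plumbing, that whole loop boils down to something like this standalone sketch (the element IDs are made up for the example):

    var video = document.getElementById('myVideo');
    var canvas = document.getElementById('myCanvas');
    var ctx = canvas.getContext('2d');

    video.addEventListener('play', function draw() {
        if (video.paused || video.ended) return;
        // Copy the current frame, then queue the next copy right before the next repaint.
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        window.requestAnimationFrame(draw);
    });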

Doing Cool stuff

In truth, creating a video element and sending it to canvas isn’t terribly difficult, but it’s not quite cool. Not yet.

Let’s review the steps we’ve taken thus far to get to where we are.

  1. Grab some data
  2. Make a <video> with it
  3. Send that video into a <canvas>
  4. Refresh your frame after the last one is drawn

Why discuss this? Because this helps us figure out where we can do something cool.

Color Shifting

One effect that wins cool points on the cheap is converting a video from color to grayscale. We want to do this in our drawCanvas() function, but before we run requestAnimationFrame():

this.drawCanvas = function(src, x, y, w, h) {
    if (src === undefined || src.paused || src.ended) return false;
    _this.ctx.drawImage(src, x, y, w, h);
    if (_this.data.colorshift && _this.data.colorshift !== 'none') {
        var pixels = _this.ctx.getImageData(0, 0, w, h),
            i = 0,
            brightness;
        // Walk every RGBA pixel, approximate its brightness, and rebuild each channel from it
        for (; i < pixels.data.length; i += 4) {
            brightness = ((pixels.data[i] * 3 + pixels.data[i + 1] * 4 + pixels.data[i + 2]) >>> 3) / 256;
            pixels.data[i] = ((_this.colorizing.rgb.r * brightness) + 0.1) >> 0;
            pixels.data[i + 1] = ((_this.colorizing.rgb.g * brightness) + 0.1) >> 0;
            pixels.data[i + 2] = ((_this.colorizing.rgb.b * brightness) + 0.1) >> 0;
        }
        _this.ctx.putImageData(pixels, 0, 0);
    }
    window.requestAnimationFrame(function() {
        _this.drawCanvas(_this.videoEl, x, y, w, h);
    });
};

If that’s a little daunting, it’s ok; math frightens me, too.

If you’ll recall, I said that the Canvas API was a bitmap graphics interface. Bitmap graphics store data about each individual pixel. What we’re doing is leveraging that exact fact:

  1. Check and see if we’ve got a colorshift attribute
  2. Use the Canvas API’s native getImageData() to get an array (which is every pixel drawn in the canvas)
  3. Loop through that array, skipping ahead four elements at a time (each pixel’s red, green, blue, and alpha values are stored sequentially)
  4. Get the brightness of the given pixel
  5. Use the brightness of the pixel, with a bitwise operation, to scale each color channel so the pixel keeps the same perceived brightness (there’s a quick worked example just after this list)
  6. Put the pixels back on the canvas
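
As a quick worked example using the approximation in the loop above: a warm orange pixel of (200, 100, 50) gives a brightness of ((200 × 3 + 100 × 4 + 50) >>> 3) / 256, which is 131 / 256, or about 0.51. With the white (255, 255, 255) colorizing values, each channel becomes (255 × 0.51 + 0.1) >> 0 = 130, so the pixel ends up as the neutral gray (130, 130, 130) with the same perceived brightness.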

Now, one thing you’ll notice is that this function depended on a property called colorizing, which isn’t listed here. That’s because we define it in our Ajax call, so that it isn’t something that’s stuck being defined and redefined in a loop:

$.ajax({
    url: this.resourceUrl,
    method: "GET",
    dataType: "json"
})
    .done(function(jsonData) {
        var containers = jsonData.assetContainers,
            assets = containers[0].assets[0],
            renditionGroups = assets.renditionGroups;
        renditionGroups.forEach(function(group) {
            if (group.name.indexOf('Web') !== -1) {
                var video = _this.Video(group.renditions, assets.metadata.properties),
                    canvas;
                _this.appendChild(video);
                _this.videoEl = video;
                if (_this.data['canvas-effects']) {
                    if (_this.data['colorshift'] === 'grayscale' || _this.data['colorshift'] === 'gray') {
                        _this.colorizing.rgb.r = 255;
                        _this.colorizing.rgb.g = 255;
                        _this.colorizing.rgb.b = 255;
                    }
                    canvas = _this.Canvas(video, assets.metadata.properties);
                    _this.canvas = canvas;
                    _this.ctx = _this.canvas.getContext('2d');

                    canvas.height = video.offsetHeight;
                    canvas.width = video.offsetWidth;
                    _this.appendChild(canvas);
                }
            }
        });
    });

It might seem odd that the colorizing.rgb object’s colors are all the same. That’s because, on screens, neutral colors are ones where the red, green, and blue values are identical. If they’re identical and all at 255, then you get white; set them all to 0 and you get black. So, if we’re supposed to colorshift the video, we set the colorizing property to white.
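
The same mechanism tints just as easily as it desaturates. The code above only sets these values for grayscale, but pointing them at a non-neutral color (the sepia-ish numbers below are only an illustration) would wash the whole video in that tint:

    // Hypothetical values: anything non-neutral here tints the video rather than graying it.
    _this.colorizing.rgb.r = 240;
    _this.colorizing.rgb.g = 200;
    _this.colorizing.rgb.b = 145;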

The Opportunities

A lot more went into the Media Manager plugin than just a content author’s ability to convert a video to black and white. That content author can zoom in on a video. Not only that, the editor can insert text into the video, or even use Media Manager’s Custom Events interface to insert text at specific times and with specific animations.

There’s still more that could be done. We could create time-based color fade-ins, or color shift the entire video (think O Brother, Where Art Thou?). If we’re laying text on top of a video, or inside it, we could detect the brightness of a specific section of the video and adjust the text color appropriately.

That’s just the starting point. The big idea here is that the pixels are ours to control.

Note: This originally appeared on blog.tahzoo.com, the blog site of my employer, Tahzoo. Tahzoo migrated their blog to their primary domain. As part of that migration, the decision was made to only migrate two years worth of content which excluded this post.

I had written this post for my own blog, and Tahzoo approached me and suggested that I instead publish it for them. So what you are reading is my content, written for my own domain, and now published on it.