by Pablo Schklowsky on 2011-02-22 14:50
Doesn't Flash already support hardware acceleration for H.264?
In general, the answer to this question is "yes," if you're using Flash version 10.1. Your mileage may vary, depending on system hardware capabilities and Flash's support for that hardware. Here's how standard Flash hardware acceleration works: the GPU (or a dedicated decoder chip) decodes the H.264 bitstream, and Flash then reads each decoded frame back, composites it with the rest of the Flash graphics, and renders the result once per frame.
This method is vastly more efficient than the non-hardware-accelerated alternative, where Flash performs the H.264 decoding in software.
If Flash already has hardware acceleration, why do we need a new method?
In general, enabling hardware acceleration for H.264 decreases the load on the CPU. However, because Flash is still rendering video once per frame and compositing all of the Flash graphics with the video, there is still significant overhead involved. This manifests itself more clearly with larger videos (1080p and above) and slower CPUs.
Adobe's solution is to remove the video from Flash's rendering pipeline entirely. Enter Stage Video.
Here's how it works: the decoded video is presented on a hardware-accelerated plane that sits behind the Flash stage, so Flash never touches the video frames at all; it only renders its own graphics on top.
Because Flash no longer has to make any calculations to render the video, this takes even more load off of the CPU.
Are there any limitations to Stage Video?
Yes, there are a couple of limitations, and for some applications they can be deal-breakers. Because Stage Video sits outside of Flash's display list, the video always renders behind all other Flash content, and effects such as rotation, alpha, filters, and blend modes cannot be applied to it. Luckily, for most web video player applications, such as the JW Player, these limitations aren't a big deal.
When will the JW Player support Stage Video?
We haven't nailed down a timeline for Stage Video support, but it's on the short list of future player improvements. We've already created a player with experimental support for Stage Video. You can download it here, and enable Stage Video acceleration by setting the provider option to stage. For now, only standard progressive download mode is supported (no HTTP or RTMP streaming).
To benchmark the new player, open up your Task Manager (on Windows) or Activity Monitor (on Mac OS X) and take a look at Flash's CPU utilization while watching an HD video. Try your video with provider set to stage (Stage Video on), then switch provider to video to see how the player performs without Stage Video.
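For reference, a setup call along these lines toggles the two modes. This is only a sketch assuming the standard JW Player 5 JavaScript embed API; the file name and dimensions are placeholders.

```javascript
// Hypothetical benchmark setup; swap provider between "stage" and "video".
jwplayer("container").setup({
  flashplayer: "player.swf",   // the experimental player build
  file: "video-1080p.mp4",     // an HD video via progressive download (placeholder name)
  provider: "stage",           // "stage" = Stage Video on; "video" = Stage Video off
  width: 1280,
  height: 720
});
```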
As always, we'd love to hear your feedback in the comments below.
by Jeroen Wijering on 2011-02-15 11:36
Last week, the W3C held its Second Web & TV Workshop in Berlin. The workshop focused on the convergence of web technology and broadcasting. In other words, how will web and television work together to eventually merge?
Along with sessions on second-screen scenarios and accessibility, the workshop covered adaptive streaming and content protection. Both sessions were especially compelling, since streaming and protection are two important gaps in today's HTML5 video support.
Adaptive Streaming: DASH
Adaptive streaming is a technology that enables high-quality video streaming from any regular web server. Each adaptive stream is stored in multiple quality levels. Video players continuously request small fragments from these files (e.g. through range-requests) and seamlessly glue them together into one presentation. This technology is especially well suited for mobile (3G, 4G, WiFi) video delivery because video players can quickly adapt to changing bandwidth conditions by loading fragments from another quality level.
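To make the adaptation step concrete, here is a minimal JavaScript sketch of the level-selection logic such a player runs before each fragment request. The bitrate ladder and headroom factor are illustrative assumptions, not values from any spec.

```javascript
// Quality levels (bitrates in kbps), as a manifest might list them.
const levels = [400, 800, 1600, 3200];

// Pick the highest level whose bitrate fits within the measured bandwidth,
// keeping some headroom so the buffer does not drain on small fluctuations.
function pickLevel(measuredKbps, levels, headroom = 0.8) {
  let chosen = levels[0];
  for (const bitrate of levels) {
    if (bitrate <= measuredKbps * headroom) chosen = bitrate;
  }
  return chosen;
}

// The player would then fetch the next fragment of the chosen level,
// e.g. with an HTTP range request:
//   GET /video_1600.mp4   Range: bytes=524288-1048575
console.log(pickLevel(5000, levels)); // 3200
console.log(pickLevel(1500, levels)); // 800
```

In a real player the measured bandwidth is re-estimated after every fragment download, which is what makes the switching responsive to changing 3G/4G/WiFi conditions.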
In the workshop's adaptive streaming session, the main focus was on MPEG DASH, a just-released specification from the Moving Picture Experts Group. DASH aims to standardize streaming of video over HTTP, since today's solutions from Apple, Adobe and Microsoft are 95% the same but 100% incompatible.
In a nutshell, DASH specifies the format of the XML file that lists the available quality levels (the manifest). The spec provides guidance around encoding video for adaptive streaming (mostly for MP4) and around stitching the fragments together in a video player. See this paper and these slides from MPEG DASH co-author Thomas Stockhammer for more information.
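For a rough idea of what such a manifest looks like, here is a heavily simplified, hypothetical MPD. The element and attribute names follow the DASH schema, but real manifests carry considerably more detail (segment lists, profiles, timing information).

```xml
<?xml version="1.0"?>
<!-- Hypothetical, simplified DASH manifest: one video with two quality levels -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static" mediaPresentationDuration="PT60S">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <!-- One Representation per quality level; players pick by bandwidth -->
      <Representation id="360p" bandwidth="800000" width="640" height="360">
        <BaseURL>video_360p.mp4</BaseURL>
      </Representation>
      <Representation id="720p" bandwidth="2400000" width="1280" height="720">
        <BaseURL>video_720p.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```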
Moving forward, two outstanding issues must be resolved in order to make DASH a widely used standard. First, DASH should be cleared of patent claims (if there are any), so it can be used in free software (e.g. for WebM). Second, specifications should be written for connecting DASH to HTML5, so adaptive streams can load in a video tag, for example simply through the @src attribute.
Content Protection: PIFF
Content protection is another hot topic for online video. Currently, HTML5 video provides no content protection mechanisms, while closed systems like Flash and Silverlight do. Again, there is no standard for this, forcing companies like Netflix to encode their content multiple times for various DRM systems. And since DRM (by nature) is hard to specify in an open format, standardization seems far away.
There is progress though, in the form of PIFF (Protected Interoperable File Format). PIFF is based upon the MP4 file format and specifies what encryption should be used and how it should be applied. The beauty of this format is that it solely focuses on a common encryption mechanism, while leaving the rights management part untouched. This allows content owners to encrypt and store their videos once, even when using multiple DRM systems. See this paper by John Simmons for a more in-depth explanation.
The separation of encryption and rights management also opens the door for scenarios in which only encryption (no rights management) is used. Such scenarios should appeal to both publishers (for basic protection or privacy reasons) and open browsers like Firefox and Chrome. Next step in this area is the investigation of a common decryption workflow, in either HTML5 or DASH.
More News Soon
In summary, the Second Web & TV Workshop was exciting and productive, but there is still work ahead. Recognizing this, the W3C started a Web+TV Interest Group, which will continue work on such things as investigating the legal state of MPEG DASH. Stay tuned for more updates down the road...
by Jeroen Wijering on 2011-02-09 12:00
Publishing a few on-demand videos can be cheap and simple: just upload the videos to your site and use a tool like the JW Player to embed them. Historically, publishing a live stream has been challenging and a lot more expensive. Most publishers use dedicated upload and streaming software for live streams, which can cost hundreds or even thousands of dollars and often requires high-cost server hardware. However, there are some cheap alternatives. This blog post explores a combination of tools that will let you get a live stream up and running for just a buck!
First, the server. There are various streaming servers out there: some carry a license fee (Flash Media Server, Wowza Media Server) and some are free (IIS Media Services, Flumotion). Either way, you'll need to buy, install and run a webserver, which is a lot of work and money.
Enter Amazon EC2. It's a service from Amazon that allows you to rent webservers by the hour (you also pay per GB transferred, but that’s just a few cents). The pre-built EC2 offerings include webservers that run Wowza Media Server 2.0. This means you can boot a webserver with Wowza, stream a live event, and terminate the webserver shortly afterwards. No monthly contracts, no server management.
Setting up a Wowza server takes about an hour the first time around, since you have to sign up for EC2 and 'Wowza for EC2' before you’re able to configure your server instance (which can be done using the ElasticFox Firefox plugin). When that's done, booting your server for a live event takes a few clicks. See the Getting Started section at the Wowza Media Server for EC2 page and make sure to follow all steps.
Note you won’t have to access the webserver itself. Instead, the default Wowza installation boots up ready to broadcast a live stream – you’ll just need to connect to it. Also, be sure that you open TCP port 80 (HTTP) and 1935 (RTMP) to any IP (0.0.0.0/0) when configuring the "security group" permissions. Finally, make sure you terminate a server after you’ve finished your live event - the meter keeps ticking regardless of whether or not you’re using the box.
Boot a server instance and wait until ElasticFox says it is "Running". You're now ready to start the stream!
On to the tools for uploading the live stream. There are the expensive tools (Inlet, Telestream), there are the hard-to-use tools (VLC, FFmpeg), and then there's Flash Media Live Encoder from Adobe. It is free and fairly easy to use. It is intended for use with the Flash Media Server, but works equally well with the Wowza Media Server. Download the tool (available for Windows and Mac) to get started.
Once inside the tool, you’ll need to select a video capture device. This can be a built-in webcam, but you can also use a professional camera connected to your computer using USB or Firewire. On Windows, you can even stream your desktop after installing the VH Screen Capture driver. We recommend using the following settings:
Press the green "Start" button and your stream is up and running. You're now broadcasting!
The last step is embedding a player on your site where viewers can watch your live event. The JW Player is an excellent (free for noncommercial use) option. Download the JW Player and upload the "jwplayer.js" and "player.swf" files to your website. You can now use the following code template to embed the live stream into your page:
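A live-stream embed along these lines should work. This sketch assumes the JW Player 5 JavaScript API; the EC2 hostname, stream name, and dimensions are placeholders you must replace with your own values.

```html
<script type="text/javascript" src="jwplayer.js"></script>
<div id="container">Loading the player...</div>
<script type="text/javascript">
  jwplayer("container").setup({
    flashplayer: "player.swf",
    provider: "rtmp",
    // Wowza's default live application is named "live"; replace the hostname
    // with the public DNS name of your EC2 instance.
    streamer: "rtmp://ec2-00-00-00-00.compute-1.amazonaws.com/live",
    // The stream name you entered in Flash Media Live Encoder.
    file: "mystream",
    width: 640,
    height: 360
  });
</script>
```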
Make sure to update all the options in this code with your own configuration, including the streaming server address (your EC2 hostname), the stream name, and the player dimensions.
Navigate to the web page where you’ve inserted the embed code and click on the player to watch the stream. You're now live!
This tutorial has given you some hands-on tips for setting up a live stream in an easy and affordable way. Let us know if you run into issues, and we highly recommend following the above instructions (especially the server setup and configuration) to the letter!
Once you have your server up and running, it’s pretty simple to start adding other features, including:
Also, feel free to post a comment below if you'd like to see any of these (or others) explored in a future blog post.
by Jeroen Wijering on 2011-02-03 11:27
The Google Chrome team recently announced it would drop support for the H.264 video codec. Dropping H264 is beneficial for Google in several ways: it may help Google's WebM format gain additional traction in the market, and it solidifies Google's stance as a supporter of open media formats in the WebM versus H264 debate, even though most of Google's other properties (including YouTube) still support H264.
Shortly after the announcement, a truckload of blog posts popped up, explaining the impact this would have on the adoption of WebM over H264. A couple interesting reads:
In spite of all the discussion around this announcement, most commentators seem to gloss over its practical irrelevance. There's a short, simple reason for this.
Suppose Internet Explorer 9 ships tomorrow and in the middle of the night, the IE team abandons H264 and ships the browser with WebM instead. Next, suppose every single Internet Explorer installation out there is instantly updated to v9, making WebM support widespread.
Nothing would change. Why? Because all video watched on the desktop is played through Flash, and Flash isn't going away any time soon.
Publishers currently cannot move from Flash to HTML5, because HTML5 lacks vital technologies like adaptive streaming (for long-form / live content), content protection (for premium content) and playback locking (for advertising). On top of that, today's entire online video ecosystem (ingestion, transcoding, advertising, analytics, viral sharing, etc.) is Flash based. Both obstacles will be overcome in time, but this will be a slow process of incremental technological advances.
To force a transition, some bloggers have suggested Chrome should entirely drop support for Flash. This definitely won't happen. Flash is absolutely vital to the web. In addition to video, there are applications like advertising (a $25B industry) and gaming (Farmville!) that fully depend on it. Any browser dropping Flash would itself be instantly dropped by publishers and users in turn.
In summary, desktop browsers are stuck with Flash, and publishers will simply continue to use it. As the migration to HTML5 starts to happen, publishers will leverage Flash in browsers that do not support their video format of choice (be it H264 or WebM). Video platforms like Bits on the Run or Brightcove and video players like the JW Player offer such functionality today.
As it pertains to WebM/H264, desktop browsers will not move the needle either way. But something else will.
Devices (phones, tablets, settops) do not have a history of supporting Flash and many will choose not to (as Apple has done for iOS). On devices where Flash is supported, CPU limitations will make it impossible to play video using software-based decoding. This means that even Flash will be limited by whatever video codecs the devices support in hardware. In other words: Flash cannot be used as a fallback for unsupported codecs as it is today for desktop browsers.
The choices device vendors (hardware + software) make will have the greatest impact on the adoption of WebM. Publishers will be forced to choose between publishing their videos in whichever formats are natively supported on the most popular devices and dropping support for certain platforms. While it's still possible to distribute your content without worrying too much about the discrepancies between platforms, the incredible growth of the phone, tablet and settop market will soon take that option off the table.
That said, the odds are against WebM for now. H264 is available on nearly every phone, tablet and settop out there, while WebM isn't yet available on any device. Only after the launch of WebM hardware decoding can we expect announcements that will influence the uptake of WebM versus H264. Who will support WebM decoding? How good will it be (performance, streaming, protection) compared to H264? And who (besides Google) will dare to drop H264 decoding support?
Only after the various device vendors have picked their side (and users have picked their devices!), can we re-evaluate. Until then, announcements like the one made by the Chrome team will only have symbolic value.