March 31, 2016

Three Things Worth Knowing About Video Compression

Video is fast becoming a linchpin in today’s communication mix, and it is easy to get bogged down when trying to understand how streaming video works, given the numerous compression standards and the plethora of devices and software involved. Recent posts reviewed the video compression contestants in the market, their solutions and their patent strategies. In this post I’ll highlight three things worth knowing about video streaming and compression that further illuminate the technology and its use.

The first fact explains why the current compression standard H.264 might enjoy a longer life than many may think.

The second fact helps us understand how to identify what type of video compression is being used.

The third fact highlights the all-important role of the browser for mobile and desktop video viewing and untangles its relation to the different compression standards.

Cartoon showing video compression ecosystem: record, encode, transmit, decode, playback


Squeezing Legacy H.264 Until the Pips Squeak

Upgrading from H.264 to a next-gen codec technology such as H.265 or VP9 certainly frees up bandwidth and memory for transmitting or storing video, but it is a gamble for players in the ecosystem because it demands substantial investment in new equipment without knowing in advance which of the next-gen standards will prevail and when they might disrupt the current mainstream standard. Instead, many prefer to squeeze more life out of the popular H.264 codec. That’s possible because any video codec essentially only specifies the syntax of the (compressed) video stream, and not the method used to encode and decode it.

Thus a number of technology companies provide enhancements to existing H.264 encoders that further reduce the bit rate without any perceptible quality degradation. Companies offering solutions here include Beamr, Faroudja and EuclidIQ, amongst others.

Understanding Containers and Codecs

To play back compressed or uncompressed video on your PC or device, many of you will probably have used files with extensions such as .avi (Audio Video Interleave), .wmv (Windows Media Video), .flv or .swf (Adobe Flash), .mov (Apple QuickTime), .webm (Google WebM) or .mp4 (MPEG–4 Part 14).

So where does the codec fit into this scheme of things? The file extensions mentioned above denote so-called container formats that allow a combination of audio, video, subtitles and still images to be held in one single file. Video or audio in such a file container may be uncompressed or compressed, and a single container may support multiple video (and audio) compression standards. It just depends on the container. Indeed, there are hundreds of container/codec combinations. But the predominant ones are the MP4 container, which supports the H.264 video codec in combination with the AAC and MP3 audio codecs, and Google’s WebM container, which supports its VP8 and VP9 video codecs in combination with the Vorbis and Opus audio codecs.
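As a rough illustration of the container/codec split, the short Python sketch below guesses a file’s container from its first few bytes (its “magic numbers”); which codecs sit inside still has to be read from the container’s own metadata. It is a sketch only, not a full parser, and the file name is just a placeholder.

    # Guess a video *container* from its leading bytes (magic numbers).
    # The codecs inside (H.264, VP9, AAC, Vorbis...) are declared in the
    # container's own metadata, not in these signature bytes.
    def sniff_container(path):
        with open(path, "rb") as f:
            head = f.read(12)
        if head[4:8] == b"ftyp":                      # MP4 / QuickTime family
            return "MP4/MOV (ISO base media file)"
        if head[:4] == b"\x1a\x45\xdf\xa3":           # EBML header
            return "WebM/Matroska"
        if head[:4] == b"RIFF" and head[8:12] == b"AVI ":
            return "AVI"
        if head[:3] == b"FLV":
            return "Flash Video"
        return "unknown container"

    print(sniff_container("sample.mp4"))              # placeholder file name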

Browsers and Video Codec Support

Web pages viewed in browsers use the HyperText Markup Language (HTML) to render content. In the past it was necessary to add a browser plug-in to view video within a browser. An example is Adobe’s Flash, a video plug-in supported by the majority of browsers. Back in 2007 Adobe licensed the H.264 codec, making video playback free for all PC and notebook users whose browsers supported adding the Flash plug-in.

Today however, HTML5 allows video to be embedded directly using the <video> tag, eliminating the need for third-party plug-ins like Flash (Adobe) or Silverlight (Microsoft). Thus video codec support now depends on the browser used and on the underlying arrangements that the browser, operating system and chip decoder vendors have with the codec creator.

The table below shows the state of browser codec support in late 2015.

Table showing video codec support by browser: Chrome, IE, Edge, Firefox, Safari, Opera


It’s a patchy support landscape to date and a fast-moving target too. The website Can I use is an excellent source for a quick check on the latest support because it provides up-to-date tables of supported front-end web technologies for desktop and mobile browsers.

Google’s Chrome might seem like a good choice in terms of video support today; however, industry insiders claim that Microsoft’s latest Edge browser is due to support HEVC (H.265) soon too.

As the codec wars rage, consumers are faced with a murky picture of what works where in terms of online video - a consistent source of frustration and the price we pay for getting it all for free.

March 3, 2016

Video Markets for Appliances, Desktop and Mobile (Part III)

Part II of “Video Rules - Codecs Engage” reviewed the next-generation video codec scenario and the prospects of its contestants and their solutions, namely the industry alliances MPEG LA, HEVC Advance and the Alliance for Open Media (Google, Cisco, Amazon, Netflix, Intel, Microsoft and Mozilla). Here in Part III, video market segmentation takes center stage, affording a clearer picture of how the ecosystem is changing and which sectors offer future rewards.

Next-gen video codecs such as HEVC and VP9 further cut the bandwidth required for transmission and storage by half without perceivable loss of video quality which is why using them makes good technical and business sense. Employing them means that recorded footage needs to be encoded, stored, transmitted, and decoded on the device at the receiving end.

The many players in this video transmission ecosystem pay careful attention, when choosing a codec, to the number of video consumers they will ultimately reach at the receiving end. The more, the better. The codec that best manages to permeate the ecosystem is most likely to be a winner.

Mobile video is expected to grow by an average rate of  60 % each year between 2015 and 2020 according to Cisco
The mobile video explosion
Data source: Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2015–2020

Segmenting the end-user market is helpful before placing your bets. Inspecting each of the following three market silos delivers useful data for any prediction:
  1. Appliance segment: TVs, set-top boxes (STB) for terrestrial, cable and satellite broadcasts
  2. Desktop segment: PCs, notebooks
  3. Mobile segment: smartphones, tablets, media players
Pegging each segment’s size is no easy task but let’s give it a try using data available from reliable sources combined with a good dose of common sense. We’ll use the metric “bytes” to gauge size as it’s easiest for comparative purposes.

For 1 (appliances) I’ll venture to make a broad simplification to arrive at a ballpark figure: assuming that just under half of the world's population (let's say 3 billion TV consumers) watch one hour of TV a day on average (some watch several hours, some don’t/won't/can’t at all, whilst a few may come close to 24/7 behaviour) and assuming all video content is H.264-coded at 1280 x 720 resolution running 25 frames per second:

60 minutes/day x 150 Mbytes/minute x 3 billion people = 27 EB per day

1 exabyte (EB) is 10^18 bytes or 1 000 000 000 000 000 000 bytes

(Please refer to a recent post on the average file size of 10 seconds of video)

Of course not all TV that is broadcast is encoded in H.264 as assumed in the above calculation. Mostly it’s HDTV that uses H.264 (see Wikipedia’s list of video services using H.264). For the sake of arriving at meaningful comparative data, I've deliberately used this simplification.

For 2 (desktop), using data provided by Cisco in their February 2015 Visual Networking Index Forecast, online video that is downloaded or streamed for viewing on a PC screen was forecast to account for approximately 23 EB of monthly Internet traffic in 2015, leading to roughly 0.766 EB of PC/notebook video per day.

For 3 (mobile), using data provided by Cisco in their February 2016 VNI Global Mobile Data Traffic Forecast, video accounted for 55 % of the 3.7 EB of total monthly mobile data traffic in 2015, which approximates to 0.068 EB per day for all mobile video traffic.
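For those who want to check the arithmetic, here is a minimal Python sketch reproducing these three back-of-envelope figures (the flat 30-day month is my own simplification):

    # Back-of-envelope daily video traffic per segment in 2015, in exabytes
    EB = 10**18                                    # 1 exabyte in bytes

    # 1. Appliances: 3 billion viewers x 1 hour/day of 720p H.264 video
    appliances = 60 * 150e6 * 3e9 / EB             # 60 min x 150 MB/min x 3e9 people
    # 2. Desktop: ~23 EB of PC video per month (Cisco VNI), spread over 30 days
    desktop = 23 / 30
    # 3. Mobile: video was 55 % of 3.7 EB of monthly mobile traffic (Cisco VNI)
    mobile = 0.55 * 3.7 / 30

    for name, eb_per_day in [("Appliances", appliances), ("Desktop", desktop), ("Mobile", mobile)]:
        share = 100 * eb_per_day / appliances
        print(f"{name:10s} {eb_per_day:6.3f} EB/day  ({share:5.1f} % of appliances)")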

Here’s the corresponding table for daily video traffic in 2015:

Appliances Desktop Mobile
27 EB 0.766 EB 0.068 EB
100 % 2.8 % 0.3 %

It’s apparent that both desktop and mobile video are still dwarfed by video broadcast to appliances, but keep in mind, markets for video are in a disruptive mode, as the younger generation spend more time watching short, on-demand video clips delivered to their smartphones instead of viewing prime-time TV shows. Cisco’s data validates this behaviour in that video consumed on mobile devices is set for compound average growth rates of almost 60 % per year for the period 2015 - 2020, the fastest growth rate of all. In summary, mobile video will be gobbling up market share of broadcast and PC video at a fast pace.

Let’s look at these market segments regarding the codecs used.

Appliances: TVs, Set-Top Boxes (STB) and Other Nifty Devices

In terms of size and revenue, this is by far the most relevant of the three segments. You could call it the professional video segment; it is in transition from broadcasting video content over-the-air (OTA) or via pay-TV video-on-demand (VoD) to the newer variant of streaming professionally recorded media over-the-top (OTT) of wired or wireless data connections. It hinges on interoperability, legacy equipment and fallback modes. H.264 is the established and well understood standard supported by most appliances. But the industry is in flux and looking to better codecs such as HEVC or VP9 for achieving lower transmission bit rates or supporting ultra-high-definition (UHD) devices.

In this segment HEVC (H.265), the follow-up standard to AVC (H.264), is currently used to compress some of the best and most popular studio-grade 4K streaming media; it is supported by most 4K-UHD-enabled appliances and used for professionally recorded 4K content streamed by providers such as Netflix and Amazon Prime. Some video experts go so far as to say that HEVC/H.265 has already won the battle against Google’s VP9, simply because AVC/H.264 has worked well for all industry participants in the past despite the licensing costs involved.

In other words, why risk turning your back on a proven business model?

Next to the cost issue (remember Google’s VPx compression is free), companies involved with creating and selling media products and services are willing to pay licensing fees

(i) if licensing is a simple procedure that also indemnifies them from possible future “submarine patent” claims as outlined in a previous blog

(ii) if the fees are reasonable in relation to generated revenues through products/services

H.264 fulfilled these requirements to a large extent: acceptable licensing terms and one company, MPEG LA, handling the complete licensing process.

H.265 appears to have muddied the pond in both respects: now it’s two parties - MPEG LA and HEVC Advance - with unclear patent lists, and hair-raising licensing costs regarding the latter’s terms that also add never-seen-before royalties on HEVC-encoded content itself.

Good reasons to consider alternatives, right?

One alternative for professional media is to continue using H.264 for as long as possible. This is even attractive in terms of compression rates, as codec standards don’t specify the method used to encode and decode their compressed streams, only the syntax used. As such, there are many initiatives to extend H.264’s life cycle with better compression algorithms “inside”.

The other is to wait until the competitive, open-source codec from the consortium called the Alliance for Open Media (Google, Amazon, Cisco, Microsoft, Mozilla, Netflix and Intel) becomes available some time in early 2017.

Desktop and Mobile Video Market Segments

In contrast to the appliances segment, desktop and mobile additionally support user-generated content. As mentioned previously, this segment is exploding in size as amateurs create their own videos that mostly run under a few minutes, are uploaded to public servers such as Google’s YouTube and are consumed online by millions.

Desktop and mobile video users access compressed video on their devices either
  • through a native application like a “media player” on a PC or an “app” on a smartphone, or
  • through their browser (or a third-party plug-in for it).
Two “native application” examples are VideoLAN’s VLC player for the Windows or OS X operating systems, and YouTube’s app for smartphones running either the iOS (Apple) or Android (Google) operating system. Licensing costs for the codec are picked up by the creator of the app in most cases.

For codecs used within the browser, the browser vendor frequently pays the licensing costs. In some cases the browser relies on decoding support by the operating system (OS), thereby relegating the licensing cost to the OS vendor, or even one level deeper, to the hardware decoder - a functional block within a chip - found in the device itself which improves performance and mitigates battery drain.

If you compare the browser and operating system manufacturers with the members of the Alliance for Open Media, it isn’t hard to guess which next-gen codec is likely to take on the lead role in this particular segment.

In addition Google’s current VP9 codec is already being used for 4K video streaming on YouTube. Moreover, it’s also supported by a wide range of major TV makers like LG, Sony, Samsung, Panasonic, Toshiba, Philips and even GPU/processor makers like Intel and Nvidia. Google is continuing to bet on the principles of open, community-developed technologies and their speed of implementation to drive adoption in the hope of out-engineering the competition at some future point.

The writing seems to be on the wall as to which codec to back for future mobile and desktop markets. It’s a once-in-a-lifetime chance to displace the H.26x incumbent in the long run in the professional segment too. Yet there might be unexpected twists down the road if essential patent holders block licensing initiatives or unexpectedly decide to change sides.

Stay in touch with mobile video market updates using wi360’s Event Calendar where you’ll find current conferences, expos, webinars and workshops that track streaming video listed under the category Multimedia.

November 27, 2015

4K Is Not UHD

In a recent post on high-resolution screens for smartphones I used the term 4K to describe the resolution of mobile screens with 3840 x 2160 pixels and was quickly and rightly reprimanded by an observant reader paying attention to detail. Indeed, I had fallen prey to marketing lingo. Strictly speaking, 4K is a standard defined by the industry consortium DCI (Digital Cinema Initiatives) that specifies a resolution of 4096 x 2160 pixels used in the production and projection of movies. That’s slightly more than Ultra High Definition’s 3840 x 2160 pixels. UHD also defines double that basic resolution at 7680 x 4320 pixels. Both UHD variants share a common 16:9 aspect ratio, and 4K UHD and 8K UHD are the terms used to distinguish between the two.

Marketers from TV manufacturers often substitute 4K UHD with the punchier 4K, opening terrain for confusion. True 4K has a slightly different aspect ratio of 256:135, which equates to 16:8.44. In other words, watching a 4K movie on a UHD TV means either having narrow black bars at the top and bottom if you squeeze a full 4K frame onto the UHD screen, or losing small parts of the left and right edges of each frame if the 4K film is to fill the complete UHD screen. Most of us are familiar with this behaviour from previous formats. No big deal.

4K versus UHD

4K            4K UHD        8K UHD
4096 x 2160   3840 x 2160   7680 x 4320
16 : 8.44     16 : 9        16 : 9
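To see where those narrow black bars come from, scale a full 4K frame down to the UHD screen’s width and compare heights; the small Python sketch below does just that:

    # Fitting a full DCI 4K frame (4096 x 2160) onto a 4K UHD panel (3840 x 2160)
    dci_w, dci_h = 4096, 2160
    uhd_w, uhd_h = 3840, 2160

    scale = uhd_w / dci_w                 # shrink the 4K frame to the UHD width
    scaled_h = dci_h * scale              # picture height after scaling
    bar = (uhd_h - scaled_h) / 2          # black bar at top and at bottom

    print(f"Scaled picture height: {scaled_h:.0f} px")   # ~2025 px
    print(f"Black bar top/bottom:  {bar:.0f} px each")   # ~68 px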

Megapixel shots that make UHD sense


A further common fallacy in the multimedia industry is comparing pixels from cameras with those of screens.

Pixels describing a camera’s resolution are counted differently from those specifying a display’s resolution.


Counting pixels - the difference between cameras and displays


In the display world a single pixel consists of three separate RGB (red, green, blue) light sources. In other words, a single pixel on a screen can represent any given colour. Contrast this with the world of cameras, or more accurately image sensors. Each pixel in an image sensor captures only one of the three RGB components that represent a colour. But it’s not as simple as dividing the megapixel count of a camera by three to arrive at the best display resolution. In short, cameras use a variation of the RGB model that takes the peculiarities of the human eye into consideration and gives green twice as many detectors as red and blue to achieve better luminance resolution. The 1:2:1 RGB pixel ratio of image sensors means you need to divide the camera’s megapixel number by four when figuring out the best screen resolution to display a photo.

For example, take a 4K UHD display resolution with its count of 3840 x 2160 = 8.3 megapixels. Multiplying that number by four means a 33.2-megapixel camera starts making sense when viewing captured shots on a 4K UHD TV.
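The same rule of thumb as a two-line calculation, with the factor of four coming from the 1:2:1 (R:G:B) sensor pixel ratio described above:

    # Rough camera resolution needed to do a given display justice
    display_px = 3840 * 2160                 # 4K UHD: ~8.3 million display pixels
    sensor_px = display_px * 4               # 1 R + 2 G + 1 B sensor pixels per display pixel

    print(f"Display pixels: {display_px / 1e6:.1f} MP")   # 8.3 MP
    print(f"Camera pixels:  {sensor_px / 1e6:.1f} MP")    # 33.2 MP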

In the past camera and smartphone vendors instigated a megapixel race to lure customers into buying their latest high-resolution devices. This marathon seemed nonsensical in a small-screen or High-Definition (HD) world. Ultimately many other parameters govern the overall quality of a snapshot, like the physical size of each “pixel” in an image sensor to name but one.

As more and more 4K UHD TVs find their way onto retailers’ shelves, the megapixel madness in smartphones and digital cameras of previous years begins to make sense. Favouring a smartphone with more than 16 camera megapixels is a serious consideration for hi-res aficionados on the way to their next phone replacement spree.

October 30, 2015

Video Rules - Codecs Engage (Part II)

New video codecs are mushrooming of late to address the enormous market opportunity. The focus of my last blog was on long codec development cycles and the resulting labyrinth of patents from technology providers. Let’s look at the current state of affairs in this market now.
 
To date the video codec H.264, also known as AVC (Advanced Video Coding), has dominated the industry in all market segments. It’s a standard based on research from many technology providers - large and small companies as well as academia - whose intellectual property (IP) is pooled and licensed by MPEG LA.

Next-generation video codecs such as the follow-up standard H.265 from the standards organisations ITU and ISO/IEC, also known as HEVC (High Efficiency Video Coding), or competing proprietary codecs such as Google’s VP9,
  • further cut the bandwidth by half without perceivable loss of video quality, and
  • support higher resolutions such as 4K video (refer to a recent 4K blog post here)
compared to their predecessors.

These new codecs are based on even better algorithms that run on faster processors due to the evolution of semiconductor technology over the last decade. They are creating a major buzz in industry today based on their compression efficiency, and are destined to replace their forefathers sooner or later.

Amazon, Cisco, Intel, Microsoft, Mozilla, Netflix Join Forces with Google

This year (2015) two further entities emerged on the video codec scene who will most certainly shape the future in one way or another:
  1. HEVC Advance is backed by several companies not part of the MPEG LA patent pool. The organisation may be viewed as an independent entity with a further pool of 500 patents essential to HEVC, a similar number to MPEG LA's pool of different essential patents. Many HEVC patents are still in the process of being granted and it is likely that several thousand will eventually cover the full standard. As things stand right now, products employing HEVC will have to pay royalties to both MPEG LA and HEVC Advance. Generally speaking, the more patents covered by both entities, the better, because companies planning to use HEVC in their products then face fewer "unknowns" posed by individual patent holders outside these two pools who might raise their heads at a later time with royalty claims. So long as the total license fees remain in a reasonable bracket, the HEVC codec remains a serious contender.
  2. The Alliance for Open Media came into being in September 2015, backed by heavyweights such as Google, Cisco, Amazon, Netflix, Intel, Microsoft and Mozilla. They aim to combine their collective expertise and technologies in order to provide a future world-class, royalty-free codec. Note the word future here. It means that the designers of the Daala, Thor and VP8/VP9 codecs, from Mozilla, Cisco and Google respectively, are joining forces to create a codec that is open-source and free to use. Ultimately it will replace Google's prospective VP10. This approach is appealing in that it removes the fear of going with a single behemoth and its proprietary technology.
Google, Cisco, Mozilla attempt to disrupt MPEG LA


As of today, HEVC and VP9 remain the contending codecs, yet the upcoming battle has become more pronounced, further exposing the market's fault lines. Google is increasing its firepower by shoring up support from other giants in the Alliance for Open Media, whereas the additional patent pool for HEVC shows that its full license fees are far from settled. The video codec market is sizzling as its players scramble to find their position.

About Codec Quality, Cost and Player Strategy

Three key factors govern the potential future success of any particular codec, namely the Quality of the codec, the Cost to use it, and the Strategy employed by its backers. How do the next-gen codecs VP9 and HEVC stack up in these three areas?

Quality: How does the video quality compare at the same bit rate and screen resolution?

Both Google’s VP9 and HEVC are on a level playing field here. Refer to detailed results in Jan Ozer's article in Streaming Media which demonstrates that VP9 is on par with HEVC/H.265 just as VP8 was with AVC/H.264.

Cost: How does the cost compare for using the codec either (i) in a hard- or software product, or (ii) for transmitting video content?

VP9 is an open-source codec and thus completely free-to-use for both cases.
For HEVC, in case (i), MPEG LA charges a royalty of $0.20 for every unit sold that exceeds the 100,000 "free" limit, up to an "all-you-can-eat" cap of $25 million per year. Note that the latter cap is company-based, not product-based. As for (ii), MPEG LA currently does not charge a license fee for HEVC-coded content. This contrasts with their policy for the predecessor H.264 (AVC). Most likely MPEG LA will revise content-transmission terms once HEVC has become widely adopted, as their licensing terms are subject to change every five years.
In addition to MPEG LA, the newly formed HEVC Advance surprised everyone this year by not only wanting to charge far higher unit royalties than MPEG LA but also insisting on fees for HEVC-encoded content from service providers, amounting to a 0.5 % cut of their attributable revenue (the percentage of HEVC video they deliver). This seems to have sent shock waves through the industry, possibly leading to the formation of the Alliance for Open Media in a pre-emptive strike. It now appears that HEVC Advance is backpedalling on its license fee structure as a result.
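To get a feel for what these terms mean in practice, here is a toy annual-royalty calculation using the figures quoted above; the unit volume and content revenue are invented for illustration, and real licensing terms are more nuanced and keep changing:

    # Toy illustration of the two HEVC fee models described above (2015 terms as reported);
    # the volumes below are invented for illustration only
    units_sold = 50_000_000            # hypothetical HEVC-capable devices shipped per year
    content_revenue = 1_000_000_000    # hypothetical annual revenue attributable to HEVC content

    # MPEG LA: $0.20 per unit beyond the first 100,000 units, capped at $25 million per year
    mpeg_la_royalty = min(max(units_sold - 100_000, 0) * 0.20, 25_000_000)

    # HEVC Advance (terms as initially announced): on top of higher per-unit device fees,
    # a 0.5 % cut of the revenue attributable to delivered HEVC content
    content_fee = 0.005 * content_revenue

    print(f"MPEG LA device royalty:   ${mpeg_la_royalty:,.0f}")   # $9,980,000
    print(f"HEVC Advance content fee: ${content_fee:,.0f}")       # $5,000,000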

Strategy: What are market players doing to ensure the future success of any codec?

This is the fuzziest of the three because it covers what companies and organizations who either hold essential patents or are key providers of video content are doing “behind the scenes” in their attempts to monopolize markets or secure and grow their content delivery revenues.

On the IP side, a simple case helps illustrate the point: many of Apple's hardware products support H.264 video encode and decode. Apple holds several patents for H.264 too, as can be seen at MPEG LA. As such they are both a licensor and a licensee. As a licensee it is obviously in the company's interest to keep the cap in a region which can easily be surpassed by all the H.264-supporting products they sell. The $25 million cap means the company needs to sell at least 100 million iPhones, iPads, Macs... . Considering that Apple will probably sell some 200 million iPhones alone in 2015, that's not only easy but also cheap for them.

On the flip side, as a patent holder Apple also benefits from the H.264 royalty stream collected by MPEG LA, its share most likely depending on the size of its patent portfolio attributable to the standard. This share is often distorted by cross-licensing or bilateral agreements with other patent providers within or outside of the pool, read “I’ll make my IP available to you at no charge if you give me your IP for free". In the end, all of this may not be of particular relevance to a behemoth such as Apple, but it has huge implications for smaller IP players in the H.264 patent pool, depending on their patent portfolios or market participation with video products or content. This complex interplay of factors was probably one of the drivers for HEVC Advance to pop up, wishing to protect the interests of patent holders with different wish lists and priorities than those in the MPEG LA fray.

Beware of Submarine Patents

Instinct tells us that free is the best way to go. But most vendors and service providers hesitate to put their long-term codec strategy in the hands of a proprietary standard, be it from a single firm such as Google or an alliance of like-minded, complementary and dominant players such as those found in the Alliance for Open Media. Furthermore, royalty-free codecs are often susceptible to so-called submarine patents: holders of patents used in a royalty-free codec suddenly surface to assert license fees for the use of their technology, creating unexpected costs. A historic case is Microsoft’s VC–1 royalty-free video codec, which failed to be a codec game changer, not least due to subsequent royalty-incurring patent claims by other tech companies.

Conversely, the collaborative effort of a greater number of assorted companies, be they small or large, complementary or competitive, who merge their efforts into a single standard may provide more comfort over the long term, even if there are licensing costs involved.

In wi360's next blog post, video codec market segments will be analyzed to provide a clearer picture of what’s at stake and the most-likely winners and losers in this game.

You can find upcoming events such as conferences, expos and webinars covering video, broadcasting, streaming and multimedia in wi360's Mobile & Wireless Event Guide

October 1, 2015

The Big Video Spill (Part I)

By the time we reach 2019, almost 90 % of all traffic on global networks will be video.

That’s what the networking giant Cisco predicts in their ongoing Visual Networking Index (VNI) forecast. To give some sense of scale as to why this is not off the mark, check out the table below, which compares the file sizes of different media types when spending some 10 seconds previewing content on the web, as many of us do.

Media Type                            File size of 10-second preview    Storage factor
Text (web page)                       ∼ 20 kB                           1
Audio (MP3 - 192 kbps)                ∼ 2 MB                            100
Photo (8 MP / JPEG)                   ∼ 2.7 MB                          135
Video (1280 x 720 - H.264 / 25 fps)   ∼ 25 MB *                         1250

* Typical file size of compressed video, depending largely on the amount of movement, alongside other variables such as image resolution, frame rate, color depth etc.

Video consumes over 1000 times more bandwidth and storage space than a simple text page. Take note that the table entry shows compressed video, not the raw stream, which would equate to around 690 MB.

  Raw HDTV (720p) digital video bit rate (simplified):

     25 fps x 1280 x 720 pixels x 8 bits/pixel x 3 colors ∼ 553 Mbit/s
     10-second raw video file size ∼ 690 MB
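The same arithmetic in a few lines of Python, including the resulting compression factor for this particular (rather generous) 25 MB example:

    # Raw (uncompressed) 720p video versus the ~25 MB H.264 figure from the table
    fps, width, height = 25, 1280, 720
    bits_per_pixel = 8 * 3                        # 8 bits per colour x RGB

    raw_bitrate = fps * width * height * bits_per_pixel    # bits per second
    raw_10s_mb = raw_bitrate * 10 / 8 / 1e6                # megabytes for a 10-second clip
    compressed_10s_mb = 25                                 # MB, from the table above

    print(f"Raw bit rate:       {raw_bitrate / 1e6:.0f} Mbit/s")         # ~553 Mbit/s
    print(f"Raw 10-second clip: {raw_10s_mb:.0f} MB")                    # ~690 MB
    print(f"Compression factor: {raw_10s_mb / compressed_10s_mb:.0f}x")  # ~28x here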

Compressing video without observable degradation is big in many ways. It takes great algorithms on extraordinary processors to achieve results that hugely impact networks and the economics of providing rich media content.

What’s more, of our five senses, vision is for most of us our predominant faculty. Researchers estimate that our sense of sight provides approximately 80% of all the information we receive about our world.

Cartoon: Video is swamping cellular and other networks

Both fixed and mobile networks are gearing up to satisfy our increasing visual appetite on consumer devices. Video codecs are a paramount element in the network ecosystem in that they cut bandwidth and storage requirements, allowing more streams to fit into a transmission channel. They essentially analyse the video at the source, strip out redundant information and compress the footage to the limit. After transmission, the compressed stream is decoded at its destination to deliver a near-perfect version of the original material.

Creating Video Codecs

What sounds simple is in fact a lengthy and intricate process with development cycles spanning years. The evolution of video codec versions (often standards) may even be measured in decades. Research and development in industry and academia have culminated in thousands of patents, staggered in time, each having a protection lifespan of 20 years, and collectively bringing codecs to life that compress video streams by factors of over 100.

Credit Where Credit is Due

Naturally companies and institutions that spent years engineering these feats wish to participate in the commercial success of their technology. Who wouldn’t be prepared to give a small cut of their incremental return for a sure-fire product that optimizes their resource usage by a multiple? Yet such rationale counts little in the power grab for a share in this multi-billion-dollar market.

Video Codecs for Free?

A popular way of gaining market share is to possess a great technology and then give it away for free. Consumers love this and hardly question the technology creator’s business model and how revenues flow from a free-for-use product.

Historically the video industry has relied on patent pools for creating and distributing income from proprietary or standards-based codec usage. In essence, a patent pool aggregates all contributing patents into a single portfolio in an attempt to make the licensing process simpler for the hard- or software equipment manufacturer, network operator or content service provider who wishes to use the codec. Given that there are often between 500 and 1000 patents comprising a video codec, paying royalties to a single “patent pool” company removes the administrative nightmare. This has worked well in the past, in particular in the professional video and broadcast industry. It has led to a dominant video codec standard called H.264, managed by the patent pool company MPEG LA.

Next-generation video codec providers are keen to get their foot in the door by storming the royalty-based codec fortress. They claim that licensing fees stifle innovation and the uptake of superior technology, as well as propping up the largely private party of essential patent holders contributing to the codec standard. They may not be scaring the pants off such incumbents, but their attempts could bear fruit.

Stay tuned for more on how they are engaging in this battle.

You can find upcoming events such as conferences, expos and webinars covering video, broadcasting, streaming and multimedia in wi360's Mobile & Wireless Event Guide

June 26, 2015

Five Simple Steps To Net Neutrality

The two words network neutrality evoke a swath of opinions, and they mean different things to different people. “Treat all data equally” is the baseline slogan. None of us really want operators like wireless carriers, cable companies or Internet service providers (ISPs) who control the network to be able to throttle speeds, block certain content or cap data by type and prioritize those services which maximise their own revenues. We want to protect our open Internet as best we can. Conversely, looking to the future, nobody wants life-critical data to share equal priority with, say, entertainment - the idea of wirelessly delivered collision-avoidance data in your car competing with someone’s Netflix movie stream simply isn’t tenable.

How To Treat All Things Equal

There’s no easy way out of this dilemma as it implies bringing in governments, regulators and their bureaucrats. Many view this as a death-knell for any fast-paced technology, citing the unregulated Internet as a prime example of how well things work when regulators and lawyers stay out.
But, to be frank, either you
  • ensure a network has unlimited bandwidth
or
  • prioritise content by type (text, images, audio, video, and in particular, time-sensitive data controlling critical applications)
Scenario 1 smacks of an infrastructure utopia that defies physics, given that wireless access frames our communications future and the spectrum it requires is indeed a limited resource.
Scenario 2 discriminates content by type and thus stokes the fire on net neutrality as we move forward into a commerce and entertainment future that depends on video and high-bandwidth services to satisfy consumers’ appetites.

Net Neutrality Violation In Europe

In a recent survey, market researchers Rewheel compared the pricing of mobile data plans for 4G networks in the 28 member states of the European Union in their Digital Fuel Monitor. The firm meticulously disentangled the different subscriber plans by operator and country and found that some users pay up to 100 times more per Gigabyte of data than more fortunate neighbours in other European countries.

Zero rating: How many Gigabytes Euro 35 gets you on 4G networks in Europe

Net Neutrality? Zero rating accounts for data discrimination on Europe's 4G networks

So much for network neutrality and all the talk about a digital single market! The scoundrel wreaking this pricing havoc was identified as zero rating, a practice in which mobile operators provide subscriber contracts that bundle their preferred services with unlimited usage (from over-the-top - OTT - providers under contract) yet cap the accompanying Internet data plan in the low Gigabyte range. For example, a subscriber is lured to choose an operator offering unlimited (read zero-rated) text/audio/photo/video messaging for an application such as WhatsApp and gets a measly bucket of 1 GByte for general Internet usage bundled in their plan. Substitute WhatsApp for any other messaging, social media, music streaming, file sharing or cloud storage service if you like. Analysis revealed that countries allowing zero rating accordingly have far more costly mobile Internet offerings.

Enter Legislation

With the increasing importance of the communications infrastructure to the economy of countries, it’s hard to imagine that a future Internet as delivered through fixed and mobile networks will work without government intervention. Recent developments in both Europe and the USA prove this. Their stance is vague at best, leaves a lot open for interpretation, and ultimately stalls progress. Two recent cases point to such uncertainty.

The European Union initially included strong safeguards for net neutrality, as introduced by the European Parliament (EP) in March 2014. However, the European Council (EC), which shares legislative powers with the EP, significantly diluted these in March 2015. The EC was in favour of net neutrality exemptions in the form of specialised services, without an exact definition of what these would encompass.

In the USA, pressure from the Obama administration triggered the Federal Communications Commission (FCC) in February 2015 to classify broadband providers as public utilities, or common carriers. In other words, operators and their networks are subject to similar principles as those that apply to roads, railway lines, pipelines and telephone poles. This in turn gives the FCC the power to determine whether any of an operator’s business practices constitute “unreasonable discrimination” in order to protect the consumer and customer.

Net Neutrality The Easy Way

So how do you keep future network usage cases, technology and legislation on an even keel? Here’s the simple armchair view of yours truly:
  1. Install a net neutrality body at some sensible regional level
  2. All data travelling over networks and airwaves needs to be classified into specific types
  3. All providers of content are required to apply for a priority level for each data type they wish to transmit
  4. The body determines a priority for each submission and inserts it into a single, ordered list. For example, data encompassing the vital signals of critically ill patients might rank near the top whereas music streams are more likely to be found near the bottom.
  5. The body maintains and has the power to enforce this list in their region
It’s as simple as that! Of course any company offering end-to-end proprietary networks should be exempt from such regulation. Then again, it’s hard to imagine how such private, fast communication lanes will ever reach a large number of end users without ever having to traverse “public” sections of the infrastructure.

Now over to any of you who would venture implementing this…
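For anyone taking that up, here is a toy Python model of steps 2 to 5; the data types and their ranking are entirely made up for illustration:

    # Toy model of a regional priority list (data types and ordering are invented)
    PRIORITY_LIST = [                  # lower index = higher transmission priority
        "patient vital signs",
        "vehicle collision avoidance",
        "voice calls",
        "web browsing",
        "video streaming",
        "music streaming",
    ]

    def priority(data_type):
        """Rank assigned by the (hypothetical) net neutrality body."""
        return PRIORITY_LIST.index(data_type)

    queued = ["music streaming", "patient vital signs", "web browsing"]
    for item in sorted(queued, key=priority):
        print(item)                    # most critical traffic is served first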

May 21, 2015

5 Things Worth Knowing About 5G

Fifth-generation technology, mostly known as 5G, is already attracting millions of dollars in research even though the standards framework for next-generation communication networks will only be set within the 2017 - 2019 time corridor, with the first commercial solutions expected to go live during 2020. Equipment vendors and component manufacturers want a part of the action when this happens because the prize is too big to ignore: being part of the 5G standard opens up a treasure chest of future licensing and royalty revenues that will handsomely pay off the investment in creating an arsenal of technology patents. Industry leaders are therefore leveraging their know-how and hammering out initial pre-standard 5G solutions in the hope of influencing standards bodies, regulators and operators. 5G now implies an extremely bold and overly ambitious vision of how future communication networks, encompassing both fixed and mobile infrastructure, will combine computing smarts to deliver never-seen-before services in industry verticals. Currently it’s a panacea for everything that everyone ever wanted in terms of connecting people and things, and as such, a moving target. So let’s review five basic things worth knowing about 5G.


5G - a vision of next-gen communication networks

Why 5G?

First- and second-generation technology (1G / 2G) in the nineties allowed us to phone each other or send SMS messages on the go. 3G networks extended the mobility concept to data exchange for emails and Internet browsing using standards such as UMTS and HSPA. 4G brings the LTE standard into play, enabling mobile broadband applications such as video, gaming, social media and a fluid Internet experience. 5G takes this concept several steps further, with many experts claiming it to be revolutionary rather than evolutionary. The basic idea is to further improve the coverage, capacity, speed and energy efficiency of broadband communications, and also to connect appliances, security sensors, health gadgets, door locks - the Internet of Things - and even cars with each other. Ultimately 25 billion networked devices are envisioned, most of which require very low bandwidth yet long battery life.

What exactly is 5G?

Trying to pin down exactly what a standard might look like four years from now is a bit of a murky business, given the smattering of alternatives emanating from companies, alliances and research bodies across different regions and countries of the world. Experts largely agree that the specs for 5G should achieve:
  • an increase in capacity by a factor of 1000 to allow 10 000 times more traffic, necessary for connecting so many endpoints with each other
  • peak data rates of 10 Gbps with at least a reliable 100 Mbps available wherever needed, so that full movies can be downloaded in seconds (see the quick sketch after this list)
  • a decrease in latency by a factor of 100 into the low millisecond range, to guarantee that mission-critical control decisions in cars, robots and manufacturing tools are taken in time
  • a dramatic decrease in energy consumption such that low-bandwidth battery-powered devices can operate for at least 10 years
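As a quick sanity check of the “movies in seconds” claim, the sketch below computes download times for a ballpark 5 GB HD feature film (the file size and the 4G comparison rate are my own assumptions):

    # Download time for a ~5 GB movie at different data rates (sizes/rates are assumptions)
    movie_bytes = 5e9

    for label, bits_per_second in [("5G peak (10 Gbps)", 10e9),
                                   ("5G reliable minimum (100 Mbps)", 100e6),
                                   ("typical 4G today (~20 Mbps)", 20e6)]:
        seconds = movie_bytes * 8 / bits_per_second
        print(f"{label:32s} {seconds:8.0f} s")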

Evolution to 5G - speed, latency and new RF

Next to these 10x - 100x - 1000x hard-fact improvements, 5G is expected to include other measures of strategic importance to the network:
  • the integration and optimisation of fixed networks (fibre) into a heterogeneous 5G specification to support wireless access. New network concepts are needed to ensure the target specs for an integrated fixed/mobile hybrid infrastructure are met
  • multi-tiered network architectures will need to take data centers, cloud computing and other measures into consideration with the aim of bringing self-contained computing and communication capabilities closer to endpoints where they are needed, avoiding backhaul where possible
  • the integration of public-private spectrum sharing (read cellular vs. WiFi) into the radio access network (RAN) to overcome the spectrum scarcity in order to achieve capacity and data speed demands

5G’s most crucial challenges

Amongst the many issues 5G needs to address, two particularly thorny obstacles lie ahead:
  • Harmonization: 5G’s success and universal acceptance ultimately requires a single and unique worldwide standard. With the number of countries, players and interest groups involved, this will be no easy feat, given the scope of the undertaking and the time left for finding and committing to best-in-class technology ingredients from the many rivalling parties. It’s all about bringing the diverse viewpoints into one universally acknowledged specification in order to avoid fragmentation
  • Spectrum scarcity: achieving 5G’s high capacity, high speed and low latency target benchmarks requires freeing up or dynamically re-using scarce spectrum resources. A previous blog post addresses some of the current issues involved with the spectrum crunch. 5G further exacerbates the situation because it needs to straddle both broadband traffic at blistering speeds (in higher frequency bands) as well as low data rate IoT devices across a wide coverage area (in lower frequency bands). 5G will most likely support public safety communications as used by police, fire brigades… too and they have their own specific security and reliability demands. Add to that the differing spectrum allocation charts by country, and you end up with a herculean task. In short, to achieve 5G’s agenda will require as many chunks of spectrum from 300 MHz - 30 GHz (centimeter wave radio) and 30 GHz - 300 GHz (millimeter wave radio) as possible.

The 5G standardization process

At the heart of 5G standardization is the International Telecommunication Union (ITU), which will define the constituent parts of the new specification, called IMT–2020, based on ratification of the candidate technologies submitted. The ITU is organised into three divisions (ITU-T, ITU-R, ITU-D), two of which will be deeply involved with IMT–2020. ITU-T will cover the standardisation process whereas ITU-R will manage international spectrum and radio frequency recommendations and regulations. Owing to the scope of 5G, many more organizations (e.g. 3GPP, IEEE, IETF…) and companies will contribute candidate technology proposals than was the case for 4G, which led to the LTE and WiMAX standards. ITU-R holds so-called World Radio Conferences (WRC) every three to four years where the international treaties governing the use of the radio-frequency spectrum and of geostationary-satellite and non-geostationary-satellite orbits are reviewed, recommended and revised. The next conference, WRC–15, is set for November 2015 and, besides agreeing on freeing up further spectrum below 6 GHz for 4G, it will look at spectrum topics to be addressed in terms of 5G at the forthcoming WRC–19 slated for 2019. If all goes well, the IMT–2020 standard will be set in stone on time. Karri Ranta-Aho at Nokia Networks reveals an insightful standardization timeline in a recent blog.

Pre-standard 5G research projects

In the current exploration & pre-standardization phase, the field is beginning to heat up as companies, research projects and alliances hope to leverage their candidate technologies for IMT–2020 integration. National telecom initiatives, international associations, infrastructure equipment vendors and mobile network operators are teaming with academia across the world to make the 5G vision come true. They are working furiously on network and air interface prototypes in the hope of IMT–2020 adopting their solutions. Major infrastructure vendors such as Alcatel-Lucent, Ericsson, Huawei, NEC, Nokia, ZTE and others are investing considerable research resources outside of their home markets to ensure that they can pull the standardization strings to their advantage.
5G will be a heterogeneous solution encompassing networking, computing and storage in one programmable and unified infrastructure. It will utilize multiple spectrum and radio technologies and support three different kinds of profiles:
  1. superfast broadband for video and augmented/virtual reality
  2. low energy/low data rate for IoT devices
  3. low latency for time-critical industrial, automotive and enterprise applications.
Getting anywhere close to this vision by 2020 will be quite a feat. But as they say “reach for the stars”. In the meantime, 4G’s LTE and its future derivatives still have plenty of mileage in reserve to bridge that gap.