November 27, 2015

4K Is Not UHD

In a recent post on high-resolution screens for smartphones I used the term 4K to describe the resolution of mobile screens with 3840 x 2160 pixels, and was quickly and rightly reprimanded by an observant reader. Indeed, I had fallen prey to marketing lingo. Strictly speaking, 4K is a standard defined by the industry consortium DCI (Digital Cinema Initiatives) that specifies a resolution of 4096 x 2160 pixels used in the production and projection of movies. That's slightly more than Ultra High Definition's 3840 x 2160 pixels. UHD also defines a second level at double that basic resolution, 7680 x 4320 pixels. Both UHD levels share a common 16:9 aspect ratio, and the terms 4K UHD and 8K UHD are used to distinguish between the two.

Marketers at TV manufacturers often substitute 4K UHD with the punchier 4K, opening terrain for confusion. True 4K has a slightly different aspect ratio of 256:135, which equates to 16:8.44. In other words, watching a 4K movie on a UHD TV means either narrow black bars at the top and bottom if you squeeze the full 4K frame onto the UHD screen, or losing small slivers of the left and right edges of the frame if the 4K film is to fill the complete UHD screen. Most of us are familiar with this behaviour from previous formats. No big deal.
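
For the numerically inclined, here's a back-of-the-envelope sketch (plain Python, assuming simple uniform scaling with no cropping) of what letterboxing a true 4K frame on a UHD panel amounts to:

    # Fit a DCI 4K frame onto a 4K UHD panel and measure the letterbox bars.
    DCI_4K = (4096, 2160)   # true 4K, aspect ratio 256:135 (~16:8.44)
    UHD_4K = (3840, 2160)   # 4K UHD, aspect ratio 16:9

    scale = UHD_4K[0] / DCI_4K[0]            # shrink to fit the narrower screen
    scaled_height = DCI_4K[1] * scale        # 2160 * 0.9375 = 2025 pixels
    bar = (UHD_4K[1] - scaled_height) / 2    # black bar at top and bottom

    print(f"Scaled frame: {UHD_4K[0]} x {scaled_height:.0f} pixels")
    print(f"Letterbox bar: {bar:.1f} pixels top and bottom")

Roughly 67 pixels of black at the top and bottom, or about 3 % of the screen height each - which is why it's no big deal.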

4K versus UHD

                4K             4K UHD         8K UHD
Resolution      4096 x 2160    3840 x 2160    7680 x 4320
Aspect ratio    16 : 8.44      16 : 9         16 : 9

Megapixel shots that make UHD sense


A further common fallacy in the multimedia industry is comparing pixels from cameras with those of screens.

Pixels describing a camera’s resolution are counted differently from those specifying a display’s resolution.


Counting pixels - the difference between cameras and displays


In the display world a single pixel consists of three separate RGB (red, green, blue) light sources. In other words, a single pixel on a screen can represent any given colour. Contrast this with the world of cameras, or more accurately image sensors. Each pixel in an image sensor captures only one of the three RGB components that make up a colour. But it's not as simple as dividing the megapixel count of a camera by three to arrive at the best display resolution. In short, cameras use a variation of the RGB model that takes the peculiarities of the human eye into consideration and gives green twice as many detectors as red and blue to achieve better luminance resolution. The 1:2:1 RGB pixel ratio of image sensors means you need to divide the camera's megapixel number by four when figuring out the best screen resolution to display a photo.

For example, take the 4K UHD display resolution with its count of 3840 x 2160 = 8.3 megapixels. Multiplying that number by four means a 33.2 megapixel camera starts making sense when viewing captured shots on a 4K UHD TV.
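
Here's a minimal sketch of that divide-by-four rule of thumb in Python. It deliberately ignores demosaicing, optics and sensor size, so treat it as a first approximation only:

    def camera_megapixels_for_display(width_px: int, height_px: int) -> float:
        """Camera megapixels needed to saturate a display, assuming a
        1:2:1 (R:G:B) sensor layout, i.e. four sensor pixels per
        full-colour display pixel."""
        display_mp = width_px * height_px / 1e6
        return display_mp * 4

    print(camera_megapixels_for_display(3840, 2160))   # ~33.2 MP for 4K UHD
    print(camera_megapixels_for_display(7680, 4320))   # ~132.7 MP for 8K UHD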

In the past, camera and smartphone vendors instigated a megapixel race to lure customers into buying their latest high-resolution devices. This marathon seemed nonsensical in a small-screen or High-Definition (HD) world. Ultimately many other parameters govern the overall quality of a snapshot, the physical size of each "pixel" in an image sensor to name but one.

As more and more 4K UHD TVs find their way onto retailers' shelves, the megapixel madness of previous years in smartphones and digital cameras begins to make sense. Favouring a smartphone with more than 16 camera megapixels is a serious consideration for hi-res aficionados on their next phone replacement spree.

October 30, 2015

Video Rules - Codecs Engage (Part II)

New video codecs are mushrooming of late to address the enormous market opportunity. The focus of my last blog was on long codec development cycles and the resulting labyrinth of patents from technology providers. Let's now look at the current state of affairs in this market.
 
To date the video codec H.264, also known as AVC (Advanced Video Coding), has dominated the industry in all market segments. It's a standard based on research from many technology providers - large and small companies as well as academia - whose intellectual property (IP) is pooled and licensed by MPEG LA.

Next-generation video codecs such as the follow-up standard H.265 from the standards organisations ITU and ISO/IEC, also known as HEVC (High Efficiency Video Coding), or competing proprietary codecs such as Google's VP9,
  • further cut the bandwidth by half without perceivable loss of video quality, and
  • support higher resolutions such as 4K video (refer to a recent 4K blog post here)
compared to their predecessors.

These new codecs are based on even better algorithms that run on faster processors, thanks to the evolution of semiconductor technology over the last decade. They are creating a major buzz in the industry today based on their compression efficiency, and are destined to replace their forefathers sooner or later.

Amazon, Cisco, Intel, Microsoft, Mozilla, Netflix Join Forces with Google

This year (2015) two further entities emerged on the video codec scene who will most certainly shape the future in one way or another:
  1. HEVC Advance is backed by several companies that are not part of the MPEG LA patent pool. The organisation may be viewed as an independent entity with a further pool of 500 patents essential to HEVC - a similar number to MPEG LA's pool of different essential patents. Many HEVC patents are still in the process of being granted, and several thousand will likely eventually cover the full standard. As things stand right now, products employing HEVC will have to pay royalties to both MPEG LA and HEVC Advance. Generally speaking, the more patents covered by these two entities, the better, because companies planning to use HEVC in their products then face fewer "unknowns" posed by individual patent holders outside the two pools who might raise their heads at a later time with royalty claims. So long as the total license fees remain in a reasonable bracket, the HEVC codec remains a serious contender.
  2. The Alliance for Open Media came into being in September 2015, backed by heavyweights such as Google, Cisco, Amazon, Netflix, Intel, Microsoft and Mozilla. They aim to combine their collective expertise and technologies to provide a future world-class, royalty-free codec. Note the word future here. It means that the designers of Daala, Thor and VP8/VP9 - from Mozilla, Cisco and Google respectively - are joining forces to create a codec that is open-source and free to use, ultimately replacing Google's prospective VP10. This approach is appealing in that it removes the fear of going with a single behemoth and its proprietary technology.
Google, Cisco, Mozilla attempt to disrupt MPEG LA


As of today the contending codecs remain HEVC and VP9, yet the upcoming battle has become more pronounced, further exposing the market's fault lines. Google is increasing its firepower by shoring up support from other giants in the Alliance for Open Media, whereas the additional patent pool for HEVC shows that the full license fees are far from settled. The video codec market is sizzling as its players scramble to find their positions.

About Codec Quality, Cost and Player Strategy

Three key factors govern the potential future success of any particular codec, namely the Quality of the codec, the Cost to use it, and the Strategy employed by its backers. How do the next-gen codecs VP9 and HEVC stack up in these three areas?

Quality: How does the video quality compare at the same bit rate and screen resolution?

Both Google's VP9 and HEVC are on a level playing field here. Refer to the detailed results in Jan Ozer's article in Streaming Media, which demonstrate that VP9 is on par with HEVC/H.265, just as VP8 was with AVC/H.264.

Cost: How does the cost compare for using the codec either (i) in a hard- or software product, or (ii) for transmitting video content?

VP9 is an open-source codec and thus completely free to use in both cases.
For HEVC, in case (i), MPEG LA charges a royalty of $0.20 for every unit sold beyond the first 100,000 "free" units, up to an "all-you-can-eat" cap of $25 million per year. Note that the cap is company-based, not product-based. As for (ii), MPEG LA currently does not charge a license fee for HEVC-coded content. This contrasts with their policy for the predecessor H.264 (AVC). Most likely MPEG LA will revise its content-transmission terms once HEVC has become widely adopted, as licensing terms are subject to change every five years.
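
To make the fee structure concrete, here's a minimal Python sketch of the device royalty just described. The figures mirror the simplified terms above; actual license agreements are more nuanced and subject to change:

    def hevc_device_royalty(units_sold: int,
                            rate: float = 0.20,          # $ per unit
                            free_units: int = 100_000,   # annual "free" allowance
                            annual_cap: float = 25e6) -> float:
        """MPEG LA HEVC device royalty per year, per the simplified terms above."""
        royalty = max(units_sold - free_units, 0) * rate
        return min(royalty, annual_cap)

    print(hevc_device_royalty(50_000))        # 0.0 - below the free limit
    print(hevc_device_royalty(1_000_000))     # 180000.0
    print(hevc_device_royalty(200_000_000))   # 25000000.0 - the cap kicks in
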
In addition to MPEG LA, the newly formed HEVC Advance surprised everyone this year by not only demanding far higher unit royalties than MPEG LA but also insisting on fees for HEVC-encoded content from service providers, amounting to a 0.5 % cut of their attributable revenue (the percentage of HEVC video they deliver). This sent shock waves through the industry, possibly prompting the formation of the Alliance for Open Media as a pre-emptive strike. It now appears that HEVC Advance is backpedalling on its license fee structure as a result.

Strategy: What are market players doing to ensure the future success of any codec?

This is the fuzziest of the three because it covers what companies and organizations who either hold essential patents or are key providers of video content are doing "behind the scenes" in their attempts to monopolize markets or secure and grow their content delivery revenues. On the IP side, a simple case helps illustrate the point: many of Apple's hardware products support H.264 video encoding and decoding. Apple also holds several patents for H.264, as can be seen at MPEG LA. As such the company is both a licensor and a licensee. As a licensee it is obviously in Apple's interest to keep the cap in a region that is easily surpassed by all the H.264-capable products it sells. At $0.20 per unit, the $25 million cap means the company needs to sell some 125 million iPhones, iPads, Macs... . Considering that Apple will probably sell some 200 million iPhones alone in 2015, that's not only easy but also cheap for them. On the flip side, as a patent holder Apple also benefits from the H.264 royalty stream collected by MPEG LA, its share most likely depending on the size of its patent portfolio attributable to the standard. This share is often distorted by cross-licensing or bilateral agreements with other patent providers within or outside of the pool, read "I'll make my IP available to you at no charge if you give me your IP for free". In the end, all of this may not be of particular relevance to a behemoth such as Apple, but it has huge implications for smaller IP players in the H.264 patent pool, depending on their patent portfolios or market participation with video products or content. This complex interplay of factors was probably one of the drivers for HEVC Advance to pop up, wishing to protect the interests of patent holders with different wish lists and priorities than those in the MPEG LA fray.

Beware of Submarine Patents

Instinct tells us that free is the best way to go. But most vendors and service providers hesitate to put their long-term codec strategy in the hands of a proprietary standard, be it from a single firm such as Google or an alliance of like-minded, complementary and dominant players such as those found in the Alliance for Open Media. Furthermore, royalty-free codecs are often susceptible to so-called submarine patents: holders of patents used in a royalty-free codec suddenly surface to assert license fees for the use of their technology, creating unexpected costs. A historic case is Microsoft's VC-1 video codec, which was positioned as royalty-free yet failed to be a game changer, not least due to subsequent royalty-incurring patent claims by other tech companies.

Conversely, the collaborative effort of a greater number of assorted companies - small or large, complementary or competitive - who merge their efforts into a single standard may provide more comfort over the long term, even if licensing costs are involved.

In wi360's next blog post, video codec market segments will be analyzed to provide a clearer picture of what’s at stake and the most-likely winners and losers in this game.

You can find upcoming events such as conferences, expos and webinars covering video, broadcasting, streaming and multimedia in wi360's Mobile & Wireless Event Guide

October 1, 2015

The Big Video Spill (Part I)

By the time we reach 2019, almost 90 % of all traffic on global networks will be video.

That's what the networking giant Cisco predicts in its ongoing Visual Networking Index (VNI) forecast. To give a sense of scale for why this is not off the mark, check out the table below, which compares the file sizes of different media types when spending some 10 seconds previewing content on the web, as many of us do.

Media Type                               File size of 10-second preview    Storage factor
Text (web page)                          ∼ 20 kB                           1
Audio (MP3 - 192 kbps)                   ∼ 2 MB                            100
Photo (8 MP / JPEG)                      ∼ 2.7 MB                          135
Video (1280 x 720 - H.264 / 25 fps)      ∼ 25 MB *                         1250

* Typical file size of compressed video, depending largely on the amount of movement, next to other variables such as image resolution, frame rate, color depth, etc.

Video consumes over 1000 times more bandwidth and storage space than a simple text page. Take note that the table entry shows compressed video, not the raw stream, which would equate to around 690 MB.

  Raw HDTV (720p) digital video bit rate (simplified):

     25 fps x 1280 x 720 pixels x 8 bits/pixel x 3 colors ∼ 553 Mbit/s
     10-second raw video file size ∼ 690 MB
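
The same arithmetic in a few lines of Python, for anyone who wants to play with the parameters (the 8 bits apply per colour channel; real broadcast chains use chroma subsampling, so actual raw rates are somewhat lower):

    fps, width, height = 25, 1280, 720
    bits_per_pixel = 8 * 3                            # 8 bits for each of R, G, B

    raw_bps = fps * width * height * bits_per_pixel
    print(f"Raw bit rate: {raw_bps / 1e6:.0f} Mbit/s")              # ~553 Mbit/s
    raw_10s_mb = raw_bps * 10 / 8 / 1e6
    print(f"10-second raw clip: {raw_10s_mb:.0f} MB")               # ~691 MB
    print(f"Compression factor vs ~25 MB: {raw_10s_mb / 25:.0f}x")  # ~28x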

Compressing video without observable degradation is big in many ways. It takes great algorithms on extraordinary processors to achieve results that hugely impact networks and the economics of providing rich media content.

What’s more, of our five senses, vision is for most of us our predominant faculty. Researchers estimate that our sense of sight provides approximately 80% of all the information we receive about our world.

Cartoon: Video is swamping cellular and other networks

Both fixed and mobile networks are gearing up to satisfy our increasing visual appetite on consumer devices. Video codecs are a paramount element in the network ecosystem in that they cut bandwidth and storage requirements, allowing more streams to fit into a transmission channel. They essentially analyse the video at the source, strip out redundant information and compress the footage to the limit. After transmission, the compressed stream is decoded at its destination to deliver a near-perfect version of the original material.

Creating Video Codecs

What sounds simple is in fact a lengthy and intricate process with development cycles spanning years. The evolution of video codec versions (often standards) may even be measured in decades. Research and development in industry and academia have culminated in thousands of patents, staggered in time, each with a protection lifespan of 20 years, collectively bringing to life codecs that compress video streams by factors of over 100.

Credit Where Credit is Due

Naturally companies and institutions that spent years engineering these feats wish to participate in the commercial success of their technology. Who wouldn’t be prepared to give a small cut of their incremental return for a sure-fire product that optimizes their resource usage by a multiple? Yet such rationale counts little in the power grab for a share in this multi-billion-dollar market.

Video Codecs for Free?

A popular way of gaining market share is to possess a great technology and then give it away for free. Consumers love this and hardly question the technology creator’s business model and how revenues flow from a free-for-use product.

Historically the video industry has relied on patent pools for creating and distributing income from proprietary or standards-based codec usage. In essence, a patent pool aggregates all contributing patents into a single portfolio in an attempt to simplify licensing for the hardware or software equipment manufacturer, network operator or content service provider who wishes to use the codec. Given that there are often between 500 and 1000 patents behind a video codec, paying royalties to a single "patent pool" company removes the administrative nightmare. This has worked well in the past, in particular in the professional video and broadcast industry, and has led to a dominant video codec standard called H.264, managed by the patent pool company MPEG LA.

Next-generation video codec providers are keen to get their foot in the door by storming the royalty-based codec fortress. They claim that licensing fees stifle innovation and the uptake of superior technology, as well as propping up the largely private party of essential patent holders contributing to the codec standard. They may not be scaring the pants off such incumbents, but their attempts could bear fruit.

Stay tuned for more on how they are engaging in this battle.

You can find upcoming events such as conferences, expos and webinars covering video, broadcasting, streaming and multimedia in wi360's Mobile & Wireless Event Guide

June 26, 2015

Five Simple Steps To Net Neutrality

Two words such as network neutrality evoke a swath of opinions, and they mean different things to different people. "Treat all data equally" is the baseline slogan. None of us really wants the operators who control the network - wireless carriers, cable companies or Internet service providers (ISPs) - to be able to throttle speeds, block certain content or cap data by type and prioritize the services that maximise their own revenues. We want to protect our open Internet as best we can. Conversely, looking to the future, nobody wants life-critical data carried at the same priority as, say, entertainment - the idea of wirelessly delivered collision avoidance data in your car competing with someone's Netflix movie stream simply isn't tenable.

How To Treat All Things Equal

There's no easy way out of this dilemma as it implies bringing in governments, regulators and their bureaucrats. Many view this as a death knell for any fast-paced technology, citing the unregulated Internet as a prime example of how well things work when regulators and lawyers stay out.
But, to be frank, either you
  • ensure a network has unlimited bandwidth
or
  • prioritise content by type (text, images, audio, video, and in particular, time-sensitive data controlling critical applications)
Scenario 1 smacks of an infrastructure utopia that defies physics, given that wireless access frames our communications future and the spectrum it requires is a limited resource.
Scenario 2 discriminates content by type and thus stokes the fire on net neutrality as we move forward into a commerce and entertainment future that depends on video and high-bandwidth services to satisfy consumers' appetites.

Net Neutrality Violation In Europe

In a recent survey, market researcher Rewheel compared the pricing of mobile data plans on 4G networks across the 28 member states of the European Union in its Digital Fuel Monitor. The firm meticulously disentangled the different subscriber plans by operator and country and found that some users pay up to 100 times more per gigabyte of data than more fortunate neighbours in other European countries.

Zero rating: How many Gigabytes Euro 35 gets you on 4G networks in Europe

Net Neutrality? Zero rating accounts for data discrimination on Europe's 4G networks

So much for network neutrality and all the talk about a digital single market! The scoundrel wreaking this pricing havoc was identified as zero rating, a practice in which mobile operators offer subscriber contracts that bundle their preferred services with unlimited usage (from over-the-top - OTT - providers under contract) yet cap the accompanying Internet data plan in the low-gigabyte range. For example, a subscriber is lured to an operator offering unlimited (read zero-rated) text/audio/photo/video messaging for an application such as WhatsApp and gets a measly bucket of 1 GB for general Internet usage bundled into the plan. Substitute WhatsApp with any other messaging, social media, music streaming, file sharing or cloud storage service if you like. The analysis revealed that countries allowing zero rating accordingly have far more costly mobile Internet offerings.

Enter Legislation

With the increasing importance of communications infrastructure to national economies, it's hard to imagine that a future Internet delivered through fixed and mobile networks will work without government intervention. Both Europe and the USA have proved this of late. Their stance is vague at best, leaves a lot open to interpretation, and ultimately stalls progress. Two recent cases point to such uncertainty.

The European Union initially included strong safeguards for net neutrality, as introduced by the European Parliament (EP) in March 2014. However, the European Council (EC), which shares legislative powers with the EP, significantly diluted these in March 2015. The EC favoured net neutrality exemptions in the form of "specialised services", without an exact definition of what these would encompass.

In the USA, pressure from the Obama administration prompted the Federal Communications Commission (FCC) in February 2015 to classify broadband providers as public utilities, or common carriers. In other words, operators and their networks are subject to similar principles as apply to roads, railway lines, pipelines and telephone poles. This in turn gives the FCC the power to determine whether any of an operator's business practices constitute "unreasonable discrimination", in order to protect consumers and customers.

Net Neutrality The Easy Way

So how do you keep future network usage, technology and legislation on an even keel? Here's the simple armchair view of yours truly:
  1. Install a net neutrality body at some sensible regional level
  2. All data travelling over networks and airwaves needs to be classified into specific types
  3. All providers of content are required to apply for a priority level for each data type they wish to transmit
  4. The body determines a priority for each submission and inserts it into a single, ordered list. For example, data encompassing the vital signals of critically ill patients might rank near the top whereas music streams are more likely to be found near the bottom.
  5. The body maintains and has the power to enforce this list in their region
It's as simple as that! Of course any company offering end-to-end proprietary networks should be exempt from such regulation. Then again, it's hard to imagine how such private, fast communication lanes would ever reach a large number of end users without having to traverse "public" sections of the infrastructure.
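
For anyone tempted, here's a toy Python sketch of steps 2 - 5. All data types and priority values below are illustrative inventions, not an actual classification scheme:

    from dataclasses import dataclass, field

    @dataclass
    class NeutralityRegistry:
        # lower number = higher transmission priority
        priorities: dict = field(default_factory=dict)

        def rule(self, data_type: str, priority: int) -> None:
            """The body determines a priority for each submission."""
            self.priorities[data_type] = priority

        def ordered_list(self) -> list:
            """The single, ordered list the body maintains and enforces."""
            return sorted(self.priorities, key=self.priorities.get)

    body = NeutralityRegistry()
    body.rule("patient-vital-signs", 1)      # near the top
    body.rule("vehicle-collision-data", 2)
    body.rule("web-browsing", 50)
    body.rule("music-streaming", 90)         # near the bottom
    print(body.ordered_list())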

Now over to any of you who would venture implementing this…

May 21, 2015

5 Things Worth Knowing About 5G

Fifth-generation technology, mostly known as 5G, is already stumping up millions of dollars in research, even though the standards framework for next-generation communication networks is only to be set within the 2017 - 2019 time corridor, with the first commercial solutions expected to go live during 2020. Equipment vendors and component manufacturers want a part of the action when this happens because the prize is too big to ignore: being part of the 5G standard opens up a treasure chest of future licensing and royalty revenues that will handsomely pay off the investment in creating an arsenal of technology patents. Industry leaders are therefore leveraging their know-how and hammering out pre-standard 5G solutions in the hope of influencing standards bodies, regulators and operators. 5G now implies an extremely bold and overly ambitious vision of how future communication networks, encompassing both fixed and mobile infrastructure, will combine computing smarts to deliver never-before-seen services in industry verticals. Currently it's a panacea for everything that everyone ever wanted in terms of connecting people and things, and as such, a moving target. So let's review five basic things worth knowing about 5G.


5G - a vision of next-gen communication networks

Why 5G?

First- and second-generation technology (1G / 2G) in the nineties allowed us to phone each other or send SMS messages on the go. 3G networks extended the mobility concept to data exchange for email and Internet browsing using standards such as UMTS and HSPA. 4G brings the LTE standard into play, enabling mobile broadband applications such as video, gaming, social media and a fluid Internet experience. 5G takes this concept several steps further, with many experts claiming it to be revolutionary rather than evolutionary. The basic idea is to further improve the coverage, capacity, speed and energy efficiency of broadband communications, and also to connect appliances, security sensors, health gadgets, door locks - the Internet of Things - and even cars with each other. Ultimately 25 billion networked devices are envisioned, most of which require very low bandwidth yet long battery life.

What exactly is 5G?

Trying to pin down exactly what a standard might look like four years from now is a murky business, given the smattering of alternatives emanating from companies, alliances and research bodies across the world. Experts largely agree that the specs for 5G should achieve:
  • an increase in capacity by a factor of 1,000 to allow 10,000 times more traffic, necessary for connecting so many endpoints with each other
  • peak data rates of 10 Gbps, with a reliable 100 Mbps available wherever needed, so that full movies can be downloaded in seconds (see the quick check after this list)
  • a decrease in latency by a factor of 100 into the low-millisecond range, to guarantee that mission-critical control decisions in cars, robots and manufacturing tools are taken in time
  • a dramatic decrease in energy consumption such that low-bandwidth battery-powered devices can operate for at least 10 years
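
A quick plausibility check in Python on two of those headline numbers - note that the 5 GB movie size is purely my assumption for illustration:

    movie_bits = 5 * 8e9    # hypothetical 5 GB movie

    for label, rate_bps in [("10 Gbps peak", 10e9), ("100 Mbps everywhere", 100e6)]:
        print(f"{label}: {movie_bits / rate_bps:.0f} s per movie")
    # -> 4 s at peak rate, ~400 s (under 7 minutes) at a reliable 100 Mbps

    # Latency: 4G radio round trips sit at around 50 ms today;
    # a 100x cut lands at 50 / 100 = 0.5 ms - the low-millisecond range.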

Evolution to 5G - speed, latency and new RF

Next to these 10x - 100x - 1000x hard improvements, 5G is expected to include other measures of strategic importance to the network:
  • the integration and optimisation of fixed networks (fibre) into a heterogeneous 5G specification to support wireless access. New network concepts are needed to ensure the target specs for an integrated fixed/mobile hybrid infrastructure are met
  • multi-tiered network architectures will need to take data centers, cloud computing and other measures into consideration with the aim of bringing self-contained computing and communication capabilities closer to endpoints where they are needed, avoiding backhaul where possible
  • the integration of public-private spectrum sharing (read cellular vs. WiFi) into the radio access network (RAN) to overcome the spectrum scarcity in order to achieve capacity and data speed demands

5G’s most crucial challenges

Amongst the many issues 5G needs to address, two particularly thorny obstacles lie ahead:
  • Harmonization: 5G's success and universal acceptance ultimately require a single worldwide standard. With the number of countries, players and interest groups involved, this will be no easy feat, given the scope of the undertaking and the time left for finding and committing to best-in-class technology ingredients from the many rivalling parties. It's all about bringing diverse viewpoints into one universally acknowledged specification in order to avoid fragmentation
  • Spectrum scarcity: achieving 5G's high-capacity, high-speed and low-latency benchmarks requires freeing up or dynamically re-using scarce spectrum resources. A previous blog post addresses some of the current issues involved with the spectrum crunch. 5G further exacerbates the situation because it needs to straddle both broadband traffic at blistering speeds (in higher frequency bands) and low-data-rate IoT devices across a wide coverage area (in lower frequency bands). 5G will most likely also support public safety communications as used by police, fire brigades and the like, which come with their own specific security and reliability demands. Add to that the differing spectrum allocation charts by country, and you end up with a herculean task. In short, achieving 5G's agenda will require as many chunks of spectrum as possible from 300 MHz - 30 GHz (centimeter wave radio) and 30 GHz - 300 GHz (millimeter wave radio).

The 5G standardization process

At the heart of 5G standardization is the International Telecommunication Union (ITU), which will define the constituent parts of the new specification, called IMT–2020, based on ratification of submitted candidate technologies. The ITU is organised into three divisions (ITU-T, ITU-R, ITU-D), two of which will be deeply involved with IMT–2020: ITU-T will cover the standardisation process, whereas ITU-R will manage international spectrum and radio frequency recommendations and regulations. Owing to the scope of 5G, many more organizations (e.g. 3GPP, IEEE, IETF…) and companies will contribute candidate technology proposals than was the case for 4G, which led to the LTE and WiMAX standards. ITU-R holds so-called World Radiocommunication Conferences (WRC) every three to four years, where the international treaties governing the use of the radio-frequency spectrum and of geostationary- and non-geostationary-satellite orbits are reviewed, recommended and revised. The next conference, WRC–15, is set for November 2015; besides agreeing on freeing up further spectrum below 6 GHz for 4G, it will look at the spectrum topics to be addressed for 5G at the forthcoming WRC–19, slated for 2019. If all goes well, the IMT–2020 standard will be set in stone on time. Karri Ranta-Aho at Nokia Networks lays out an insightful standardization timeline in a recent blog.

Pre-standard 5G research projects

In the current exploration and pre-standardization phase, the field is beginning to heat up as companies, research projects and alliances hope to leverage their candidate technologies for IMT–2020 integration. National telecom initiatives, international associations, infrastructure equipment vendors and mobile network operators are teaming up with academia across the world to make the 5G vision come true. They are working furiously on network and air interface prototypes in the hope of IMT–2020 adopting their solutions. Major infrastructure vendors such as Alcatel-Lucent, Ericsson, Huawei, NEC, Nokia, ZTE and others are investing considerable research resources outside of their home markets to ensure they can pull the standardization strings to their advantage.

5G will be a heterogeneous solution encompassing networking, computing and storage in one programmable and unified infrastructure. It will utilize multiple spectrum bands and radio technologies and support three different kinds of profiles:
  1. superfast broadband for video and augmented/virtual reality
  2. low energy/low data rate for IoT devices
  3. low latency for time-critical industrial, automotive and enterprise applications.
Getting anywhere close to this vision by 2020 will be quite a feat. But as they say, "reach for the stars". In the meantime, 4G's LTE and its future derivatives still have plenty of mileage in reserve to bridge the gap.

April 30, 2015

Making a Case for Mobile Operators

Mobility continues to shape and restructure our private and working lives in fascinating and incredible ways. Cellular networks and short-range technologies such as WiFi are its primary enablers. On the cellular front, mobile operators and wireless carriers have shouldered the immense cost of rolling out infrastructure and licensing spectrum. But are they participating in due measure as mobile opportunities expand?

The GSM Association's (GSMA) 2015 publication "The Mobile Economy" is a recommended read for anyone with a stake in this industry. The report takes stock of where mobile stood in 2014 and the direction it is taking as we move towards 2020. The real jaw-dropper is the industry's $3 trillion contribution to the estimated $78 trillion worldwide GDP in 2014 (according to the World Bank). That's almost a 4 % share of global GDP.

Operator revenue growth to slide

Looking ahead, the mobile industry is striding forth with confidence at an average annual growth rate of roughly 4 %. That's good news - though not for all. Mobile network operators' (MNO) year-on-year revenue growth is predicted to fall from around 5 % as measured in 2012 to 2 % by 2020, according to the GSM Association. In other words, MNO revenue growth is eroding. You might contend that a slowdown in expansion is better than contraction per se. Answering these two questions might shift your perception, however:
  1. Who has invested the most to date in the mobile industry?
  2. Who has empowered most of us with the anywhere-anytime paradigm over the past two decades?


How to rollout new network infrastructure despite dwindling revenue growth?

Acquiring costly spectrum from governments and rolling out country-wide cellular networks with the latest technology is an expensive business. In 2014 MNOs spent over $200 billion worldwide (capex) to increase coverage and capacity for us all. Shouldn't we give credit where credit is due?

The problem with 24-month contracts

Incredible but true: the data explosion on mobile networks fueled by smartphones ought to result in higher per-bit pricing until capacity is in excess, going by the perennial law of supply and demand. One would expect operators to get an ARPU (average revenue per user) boost. Instead, price wars in fierce, competitive environments spur operators on to sell low-cost, tiered plans that offer megabyte/gigabyte buckets of monthly data based on 24-month contracts. It's difficult to predict customers' data consumption habits over such a long contract period. One thing is certain though: empowered by ever new mobile devices and applications, their data usage is sure to increase over time. Consistent bandwidth on cellular networks is and will remain a sought-after commodity. As a result, MNOs are under more pressure than ever to upgrade their infrastructure and find novel approaches to keep their next-gen all-IP networks from becoming mere "dumb pipes" that other providers use to rake in profits with higher-margin services.

Change is the only constant

Mobile network operators in developed markets are gearing up for the future and crafting new, sustainable business models by
  • merging with other regional operators for network sharing and creating more negotiation clout when purchasing infrastructure through economies of scale
  • moving from unlimited data plans to tiered versions and finally to value-based contracts
  • making strategic acquisitions or entering partnerships to combine mobile services with fixed-line telephony, Internet access (cable/DSL), and TV in a converging quad-play world
  • treating data based on traffic type to ensure Quality of Service (QoS) and ultimately customer satisfaction
  • working on deals with providers who wish to provide a consistent mobile user experience for their prized services
  • slashing handset subsidies
  • developing alliances, partnerships, joint ventures with market leaders in promising verticals such as M2M/IoT, Connected Car, Mobile Wallet, video streaming, location-based advertising and app development.
Proponents of network neutrality who argue that the Internet's continued success depends on a level playing field for all may not like any of this. Yet acquiring spectrum and rolling out wireless networks doesn't come cheap, and business models need to adapt to changing markets.

Mapping out the future

Next to keeping their networks geared to next-generation technology, operators simultaneously need to identify the most lucrative business cases from the swath of opportunities unfolding. Conferences, forums and trade shows are ideal places to learn from peers in a complementary sector and rub shoulders with technology experts and thought leaders.


Conferences, forums, trade shows and seminars by mobile/wireless topic over the past two years - Source: 2015 wi360 Event Guide

wi360 maintains a free wi360 Event Guide focused exclusively on the mobile and wireless industry, which is continuously updated throughout the year.

March 18, 2015

Charging matters - the Samsung Galaxy S6

What does it take to capture attention at a packed trade show amidst all the hustle and bustle? In my case it was a single slide promising a charge time of just ten minutes for a new smartphone due to be released. I stopped dead in my tracks whilst passing the Samsung booth at the 2015 Mobile World Congress - the specs of Samsung's new flagship smartphone, the Galaxy S6, were being unveiled.

Battery life and charge time

Charging is a nuisance, and battery life still ranks at the top of most people's preferences when selecting a new device. The paradigm from my perspective had always been "How much time do you get from a single charge?" But passing Samsung's booth it occurred to me that dramatically reducing the time needed for a full charge is another way to skin the cat. The nuisance factor of charging drops, making a shorter "battery life" more acceptable. Device selection by the energy metric then becomes a trade-off between the time needed to fully power the phone and the ensuing endurance of its charge.

Presenting the fast charging of the Samsung Galaxy S6

At the Mobile World Congress: Samsung compares the charge
time needed for their new flagship Galaxy S6 versus the iPhone 6

At the MWC, Samsung claimed that their Galaxy S6 (due for release on 10 April 2015) will provide four hours of operation from a 10-minute charge. Oops. I initially thought 10 minutes would mean a full charge. Besides, does being operational mean making calls, browsing the web or just idling in standby? This is where things get confusing.

4 versus 3 hours?

Back home I quickly checked what my iPhone 6 Plus gets me on a ten-minute charge from an empty battery. Its battery status indicated a 5 % charge. It lasted for around 3 hours, during which I spent roughly one hour surfing the Internet over an 802.11ac WiFi connection. Sure, that's comparing apples with oranges. Battery endurance should accurately measure the split of time spent making calls, browsing the web or idling in standby, as well as take the wireless connection into account. But 4 versus 3 hours? Is that a quantum leap as Samsung suggests, or just another case of marketing interpreting basic physics in new ways?

Copying: the sincerest form of flattery

In its comparative marathon, Samsung claimed its new flagship phone charges in half the time of an iPhone 6. The jury is out on that one, and phone review websites will certainly seize the opportunity as soon as the device is available. Keep watching this space. It's clear that Samsung has done much with its new device to match Apple's latest smartphone models: the use of superior materials leads to a higher appreciation of quality; the battery is now built into the device and can no longer be removed; the phone no longer accepts memory cards. Some of these new features are bound to annoy Samsung aficionados who have cherished this differentiation from the iconic smartphone inventor. On the other hand, new features such as wireless charging supporting both the WPC and PMA standards mean Samsung is one lap ahead of the field.

Keep your eyes peeled

Fast charging is a prized feature for power-hungry phones. I'm curious to find out what it takes in battery technology, semiconductor devices and the chargers themselves to keep shaving down the time it takes to "fill up a phone" as we move forward.

March 12, 2015

MWC 2015 and the Connected Car

Ready for the self-driving car? With all the hoopla surrounding this hot topic during January's CES in Las Vegas, many observers have been fooled into believing we're almost there.

With this in mind, I traversed the exhibition halls of last week's record-breaking Mobile World Congress (MWC 2015) in search of the automotive industry's latest answers to its autonomous future. On the Connected Car front, this is what I found at the show, which drew over 93,000 visitors and featured almost 2,000 exhibitors.

Into the Conceptual Future

A Tesla always stops attendees in their tracks, and NXP Semiconductors stashed a range of boards and modules featuring its automotive ICs in the car's trunk. More than anything, NXP paints a conceptual picture of the car's future if you connect the dots of the new and existing functions that its silicon addresses.

NXP Semiconductors - Connected Car with Tesla

Automotive solutions from NXP include:

- a single-chip radar front-end transmitter for ADAS (Advanced Driver Assistance Systems) and collision avoidance

- car-to-car (C2C) and car-to-infrastructure (C2I) communication based upon the IEEE 802.11p automotive WiFi standard

- a single-chip FM/AM/satellite radio tuner plus further ICs that support digital radio standards such as DAB(+), T-DMB, HD Radio and DRM

- in-vehicle networking using automotive-grade Ethernet PHY that complements existing technologies like CAN, FlexRay etc.

- a wireless "smart car key" using NFC hidden in an iPhone cover

Replacing mirrors with cameras and screens

Qualcomm and QNX showcase the automotive future in a Maserati that uses cameras and displays where we usually find mirrors. The video feeds come from two front cameras, one at the back, and one on the roof of the car.

Maserati - Qualcomm - QNX - Connected Car

The development platform embedded in the car uses Qualcomm ICs built around its automotive-grade Snapdragon 602A quad-core processor and the QNX operating system. The 602A integrates four Krait CPUs, an Adreno graphics engine, a Hexagon DSP, WiFi, Bluetooth, GPS, three USB ports and support for four camera connections via the MIPI standard. At the center of the driver's console is a large 12-inch infotainment display that reacts to touch, voice and gestures. Qualcomm's Gobi 3G/4G modem connects to the cellular network for communications and media streaming, and also serves the in-car WiFi hotspot for passenger entertainment. The center console further features a surface to wirelessly charge your cell phone using Qualcomm's WiPower solution, based on the Rezence standard.

Connect to and control your e-vehicle

Porsche's Car Connect service is not a future concept but is something real and available today.

Porsche uses Cobra telematics unit with Vodafone as the operator for Car Connect

The Panamera e-hybrid is equipped with a telematics unit from the Italian company Cobra, which Vodafone recently acquired. By way of a cellular machine-to-machine (M2M) connection, the service provides vehicle tracking, remote monitoring & assistance, as well as usage-based insurance.

The smartphone app controls the electric charging of the Panamera and can also locate the car. Vodafone provides the cellular connection. The service's annual pricing ranges from € 145 to € 289; when buying a Panamera, the first year of service is free.

The best ager's dream come true

On Ford's booth, visitors saw something more akin to a connected bike than the Connected Car: an e-bike that folds and stows easily into the back of the car, with an accompanying smartphone app that finds bike-friendly roads, calculates the best route, and suggests convenient parking.

Connectivity and mobility at Ford encompass the e-bike

The MoDe:Me e-bike targets urban commuters who are wary of congested city traffic and thus decide to park on the city outskirts and use their 200-watt e-bike to travel to their final destination.

A carrier's automotive focus

From 2016 onwards, AT&T will provide all US Audi models featuring the Connect infotainment system with 4G LTE or 3G coverage, allowing drivers to enjoy navigation, streaming and high-speed Internet access.

Audi enables the Connected Car with AT&T

An AT&T SIM card will be pre-installed in the telematics unit for customers who purchase a new Audi in the USA next year, enabling on-board Internet access for up to 8 devices.

AT&T works together with car manufacturers in its 5,000-square-foot innovation center in Atlanta called Drive Studio to create new technologies and services for the Connected Car ecosystem.

Something simple for the automotive after-market

Visitors perusing this year's MWC showgrounds will have noticed the VIP shuttles bannered with Huawei's simple yet neat product called CarFi.


That's the Connected Car in its most basic form. CarFi plugs into a 12-volt cigarette lighter socket and turns any car into a WiFi hotspot for up to 10 devices, using an LTE connection that supports up to 150 Mbit/s. The device retails for roughly € 160.

Bold visionaries see the autonomous car travelling on public roads by 2017. That's hard to imagine right now. For the moment, most of us will have to settle for getting the Internet into our cars in one way or another.

February 4, 2015

4K Smartphones - Stand and Deliver

In my last post I ventured to predict that 4K displays will become mainstream on high-end smartphones at some point, even though they're unlikely to deliver a better viewing experience for the average mobile user. To become a game changer, it takes more than just adding an impressive new feature to a device to attain that immersive cinematic experience. What about 4K video content? Is enough readily available? Are the wireless networks and mobile infrastructure in place to transport high-resolution content? What are the effects of 4K displays on the smartphone itself? It's a tall order to provide a comprehensive picture in a blog, so I'll attempt some vignettes of insight in a question-and-answer style. We'll briefly probe the ecosystem from production through transfer to consumption of 4K content to arrive at the bigger picture.

4K phablet cartoon
"Looks pretty much like 4K to me"

Seeing the difference


Spicing up the quality of images or video is not only a case of increasing resolution. It's a technical and personal matter at the same time.

What parameters influence the perceived quality of a display?

A combination of technical factors such as screen size, display resolution, frame rate, color depth, dynamic range, viewing distance and other parameters makes up the quality of the viewing experience. In addition, your own eyesight plays an important role, in particular its ability to distinguish fine detail, known as visual acuity. The resulting mix of objective and subjective factors makes it extremely difficult to pinpoint a single metric as paramount in providing a superior viewing experience.

What does it take to produce 4K broadcast quality content?

Producing 4K content means four times the resolution of the prevalent HD (high-definition) standard, and thus 4x the amount of data too. That translates to very expensive equipment along the production chain (cameras, lenses, switchers, encoders, storage disks, editing workstations), so that only a trickle of material is currently shot in 4K. Over time equipment pricing will erode and options will increase. Shooting in 4K will ultimately ensure that content is future-proof, spurring on its adoption.

What content is available in 4K resolution to date?

YouTube, Netflix, Amazon, Sony, DirecTV and Comcast are just some of the names that boast 4K UHD movies, TV shows or video clips, all of which come with caveats of sorts. A good overview of the 4K UHD programming currently available in the USA (December 2014) can be found at Digital Trends. Indeed, it's quite a limited offering, but it's an incoming tide that's rising.


Squeezing 4K data

Getting 4K content to the end-user requires high-bandwidth transmission paths. 4K codecs that efficiently compress the material when it is captured (encoder) and decompress it again when viewed (decoder) play a pivotal role in easing bandwidth requirements and the cost of transfer.

What are the transport requirements for 4K video?

As can be seen from the listing of offerings at Digital Trends, most 4K content is streamed over the Internet and requires a 25 Mbps channel. Many experts contend that 36 Mbps is the best channel bandwidth for delivering Ultra HD, whilst others purport that 15 Mbps is sufficient for a decent 4K experience that is also commercially viable. Whatever the case, that's certainly a lot more than the 5 Mbps needed to deliver an HD stream - and it costs approximately 5x more.



                              SD (720 x 576)    HD (1920 x 1080)    UHD (3840 x 2160)
Recommended Internet Speed    3 Mbps            5 Mbps              25 Mbps

Netflix's recommendations on Internet speed for viewing their movies and TV shows
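
To put those recommendations into data volumes, here's a quick Python calculation of what an hour of viewing consumes, assuming the stream runs at the full recommended rate:

    for label, mbps in [("SD", 3), ("HD", 5), ("UHD", 25)]:
        gb_per_hour = mbps * 3600 / 8 / 1000    # Mbit/s -> GB per hour
        print(f"{label}: {gb_per_hour:.2f} GB per hour")
    # SD: 1.35 GB, HD: 2.25 GB, UHD: 11.25 GB - tough on capped data plans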





Why are differing minimum 4K bandwidth requirements quoted by experts?

This depends on the type of content and the codec used to compress the video footage. For example, live sports events with fast-moving action require higher frame rates (more data to encode) for a fluid viewing experience. In addition, the encoder must compress at a faster rate due to the live nature of the event. Stated simply, the more time a codec has available, the more it can squeeze the data without sacrificing quality. So it stands to reason that films shot and compressed in the studio require less transport bandwidth than live sports broadcast on the fly.

What codecs are available for 4K video compression and do they differ in the quality and bit-rate?

Of late there's been a pitting of solutions between Google's VP9, which follows an open-source approach, and the MPEG / ITU groups' HEVC (High Efficiency Video Coding, also known as H.265). The latter groups' previous H.264 codec has been the de facto standard for HD content, but Google is hot on their heels to change this for 4K. Then there's another initiative from Mozilla and Xiph.org called Daala that claims it will beat both on technical merit. In a future post I'll attempt to uncover some of the key differences between these codecs in complexity, delivered quality, achievable bit-rate reduction, available content offerings and the devices that support each.


Please, no frame freezes or fallouts


Many of us know how annoying a sudden and unexpected throttling of bandwidth can be, especially when enjoying streamed video or audio. Consistent channel bandwidth is of the essence, so let’s take a look at the options.

What channels are available for transmitting 4K Ultra-HD content?

Several options are available, such as satellite, microwave, cellular and fixed networks using cable or DSL. Currently the majority of 4K content is available as video-on-demand (VoD) that is stored and then streamed through fixed networks (cable/DSL) to the end user. On the broadcast TV front, satellite providers like DirecTV and BSkyB will lead the 4K race. Terrestrial TV broadcasting will require more time for standards to fall into place, and availability will largely depend on national initiatives in transitioning to newer technology, with Japan and Korea at the forefront.

Will mobile networks have sufficient bandwidth to transmit 4K content?

4G cellular networks typically support download rates of up to 150 Mbps for the LTE Cat 4 smartphones prevalent on the market. More than enough for one UHD channel at 25 Mbps, one would think. Keep in mind, though, that the cellular broadband network is shared by many people at the same time. In fact we're dealing with a theoretical top download speed that most of us won't ever witness in the real world. Even worse, the data rate may change unexpectedly depending on network usage, and your 4K video may suddenly freeze.

Can Wi-Fi reliably stream 4K video content?

The more recent WiFi standards that employ 40 MHz channel bands (802.11n or 802.11ac) provide sufficient data throughput to support 4K video transmission in theory. In practice, however, performance is unpredictable, as those of us who have tried streaming HD video at home will know. This is largely due to contention between neighbouring WiFi networks, whose data rates start sagging as they counter the chatter of next-door access points on the same frequencies. In addition, WiFi microwaves, especially those in the 5 GHz band, are attenuated by walls and floors, leading to an 80 - 90 % drop in peak rate compared to having the access point in the same room. In short, uninterrupted video delivery is simply not reliable enough.

What other wireless standards can handle 4K streaming and when will they become available?

Two new standards, WiGig and WirelessHD, are out there and both operate in the 60 GHz band. They are much faster than Wi-Fi 802.11ac or LTE mobile broadband technologies. Their formidable throughput rates (7.5 Gbps for WiGig) are targeted at the wireless delivery of high-definition content. Yet the 60 GHz signals they use cannot penetrate walls. It’s all about connecting computing and entertainment devices in the same room without cables. The idea is to turn mobile devices into media stations that wirelessly dispatch streams to 4K TVs and displays. Devices supporting the new standards are expected to start shipping this year and they will allow uninterrupted wireless streaming of 4K content, albeit in the same room.


Expensive power hogs


Smartphone innovation happens at a mind-boggling rate, and users often wait for announced models to be released before replacing their device. Even if 4K is one of those desirable new features, there are other factors to consider.

What are the drawbacks of smartphones with 4K displays?

Power and cost are two inhibitors to the uptake of 4K displays on smartphones. The display is one of the major culprits in draining a mobile phone's battery, and the amount of power consumed is directly related to the size of the display. 4K also means four times more data to store and process. More memory and busier units on the SoC (system-on-chip) will sap precious juice from your phone's battery even faster. In addition, the display is the most expensive component in a smartphone. Paying more and having to charge your phone more often don't augur a runaway market success.

Can smartphones record 4K video?

The minimum requirement for a smartphone to capture 4K content is a camera with at least 8 megapixels (MP) of resolution. Most mid-range devices already sport such pixel magic. But can available smartphones capture and encode 4K video? Surprisingly yes, even though most don't feature 4K displays. Apple's iPhone 6 and 6 Plus (8 MP camera) stop short at Full HD recording, but Samsung's Galaxy S5 (16 MP camera) and Sony's Xperia Z2 (20.7 MP camera) both use Qualcomm's Snapdragon 801, which supports 4K video capture and playback using the HEVC / H.265 codec. In fact, an impressive list of smartphones supporting HEVC and able to record 4K video can be found at phoneArena.com.

A nascent technology gaining momentum

What do you do if 4K content is limited, seamless transfer paths are still patchy, the first smartphones with 4K displays are just around the corner, yet many phones featuring 4K capture are available right now? 4K's marketing machine has the answer, harping on that user-generated 4K content on mobiles will bridge the gap. Smartphones will be the force that brings 4K into the limelight as users record 4K footage and either watch it on 4K TVs or upload it to YouTube. A bulletproof business model or high hopes on an act of faith? Whatever the case, it is apparent that the industry is working its fingers to the bone towards a single new display standard. At some point the law of large numbers will bring down costs and usher in the age of 4K phablets, one way or another, sooner or later. The market research company ABI Research forecasts that almost 500 million 4K display-enabled mobile phones will be sold in 2019.

It’s still an HD world out there and will be for quite some time, but 4K UHD is on the move.

January 19, 2015

4K on Mobile - splash, bang or thud?

The new year has just begun and the mobile industry is bracing for the splash of 4K displays in the smartphone market. Will smartphones with Ultra-High-Definition (UHD - the acronym denoting 4K resolution) displays become a runaway success? They promise four times the resolution of the current high end of small-screen technology. They also require four times the amount of data to transfer and a lot more processing muscle to handle the extra workload in tasks such as compression and image enhancement.
 
Rumour has it that Samsung's Galaxy Note 5 (to be released mid-2015) will feature a 4K screen resolution of 2160 x 3840 pixels on a 5.9-inch display, delivering an astonishing pixel density of 746 ppi (pixels per inch). What does this all mean and does it make sense, or is it just one more superlative added to the spec list of megapixels, octa-cores etc. to dazzle the consumer?
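
The quoted pixel density is easy to verify - ppi is simply the diagonal pixel count divided by the screen diagonal in inches. A quick sketch in Python:

    from math import hypot

    def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
        return hypot(width_px, height_px) / diagonal_in

    print(f"{ppi(3840, 2160, 5.9):.1f}")   # ~746 ppi - the rumoured Note 5
    print(f"{ppi(1920, 1080, 5.5):.1f}")   # ~401 ppi - Apple's iPhone 6 Plus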

10-fold increase in 4K smartphone unit shipments

Not if you believe Qualcomm, the No. 1 chip supplier in the mobile handset world with a market share of over 50 % for baseband modems in cell phones, according to Forward Concepts. A quick look at 4K support in their Snapdragon 805 system-on-chip reveals an unshakeable commitment and formidable investment in this latest display technology.

Smartphone Display Trends 2012 - 2017

Smartphone display trends - Source: IHS DisplaySearch, 2015

IHS DisplaySearch expects a 10-fold increase in annual shipments of 4K smartphone displays by 2017. A modest 6 million units are forecast to ship this year (2015).

Why the 4K smartphone market differs from its TV counterpart

Most of us have witnessed how TV formats battled in the move from PAL/NTSC to high-definition HDTV and how long it took for content and broadcasting in the new format to become common. Seers from the mobile world expect the change to 4K to be a completely different kettle of fish: uptake of the superior technology will be swift. The premise is that mobile devices such as smartphones don't need to wait for 4K content to become available on a large scale, as they are already equipped with cameras and codecs that generate 4K video which can be shared with the like-minded. The breakthrough will be further fuelled by the fact that consumers buy new mobiles every other year. That's a very different replacement cycle to PCs or TVs, driving down the cost of high-resolution handsets at a much quicker rate.

Your best viewing distance

The proposition of enjoying an immersive cinematic experience, viewing an image in fine detail, or simply reading crisp text is undeniably attractive. Does 4K on a handheld deliver on this promise and improve the user experience compared to lower-resolution displays? Image quality is an extremely subjective experience and its perception depends on a multitude of factors, not least the condition of your eyesight. Objectively speaking, however, a higher resolution always makes perfect sense the closer you get to the screen or the bigger the screen gets. At some point your eyesight will notice the edges of rasterized text or displayed objects. But how close is close and how big is big? Smartphones are getting bigger by the day - sometimes a small hand can barely hold the newer models. In addition, the phone, or shall we call it a phablet (phone + tablet), is at most an arm's length away from your eyes, and usually viewed at less than elbow's length, or some 40 cm / 16 inches. Experts from the display market rely on a simple but reliable rule of thumb for the "best" viewing distance based on screen resolution and size for 16:9 aspect ratios:
  • for UHD (2160 x 3840) it’s 1.5 x height of the display
  • for Full HD (1080 x 1920) it’s 3 x height of the display
Even the best-sighted of us will no longer be able to distinguish differences from the next lower-resolution display beyond this distance.
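
In code, the rule of thumb reads as follows - a minimal Python sketch that assumes an exact 16:9 panel in landscape orientation:

    from math import hypot

    def best_viewing_distance_cm(diagonal_in: float, factor: float) -> float:
        """factor = 1.5 for UHD, 3.0 for Full HD (16:9 screens only)."""
        height_in = diagonal_in * 9 / hypot(16, 9)   # landscape screen height
        return height_in * factor * 2.54             # inches -> cm

    print(f"{best_viewing_distance_cm(5.9, 1.5):.0f} cm")   # ~11 cm - UHD on 5.9 inches
    print(f"{best_viewing_distance_cm(5.5, 3.0):.0f} cm")   # ~21 cm - Full HD on 5.5 inches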

So, how does this translate to the latest phablets?

                    Screen     Screen           Resolution        PPI   Best Viewing Distance
                    Diagonal   Size             in pixels               (landscape)

Samsung             15 cm      13.1 x 7.4 cm    3840 x 2160       746   11 cm
Galaxy Note 5 **    5.9 "      5.2 x 2.9 "      UHD                     4.3 "

Samsung             14.5 cm    12.6 x 7.1 cm    2560 x 1440       515   16 cm
Galaxy Note 4       5.7 "      5.0 x 2.8 "      WQHD                    6.3 "

Apple               14 cm      12.2 x 6.9 cm    1920 x 1080       401   21 cm
iPhone 6 Plus       5.5 "      4.8 x 2.7 "      Full HD                 8.3 "

Apple               12 cm      10.4 x 5.9 cm    1334 x 750        326   31 cm
iPhone 6            4.7 "      4.1 x 2.3 "                              12.2 "

       ** estimated release is mid-2015

As the table suggests, a cinema-quality movie on a 4K smartphone will only deliver a better user experience if viewed from a distance of less than 11 cm / 4.3 inches (landscape mode). That's really close in front of the eyes. Of course, zooming ever further into a hi-res photo by pinching the touchscreen remains an attractive side benefit of 4K content on 4K displays. The "best viewing distance" matches Qualcomm's affirmation that the average person can immediately appreciate the superior quality of a 344 ppi display if viewed from closer than 10 inches (25 cm). Note, however, that 344 ppi does not even correspond to Full HD resolution at these screen sizes.

Notwithstanding, the mobile future is 4K

In the mid nineties when browsing the Internet and Search were at their genesis, I remember contending with technology peers that the future of the medium would remain text-based, more than enough for the emerging knowledge-based, networked society at the time. My rationale was that pictures, let alone videos, would never break through because of bandwidth and cost considerations. How wrong could I have been! As the saying goes “a picture says more than 1000 words”. And if a picture actually sells more than 1000 words, the dynamics of retail and commerce will drive adoption for certain. So, with this in mind and unencumbered by conventional wisdom, 4K on smartphones are destined to break in and break through, even if their added value may not be apparent to the mobile user at first glance.