6G does not exist, yet it is already here – I

The Average Revenue per User (ARPU) in LATAM, as in other areas, has kept declining, in spite of an increase in network use (data transfer) and an increase in network performance. Image credit: Global Market Intelligence, 2019

I recently had an interesting conversation with some analysts looking at the implications of 6G (yes, 6 not 5). That in itself was surprising, since most of the time analysts are looking at the next quarter. Yet they were interested in what kind of impact 6G might have on telecom operators, telecom manufacturers and the semiconductor industry. Of course, looking that far down the road, they were also interested in understanding what type of services might require 6G.

I started the conversation by saying that 6G does not exist (and I guess most of you would agree), but then I added that it is already here in terms of “prodromes” (yes, using this word may suggest that I see 6G as a disease… but that is not, completely, the case). In other words, looking at past evolution and at the present situation, it may be possible to detect a few signs that can be used to make some predictions about 6G. Since this is more a crystal ball exercise than applied science, I would very much appreciate your thoughts on the matter.

I touched on the following aspects:

  • Lessons from the “G” evolution
  • Spectrum efficiency
  • Spectrum availability
  • Processing capacity in the devices
  • Power requirement
  • Network architecture
  • Services that may require/benefit from 6G

I’ll discuss them in this and in the following posts.

  1. Lessons from “G” evolution

If you look back, starting from 1G, each subsequent “G”, up to the 4th, was the result on the one hand of technology evolution and on the other of the need of wireless Telecom Operators to meet a growing demand. The market was expanding (more users/cellphones) and more network equipment was needed. Having a new technology that could decrease the per-element cost (with respect to capacity) was a great incentive to move from one “G” to the next. Additionally, the expansion of the market resulted in an increase in revenues.

The CAPEX needed to expand the network (mostly base stations and antenna sites) could be recovered in a relatively short time thanks to an expanding market (not an expanding ARPU: the Average Revenue per User was actually decreasing). Additionally, the OPEX was also decreasing (again, measured against capacity).

The expanding market meant more handsets sold, with increasing production volumes leading to decreasing prices. More than that, the expanding market fuelled innovation in handsets, with new models stimulating top buyers to upgrade and lower-cost models attracting new buyers. All in all, a virtuous spiral in which increased sales increased the attractiveness of wireless services (the “me too” effect).

It is in this “ensemble” that we can find the reason for the ten-year generation cycle. Every ten years a new “G” arrives on the market. New technology supports it, and economic reasons lead the equipment manufacturers (network and device) and Telecom Operators to ride (and push) the wave.

How is it that an exponential technology evolution does not result in an exponentially accelerating demise of the previous “G” in favour of the next one? Why is the ten-year cycle basically stable?

There are a few reasons why:

  • Exponential technology evolution does not translate into exponential market adoption
  • The market perception of “novelty” is logarithmic (you need something ten times more performant to perceive it as two times better); hence logarithmic perception combined with exponential evolution leads to linear adoption (see the sketch after this list)
  • New technology flanks the existing one (we still have 2G around as 5G is starting to be deployed)
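A minimal back-of-the-envelope way to see why this combination yields a roughly constant cycle (my own sketch, not a formal industry model): assume capacity grows exponentially over time, while perceived value grows with the logarithm of capacity, in line with the Weber-Fechner law of perception. The constants C_0, k and a below are illustrative placeholders, not figures from the post.

    % Illustrative sketch: exponential capability, logarithmic perception.
    % C_0 (initial capacity), k (growth rate) and a (perception scale)
    % are assumed constants.
    \[
      C(t) = C_0 \, e^{k t}
      \qquad
      P(t) = a \log C(t) = a \log C_0 + a k \, t
    \]
    % P(t) grows linearly in t: an exponentially improving technology is
    % perceived as a steady, linear improvement, which is consistent with
    % a roughly constant ten-year generation cycle.

In other words, the logarithm and the exponential cancel out, and what the market experiences is a steady drumbeat of improvement rather than an accelerating one.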

With the advent of 4G the landscape has changed. In many countries the market has saturated: the space for expansion has dwindled and there is only replacement. Also, the coverage provided by the network has reached 100% in most places (or at least 100% of the area that is of interest to users). A new generation will necessarily start by covering a smaller area, expanding over time. Hence the market (that is, each of us) will stick to the previous generation, since it is available everywhere. This has the nasty (for the Operators) implication that the new generation is rarely appealing enough to sustain a premium price.

The price of wireless services has declined everywhere in the last twenty years. The graphic shows the decline in the US over the last ten years. Image credit: Bureau of Labor Statistics

An Operator will need to invest money to deploy the new “G”, but its revenues will not increase. Why, then, would an Operator do that? Well, because it has no choice. The new generation has better performance and lower OPEX. If an Operator does not deploy the new “G”, someone else will, attracting customers and running the network at lower cost, thus becoming able to offer lower prices that will undercut the other Operators’ offers.

5G is a clear example of this new situation, and there is no reason to believe that 6G will be any different. Actually, the more capacity (performance) a given “G” makes available (and 4G provides plenty to most users in most situations), the less the market is willing to pay a premium for the next one. By 2030, 5G will be fully deployed and people will get capacity and performance that exceed their (wildest) needs.
A 6G providing 100 Gbps versus the 1 Gbps of 5G is unlikely to find a huge number of customers willing to pay a premium. What is likely to happen is that the “cost” of the new network will have to be “paid” by services, not by connectivity. This opens up a quite different scenario.
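To put numbers on this, take the earlier rule of thumb (something ten times more performant is perceived as roughly two times better) and apply it to the 100 Gbps versus 1 Gbps figures above; this is just an illustrative calculation under that rule:

    % Worked example under the "10x real = 2x perceived" rule of thumb;
    % the 100 Gbps and 1 Gbps figures are the ones quoted in the text.
    \[
      \frac{100\ \text{Gbps}}{1\ \text{Gbps}} = 100 = 10^{2}
      \quad\Longrightarrow\quad
      \text{perceived gain} \approx 2^{2} = 4\times
    \]
    % A hundredfold increase in raw capacity is perceived as only about
    % four times better: hardly enough to command a premium price on
    % connectivity alone.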


About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.