Post-Pandemic Scenarios – XXIV – Communication Fabric

Future networks will make use of pervasive artificial intelligence, from the edge to the core and on to the vertical sectors. This will redefine both the network architecture and the players, completely changing the landscape. Image credit: Zehui Xiong et al., Artificial Intelligence-Enabled Intelligent 6G Networks

The architecture of networks used to be a top-down affair, usually carried out by a Telco Operator (clearly, standardisation always played a major role). The overall structure was hierarchical because hierarchy greatly simplifies the rules that each piece of equipment has to follow (consider that, in the past, that equipment was electromechanical, and you can appreciate the importance of a hierarchical architecture).

The progressive penetration of electronics, and then computers, into network equipment, and later into the control of the overall network, has led to much less hierarchical networks and added plenty of flexibility (interestingly, those first steps were tagged as the “intelligent network” in the mid-1980s).

Wireless networks came to life in the last 40 years, 30 if we look at their massive deployment, and therefore benefitted from the presence of computers in their architecture (the whole mechanism of the Home Location Register and Visitor Location Register is one example of the use of computers in the design of the architecture). For those in the field, the 2G acronym, GSM, was known as the Great Software Monster, underlining both the massive use of software and the suspicious attitude of engineers who were facing this software avalanche for the first time.

Even before the massive deployment of wireless networks, a completely new type of network was being created, most of it as a virtual network overlaid on the telecommunications network(s): the Internet.

The Internet was designed as an aggregation of nets (Inter-Net), resulting in a very flat architecture where control (if one wants to call it control) was, and is, completely distributed. Each “net” needs to have a sort of awareness of its “local” context and, based on that, hand over packets to neighbouring nets, letting them take care of forwarding those packets towards the intended destination. It is a sort of architecture that would not be possible without computers, and it is an architecture that would not have been considered by Telecom Operators, since it relies on “chance” (nicely called “best effort”). Telecom Operators have always had QoS (Quality of Service) as their guiding beacon, and setting up an architecture that provided no guarantee whatsoever was simply out of the question. The fact is that, as technology progresses and network resources deliver more and more capacity with high reliability, “best effort” gets pretty close to, indistinguishable from, predetermined QoS. It does not just get very close: in several situations it becomes better, and it achieves that through an architecture that is cheaper.
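The hop-by-hop, best-effort handover described above can be sketched as a toy model (net names, the neighbour table and the TTL are purely illustrative, not any real protocol):

```python
import random

# Each "net" only knows its direct neighbours - its "local" context.
NEIGHBOURS = {
    "net-A": ["net-B", "net-C"],
    "net-B": ["net-A", "net-D"],
    "net-C": ["net-A", "net-D"],
    "net-D": ["net-B", "net-C"],
}

def forward(packet_dst, current, ttl=8):
    """Best-effort forwarding: each net hands the packet to a
    neighbour and lets it take care of the rest. Delivery is not
    guaranteed - the packet is dropped when the TTL runs out."""
    hops = [current]
    while current != packet_dst and ttl > 0:
        # No global view: hand over to the destination if it happens
        # to be adjacent, otherwise to any neighbour.
        nxt = (packet_dst if packet_dst in NEIGHBOURS[current]
               else random.choice(NEIGHBOURS[current]))
        hops.append(nxt)
        current, ttl = nxt, ttl - 1
    return hops if current == packet_dst else None  # None = dropped

path = forward("net-D", "net-A")  # e.g. ["net-A", "net-B", "net-D"]
```

No net ever computes the full route: each hop is a local decision, which is exactly why the result is "best effort" rather than guaranteed.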

Today, telecom networks are also based, to a good extent, on this virtual architecture: your smartphone uses the Internet Protocol for both voice and data communications.

A third element to take into consideration is that, starting in the 1980s, computers began to be connected, creating “computer networks” with their own communications protocols (like the token ring protocol proposed by IBM in 1984). Computer networks have evolved, and over time these three “networks”:

  • telecommunications network
  • Internet (virtual network)
  • computer network

converged into a “network of networks”, resulting, from the application point of view, in a single heterogeneous network.

This long preamble serves to point out that networks have been evolving towards flatter and flatter hierarchies, moving control towards the edges and, most importantly, opening networking control to applications (SDN and NFV are, to a certain extent, a Telecom solution to distribute control to the edge, although most Operators are using them from the core to make the network more flexible and less capital intensive, i.e. to save CAPEX by increasing the effectiveness of network resource use).

5G includes in its architecture several of the aspects mentioned. It is of course IP based, it offers the possibility to hand over session control to the applications at the edges, and it enables the selection of network facilities from the edge (network slicing). Again, these 5G features today are controlled by the Telecom Operators, which use them to offer better service and reduce their operating costs. Competition, in the second half of this decade, will make these features available to the edges (and to applications), thus furthering the ongoing commoditisation of the network.
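The idea of selecting network facilities from the edge can be sketched as follows. This is a deliberately simplified model: the slice names, the latency and bandwidth figures, and the selection function are all illustrative, not the actual 3GPP interfaces.

```python
from dataclasses import dataclass

# Illustrative slice templates, loosely inspired by the usual
# low-latency / broadband / massive-IoT service categories.
@dataclass
class Slice:
    name: str
    max_latency_ms: float      # latency the slice guarantees
    min_bandwidth_mbps: float  # bandwidth the slice guarantees

CATALOGUE = [
    Slice("low-latency", max_latency_ms=5, min_bandwidth_mbps=50),
    Slice("broadband", max_latency_ms=50, min_bandwidth_mbps=500),
    Slice("massive-iot", max_latency_ms=1000, min_bandwidth_mbps=1),
]

def select_slice(needed_latency_ms, needed_bandwidth_mbps):
    """Pick the first slice whose guarantees cover the application's
    needs - a stand-in for the negotiation the network would do."""
    for s in CATALOGUE:
        if (s.max_latency_ms <= needed_latency_ms
                and s.min_bandwidth_mbps >= needed_bandwidth_mbps):
            return s
    return None  # no suitable slice: fall back to best effort

chosen = select_slice(needed_latency_ms=10, needed_bandwidth_mbps=20)
```

The point of the sketch is the direction of control: it is the application at the edge stating its requirements, and the network matching them, rather than the Operator dictating a one-size-fits-all service.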

Given this trend, it should come as no surprise that 6G is being designed to enable:

  • full control from the edges;
  • creation of bottom up inter-networking (i.e. fostering the creation of connectivity by aggregating local networks into larger and larger clusters);
  • data based infrastructure, where data are seen as encapsulated entities belonging to data spaces and perceived in terms of their semantics;
  • increased awareness at the local level of what an application needs in terms of communication resources, considering these needs in a dynamic way. This points to an increased intelligence at the local level and the capability to create a global intelligence out of these massively distributed intelligences.

Intelligence is, therefore, the keyword for 6G, as well represented in the figure, which stacks:

  1. Intelligent sensing layer
  2. Data mining and Analytics layer
  3. Intelligent control layer
  4. Smart application layer

Notice how intelligence is distributed in all layers and networking is achieved through an interplay among all components.
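The interplay among the four layers can be sketched as a minimal pipeline, with each layer reduced to a single function. All function names, the decision rule and the threshold are illustrative, not part of any 6G specification.

```python
def intelligent_sensing(raw):
    # 1. Intelligent sensing layer: filter out unusable readings.
    return [r for r in raw if r is not None]

def data_mining(samples):
    # 2. Data mining and analytics layer: condense data into insight
    # (here, just the average observed load).
    return sum(samples) / len(samples) if samples else 0.0

def intelligent_control(avg_load):
    # 3. Intelligent control layer: decide on network resources.
    return "scale-up" if avg_load > 0.8 else "steady"

def smart_application(decision):
    # 4. Smart application layer: the application adapts to the decision.
    return f"app adapting: {decision}"

result = smart_application(
    intelligent_control(data_mining(intelligent_sensing([0.9, None, 0.95]))))
```

Even in this toy form, every layer embeds some decision-making, mirroring the point that intelligence is distributed across the whole stack rather than concentrated in one place.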

Interestingly, the network (at the edges) is being created autonomously by the very presence of the entities using the network: it is as if your smartphone, your car, a robot, or a drone generated a multitude of local area networks that dynamically cluster, creating a larger and larger communication fabric.
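The dynamic clustering of local networks into an ever larger fabric can be sketched with a union-find structure, where two local networks that come within reach of each other merge into one cluster. The device and network names are, of course, illustrative.

```python
class Fabric:
    """Toy model of local networks merging into larger clusters
    as their devices come within reach of one another."""
    def __init__(self):
        self.parent = {}  # each net points towards its cluster root

    def _root(self, net):
        # Find the cluster root, compressing the path as we go.
        self.parent.setdefault(net, net)
        while self.parent[net] != net:
            self.parent[net] = self.parent[self.parent[net]]
            net = self.parent[net]
        return net

    def link(self, net_a, net_b):
        """Two local networks discover each other and merge clusters."""
        self.parent[self._root(net_a)] = self._root(net_b)

    def same_cluster(self, net_a, net_b):
        return self._root(net_a) == self._root(net_b)

fabric = Fabric()
fabric.link("phone-LAN", "car-LAN")   # phone enters the car
fabric.link("car-LAN", "drone-LAN")   # car passes a delivery drone
```

After the two links, the phone, the car and the drone all belong to a single cluster: connectivity emerged bottom-up from local encounters, with no central entity planning the fabric.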

The way these entities will communicate with one another and act as nodes in the communications fabric will need to be defined, and may eventually result in an evolution of the Internet Protocol (IPv6) used today.

There is still quite a bit that needs to be discussed and defined, but the evolution trend is clear, assigning a much greater role to the edges and to the devices (hardware and software). This will create a major disruption in the telecom business.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.