3. Spectrum availability
Electromagnetic fields are pervasive, yet it took a long time to spot them. The first to detect that something was going on was probably Ørsted, back in 1820, noticing that a current flowing in a wire placed near a compass would deflect its needle. A few years later Faraday started to grasp what was going on, but it was Maxwell (in 1861/62) who laid the theoretical foundation of electromagnetic fields (in the process implicitly determining that all electromagnetic fields propagate at a constant speed independently of the reference frame, and that light is an electromagnetic field!). For a crash course on Maxwell’s equations take a look at the nice video clip below.
An electromagnetic field is characterised by its frequency – f – (the number of oscillations per second). An equivalent way of characterising an electromagnetic field is its wavelength – λ – (the spatial distance between two crests). Multiplying the two you get the speed of light – c –:

λ · f = c
Since the speed of light is a constant, when you talk about the frequency of an electromagnetic field you are also implicitly talking about its wavelength. Physicists dealing with optics prefer to talk about wavelengths; those dealing with electricity prefer to talk about frequency, but it is exactly the same thing. Engineers usually talk about frequency. This is what we do when talking about wireless systems and the spectrum they use (a range of frequencies).
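To get a feel for the numbers, here is a minimal sketch in Python (the function name `wavelength` is my own choice) converting a frequency into its wavelength via λ = c / f:

```python
# Speed of light in vacuum, in metres per second
C = 299_792_458

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in Hz (lambda = c / f)."""
    return C / frequency_hz

# A few of the frequencies mentioned in this series
for label, f in [("50 Hz mains", 50),
                 ("900 MHz mobile", 900e6),
                 ("28 GHz mm wave", 28e9),
                 ("1 THz (6G research)", 1e12)]:
    print(f"{label}: {wavelength(f):.4g} m")
```

Note how the wavelength drops from thousands of kilometres at mains frequency to about 33 cm at 900 MHz and roughly a centimetre at 28 GHz – which is why these are called mm waves, and why small obstacles start to matter at those frequencies.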
What is important to understand is that electromagnetic field frequencies form a continuum. As an example, we use 50 (or 60) Hz to power our homes (if the frequency is 0 you have DC, Direct Current, a static electromagnetic field).
We can generate an electromagnetic field of a given frequency using an oscillator; an antenna is an oscillator that propagates a field in the air. An antenna also doubles as a detector of an electromagnetic field, converting it into a (tiny) electrical current oscillating at the same frequency. An electronic circuit can then amplify this “signal” and process it.
This is the first important point when using electromagnetic fields in communications: you need an electronic circuit that can on the one hand generate a field at the desired frequency and on the other hand “process” that frequency. The higher the frequency, the trickier the electronic circuit. That is the reason why the evolution of electronics (Moore’s law) has made it possible to deal with higher and higher frequencies. The ones we are going to use in 5G (28/75 GHz, also called mm waves) require much more sophisticated electronics that were not available (at an affordable cost) just 10 years ago (today’s 5G deployments are actually using lower frequencies, comparable to the ones used by 4G). With the expected evolution over the next ten years, researchers are confident that it will be feasible to deal with even higher frequencies, over 100 GHz and up to 1 THz (1,000 GHz). Hence the expectation that 6G will be able to use those higher frequencies. Since they have not been used before, they are available for new applications: new spectrum for free!
Higher frequencies are good because you can pack more bits per second (see the previous post in this series); however, the higher the frequency, the bigger the propagation issues. Intuitively, the higher the frequency the shorter the wavelength, and this means that smaller obstacles will stop the propagation. You see that in sea waves. If a wave hits a small rock it just goes around it and keeps going undisturbed. On the other hand, if the rock is big (like an island) the waves are blocked and you have a calm sea on the other side of the obstacle. In radio communications there is a sweet spot balancing propagation with bit-carrying capacity, roughly between 900 MHz and 3 GHz. Go below this and you cannot carry that much data; go above it and propagation constraints force the use of smaller and smaller cells (and that implies higher deployment cost).
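One standard way to put a number on the frequency penalty is the free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c). This is only a sketch of the free-space component (it ignores obstacles, rain and antenna gains, and the function name is mine), comparing 900 MHz with 28 GHz over the same distance:

```python
import math

C = 299_792_458  # speed of light in vacuum, m/s

def fspl_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / C))

# The same 1 km link at two carrier frequencies
low = fspl_db(1_000, 900e6)   # ~91.5 dB
high = fspl_db(1_000, 28e9)   # ~121.4 dB
print(f"900 MHz: {low:.1f} dB, 28 GHz: {high:.1f} dB, extra loss: {high - low:.1f} dB")
```

The roughly 30 dB gap (a factor of about 1,000 in power) is purely geometric, assuming identical fixed-gain antennas; in real deployments mm waves additionally suffer blocking from walls, foliage and even the human body, which is what forces cells to shrink as frequency grows.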
In radio communications, as in optical communications – both are based on electromagnetic fields – we actually use a range of nearby frequencies, a frequency band or “spectrum”. As an example, in Italy 5G was given the 3.6-3.8 GHz band, split into two slots of 80 MHz and two slots of 20 MHz (80+80+20+20 = 200 = 3,800-3,600). Now, if you take one of the big slots, 80 MHz, and you squeeze in 6 bits per second per Hz, you get a capacity of 480 Mbps, way lower than the Gbps the marketing is claiming. To reach that sort of capacity you need to use mm waves, i.e. the 26-28 GHz or higher frequencies where more spectrum can be allocated. The problem, as I mentioned, is that at those frequencies propagation is bad and you need to use very small cells (in the order of a hundred meters, versus km in a 4G network).
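The arithmetic behind that 480 Mbps figure is simply bandwidth times spectral efficiency. A minimal sketch (the function name is mine, and 6 bit/s/Hz is the illustrative efficiency used above, not a guaranteed real-world value):

```python
def link_capacity_bps(bandwidth_hz: float, bits_per_s_per_hz: float) -> float:
    """Back-of-the-envelope capacity: bandwidth times spectral efficiency."""
    return bandwidth_hz * bits_per_s_per_hz

# The Italian 3.6-3.8 GHz example: an 80 MHz slot at 6 bit/s/Hz
print(link_capacity_bps(80e6, 6) / 1e6, "Mbps")   # -> 480.0 Mbps

# To reach multi-Gbps at the same efficiency you need far wider bands,
# e.g. a hypothetical 400 MHz mm-wave allocation:
print(link_capacity_bps(400e6, 6) / 1e9, "Gbps")  # -> 2.4 Gbps
```

This makes clear why the Gbps claims hinge on mm waves: at a fixed spectral efficiency, the only way to multiply capacity is to multiply the allocated bandwidth.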
Propagation and cell size is the second crucial aspect as we increase frequency. In 6G, talking about THz means plenty of capacity but incredibly small cells, measured in meters! A classical network based on fixed cells simply would not work from an economic affordability standpoint. What you need is a paradigm change, with communication taking place among “users” most of the time and only once in a while some of these users’ devices connecting to the “big network”. You no longer have fixed cells but a mesh of moving cells continuously interacting with one another, creating a communication fabric that connects when needed to the communication infrastructure. I’ll address this in a future post.
What is also clear is that the management of higher frequencies, and of multiple bands at once, places very demanding requirements on the transmission points and in particular on the terminals.