On May 30th, at its Computex event in Taipei, Intel unveiled its latest generation of microprocessors, the i9 family, whose best-in-class chip has 18 cores and 36 threads, making it the most powerful microprocessor (as of today…) and beating the AMD microprocessor announced in April 2017, which has only 16 cores and 32 threads.
Are we still on Moore's path? Well, yes and no.
Over time it has become trickier and trickier to pinpoint the performance of a microprocessor. It used to be easy to compare them by looking at the clock speed, in MHz (the very first ones had clocks ticking in kHz…), and from time to time we had to pay attention to the number of bits (the first ones were 4-bit; now we have reached 64 bits).
In the last ten years, pushing the GHz forward has no longer been the name of the game. We had microprocessors clocked at 3 GHz a decade ago, and we still have microprocessors clocked in the 3 GHz range today, the i9 series included. The problem with increasing the clock speed is that heat dissipation grows so much that it would melt the chip if we pushed it much further. Most recent chips operate in the 3 GHz range, and a few can boost the clock to 4 GHz for limited periods.
So the absolute speed has not improved. Engineers have found a workaround to increase microprocessor performance: squeezing several processors into the same chip. This is called multicore. The chip contains several basic microprocessors (cores) and can run them in parallel.
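As a small illustrative sketch (in Python; the figures in the comment assume the 18-core i9 mentioned above), the operating system reports a chip's parallel capacity as the number of hardware threads it exposes:

```python
import os

# Number of logical processors (hardware threads) the OS sees.
# On a hypothetical 18-core / 36-thread i9 this would report 36,
# since Hyper-Threading lets each core expose two hardware threads.
logical_cpus = os.cpu_count()
print(f"Logical processors available: {logical_cpus}")
```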
It is like saying that the Amazon river has the highest capacity of any river (discharging on average 209,000 cubic metres per second), although it flows far more slowly than a mountain creek.
To take advantage of this parallel processing capacity you need software able to run multiple threads, and a problem that can be addressed in parallel. This is the case in several situations (like weather forecasting, big data analyses, video rendering), but not in several others where the computation is strictly sequential, as the sketch below illustrates.
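Here is a minimal, hypothetical sketch of that distinction (in Python, with dummy stand-in workloads rather than real rendering code): an embarrassingly parallel task can be farmed out to all cores with a process pool, while a strictly sequential one gains nothing from extra cores because each step needs the previous result.

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_id: int) -> int:
    """Stand-in for an independent unit of work (e.g. rendering one video frame)."""
    return sum(i * i for i in range(100_000))  # dummy computation

def next_state(state: int) -> int:
    """Stand-in for a step that depends on the previous result."""
    return state * 31 % 1_000_003

if __name__ == "__main__":
    # Parallelizable: the frames are independent of one another,
    # so they can be spread across all available cores at once.
    with ProcessPoolExecutor() as pool:
        frames = list(pool.map(render_frame, range(36)))

    # Strictly sequential: each iteration needs the previous one,
    # so extra cores cannot help, however many are available.
    state = 1
    for _ in range(36):
        state = next_state(state)
```

Roughly speaking, on an N-core chip the first loop can finish up to N times faster, while the second takes the same time no matter how many cores the chip has.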
Given the increase in the production and editing of video streams, where 4K is becoming more and more the norm, the availability of several cores really boosts performance, as it does in video gaming for the rendering of ever more complex images. This is likely Intel's target, also taking into account the probable use of 4K quality in virtual reality and augmented reality in the next decade.