
Getting it wrong, and knowing it

Google’s Sycamore quantum computer. Image credit: Rocco Ceselin/Google

Quantum computers have been around for many years now, not in the field but in the headlines of newspapers and magazines. They keep appearing there because they capture our imagination; they are not in the field because, for most purposes, they don't work yet!

Well, this is not an entirely correct statement: as a matter of fact there are a number of quantum computers (like D-Wave, IBM Quantum System One, Google Sycamore…), but they are either not exactly quantum computers or they can be applied only in very narrow fields. Even the recent claims of having reached quantum supremacy refer to very specific "computations".

The availability of a general-purpose quantum computer that can replace (and far outperform) a classical computer is still in the future. In other words, if you need to compute 2+2 and be sure that the result is 4, you are better off using a classical computer.

The strength of quantum computers, the possibility of addressing a huge set of computations at the same time, is also a problem, since you have to be able to pick out the correct result among many. This requires maintaining coherence across the parallel computations and correcting the errors that are bound to occur.

Google has just shown that its quantum computer, Sycamore, can detect and fix computational errors, a crucial step toward achieving large-scale quantum computation. However, Sycamore is also making more errors than it corrects (the error correction is part of the quantum processing and as such is itself subject to error… a vicious circle).

This, anyhow, is a very important step forward since it demonstrates that a quantum computer can “know” when and where it makes an error.

The approach followed by the Google researchers has long been part of quantum computing theory: clustering several physical qubits (a qubit can take any, actually many, values between 0 and 1, whilst a bit can be either 0 or 1) into a single logical qubit. They have shown that by adding more physical qubits the error rate at the logical-qubit level drops exponentially, because it becomes possible to detect errors in the physical qubits by observing the logical qubit. On the other hand, and this is the problem, the more physical qubits are associated with a logical qubit, the more errors are encountered. It is a sort of chicken-and-egg problem: the more qubits I have, the more chances I have to detect an error, but at the same time there are more chances of errors occurring.
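The intuition behind "more physical qubits means exponentially fewer logical errors" can be sketched with a classical analogy, a simple repetition code with majority-vote decoding. This is only an illustration of the redundancy principle, not Google's actual quantum error-correction scheme, and the physical error rate `p` below is an arbitrary value chosen for the example:

```python
from math import comb

def logical_error_rate(p: float, n: int) -> float:
    """Probability that majority-vote decoding of an n-copy
    repetition code fails, given per-copy error rate p (n odd).

    Decoding fails when more than half of the n copies are corrupted,
    so we sum the binomial probabilities of k errors for k > n/2.
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Illustrative physical error rate (assumed, not a measured value)
p = 0.1
for n in (1, 3, 5, 7):
    print(f"{n} copies -> logical error rate {logical_error_rate(p, n):.6f}")
```

Run with `p = 0.1`, the logical error rate falls from 0.1 (one copy) to about 0.028 (three copies) to under 0.009 (five copies), so each added layer of redundancy suppresses errors further. The chicken-and-egg problem in the text also shows up here: if `p` is too high (above 0.5 in this toy model), adding more copies makes the logical error rate *worse*, because the extra components introduce more errors than the voting can fix.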

This announcement by Google indicates at the same time that we are making progress and that we aren't "there" yet.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.