Some time ago I posted the news of work at Carnegie Mellon University on using your smartphone to collect data that could lead to a probable diagnosis of a Covid-19 infection. The software, running on a Carnegie Mellon server, asked a number of questions (your temperature, fatigue…) and then asked you to speak certain sequences of letters and numbers, applying signal processing to the recordings to evaluate the probability of an ongoing infection.
Now MIT researchers have published a paper reporting on their study to develop an artificial-intelligence-based system that detects Covid-19 infection by analysing a person's cough. The system was trained on recordings from tens of thousands of people and leveraged a previous study that detected early-stage Alzheimer's from analyses of facial and speech characteristics.
In the paper they report an accuracy that matches, and in some cases exceeds, that of biochemical tests. If this is confirmed on larger samples, it could open the door to massive and, most importantly, continuous testing, which would surely help in tracing and containment. The current results are impressive:
- correct identification of 98.5% of people with Covid-19 (1.5% false negatives)
- correct identification of 94.2% of people not infected (5.8% false positives)
- correct identification of 100% of people with Covid-19 among asymptomatic cases
- correct identification of 83.2% of Covid-19-free people among asymptomatic cases (16.8% false positives)
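To see how these percentages relate to one another, here is a small sketch in Python. The counts used are hypothetical, chosen only to reproduce the rates quoted above; they are not the study's actual numbers.

```python
# Hypothetical confusion-matrix counts illustrating the quoted rates.
# tp/fn: infected people flagged / missed; tn/fp: healthy people cleared / flagged.

def rates(tp, fn, tn, fp):
    """Return (sensitivity, specificity, fnr, fpr) as fractions."""
    sensitivity = tp / (tp + fn)   # share of infected people correctly flagged
    specificity = tn / (tn + fp)   # share of healthy people correctly cleared
    return sensitivity, specificity, 1 - sensitivity, 1 - specificity

# e.g. 985 of 1000 infected flagged, 942 of 1000 healthy cleared
sens, spec, fnr, fpr = rates(tp=985, fn=15, tn=942, fp=58)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
print(f"false negatives {fnr:.1%}, false positives {fpr:.1%}")
```

The false-negative rate is simply one minus the sensitivity, and likewise for false positives and specificity, which is why each bullet pairs the two numbers.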
The basic assumption is that a person with Covid-19 coughs differently from a person who is not infected, and that signal processing can identify patterns that in turn can be analysed through artificial intelligence. The nice thing is that the more samples are taken, the more accurate the system becomes. Add to this that the cost of smartphone/AI testing is basically zero.
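As a rough illustration of the signal-processing step, the sketch below turns an audio waveform into a time-frequency feature matrix of the kind a neural network could classify. The window and hop sizes are illustrative choices, not the parameters used in the MIT study, and the input here is synthetic noise standing in for a recorded cough.

```python
import numpy as np

def log_spectrogram(signal, frame_len=512, hop=256):
    """Split the signal into overlapping windowed frames and take the
    log-magnitude FFT of each: one row per time frame."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spectrum)  # compress the dynamic range

# Stand-in for a recorded cough: one second of noise at 16 kHz
rng = np.random.default_rng(0)
features = log_spectrogram(rng.standard_normal(16000))
print(features.shape)  # (time frames, frequency bins)
```

A matrix like this is the "pattern" the text refers to: the classifier never sees raw audio, only a compact spectral representation of it.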
A good explanation is given in the attached clip, which also covers the hurdles and limits of this approach.
What is also notable is the use of our smartphone as a medical “testing” device. Its value lies both in its ubiquitous presence and in the possibility of testing several times a day. More than that: the software can learn the specificity of a person, creating a digital signature of their behaviour. That digital signature then becomes the baseline against which differences can be spotted. This is the realm of personalised medicine, something that will take the upper hand by the end of this decade, integrating objective specifics, like the genome, proteome, and metabolome, with the observed phenotype (including behaviour).
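A hypothetical sketch of the personal-baseline idea: track a per-user score over time (say, a daily cough score produced by the model) and flag days that deviate strongly from that user's own history. The three-standard-deviation threshold is an arbitrary choice for illustration.

```python
import statistics

def deviates(history, today, threshold=3.0):
    """True if today's score is an outlier versus the user's own baseline."""
    mean = statistics.fmean(history)
    std = statistics.stdev(history)
    return abs(today - mean) > threshold * std

# A week of hypothetical daily scores for one user
baseline = [0.11, 0.09, 0.10, 0.12, 0.10, 0.11, 0.09]
print(deviates(baseline, 0.10))  # a typical day: no flag
print(deviates(baseline, 0.55))  # a sharp departure from the baseline: flagged
```

The point is that the comparison is against the person's own history, not a population average, which is exactly what makes the approach "personalised".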
Additionally, the smartphone can provide the privacy that is so crucial in the healthcare sector.