Image recognition has progressed in ways that were completely unexpected just 20 years ago. Back then we kept saying that computers were no good at understanding images; that was humans' turf.
This is no longer so. The camera in our phones recognises a smiling face, and the photo management application on our computer can associate faces to people much better than we can (I have experienced several times how my iPhoto app can spot and recognise me as a little kid -in grainy black-and-white photos of the time- much better than I can myself). It is not that software has grown to emulate (and exceed) our brain; it simply uses a completely different approach to image recognition, and that approach works better than ours.
As medical radiography, CAT and fMRI have gone digital, the amount of digital data has ballooned, and this has allowed the training of machines using various forms of Artificial Intelligence. Automatic systems for analysing medical digital images have grown more and more effective over the last 2 years.
Google has just announced the availability of LYNA -Lymph Node Assistant- based on an open-source image recognition model (Inception-v3) using deep learning technologies. Through training, Google has perfected the system to achieve 99% accuracy in detecting malignancy, much better than the best doctors in the field.
I see this as just a first (amazing) step towards tomorrow's healthcare, where software and robots will play a major role.