Computer Beats a Bunch of Dermatologists at Diagnosing Melanoma
Artificial intelligence may be a new weapon in the fight against melanoma.
Which of these two lesions is a melanoma?
Telling the difference between a benign mole and a melanoma requires significant training.
And even the best dermatologists make mistakes. But rapidly developing research in the field of computer vision may give us a new weapon in the fight against this deadly disease. The most compelling data to date comes from this paper, appearing in the Annals of Oncology.
Let’s go through the basics of this paper before we get to the really interesting implications for the future.
Researchers curated a set of 100 dermoscopy images of suspect lesions – like the ones I showed you earlier. These were deliberately chosen to be difficult to diagnose. Of the 100 images, 20 were melanomas and 80 were benign melanocytic nevi. The pictures were sent to 58 dermatologists, who made their diagnoses.
But there was a 59th reader of the images: an algorithm called a convolutional neural network. A convolutional neural network is a type of machine-learning algorithm particularly suited to image recognition – it’s what powers much of the optical-character-recognition and facial-recognition software bouncing around Silicon Valley and, presumably, the optic systems of your standard Cylon.
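To make the idea a little more concrete, here is a toy sketch of a convolutional image classifier in PyTorch. It is purely illustrative and not the network from the study (which used a far larger, pretrained architecture); the class name, layer sizes, and image dimensions are all invented for the example.

```python
# Illustrative only: a tiny convolutional classifier for dermoscopy-style
# images (benign nevus vs. melanoma). Everything here is made up for the
# sketch; the study's network was much larger and pretrained.
import torch
import torch.nn as nn

class TinyLesionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine them into larger patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),  # two outputs: benign vs. melanoma
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One fake 224x224 RGB dermoscopy image, batch size 1
image = torch.randn(1, 3, 224, 224)
logits = TinyLesionNet()(image)
probs = torch.softmax(logits, dim=1)  # e.g. tensor([[0.48, 0.52]])
```

The point of the convolutional layers is that the network learns its own visual features (border irregularity, pigment texture, and so on) directly from labeled images, rather than being told what to look for.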
In any case, applying convolutional neural networks to medical image classification is, pardon the pun, a no-brainer.
How’d the computer do?
Alan Turing would be proud.
The algorithm missed only 1 of the 20 melanomas. The average dermatologist missed 3. The neural net had a bit of an itchy trigger finger, though, flagging 29 of the 80 benign lesions as melanomas compared with 23, on average, for the humans. Here’s one of the lesions it got wrong, courtesy of the lead author Dr. Holger Haenssle (figure 1).
Overall, only 13 of the 58 dermatologists had better diagnostic performance than the neural network.
Basically, the neural network detected more melanomas, at the price of a few more false positives, as you can see here (figure 2).
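If you want to put those head counts on the usual screening-test scale, the back-of-the-envelope arithmetic looks like this (using the rounded figures quoted above, not the study’s exact averages):

```python
# Back-of-the-envelope sensitivity/specificity from the counts quoted above.
melanomas, benign = 20, 80

cnn_missed, cnn_false_alarms = 1, 29
derm_missed, derm_false_alarms = 3, 23   # average dermatologist

cnn_sensitivity = (melanomas - cnn_missed) / melanomas     # 19/20 = 0.95
cnn_specificity = (benign - cnn_false_alarms) / benign     # 51/80 ~ 0.64

derm_sensitivity = (melanomas - derm_missed) / melanomas   # 17/20 = 0.85
derm_specificity = (benign - derm_false_alarms) / benign   # 57/80 ~ 0.71

print(f"CNN:           sensitivity {cnn_sensitivity:.0%}, specificity {cnn_specificity:.0%}")
print(f"Dermatologist: sensitivity {derm_sensitivity:.0%}, specificity {derm_specificity:.0%}")
```

In other words, higher sensitivity bought with lower specificity.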
That trade-off is actually ideal for a screening test, which is where I think algorithms like this really shine. Convolutional neural networks are expensive to train, but once trained they can be widely deployed and run on no more computational power than your smartphone already has. This means that primary-care or even self-directed screening of suspicious skin lesions may become a reality soon.
But let’s not all board the hype train right away. The test set was only 100 images, with limited representation of the full variety of lesions and skin types. In a real-world setting, the algorithm’s performance may suffer.
Still, I think it’s clear that algorithms like these are the next wave of medical advancement, much as ultrasound, CT scanning, and genetic testing were before them. Provided the machines don’t, you know, rise up and destroy us all, these brains in silico are coming soon to a clinic near you.