
Artificial intelligence in dermatology

Authors: Manu Goyal, Research Scholar, Visual Computing Lab, Manchester Metropolitan University, Manchester, United Kingdom; A/Prof Moi Hoon Yap, Visual Computing Lab, Manchester Metropolitan University, Manchester, United Kingdom. DermNet NZ Editor in Chief: Adjunct A/Prof Amanda Oakley, Dermatologist, Hamilton, New Zealand. Copy edited by Gus Mitchell. July 2019.


What is artificial intelligence?

Artificial intelligence (AI) employs computer systems to perform tasks that normally require human intelligence, such as speech recognition and visual perception. AI draws on technologies and algorithms such as robotics, machine learning, and the Internet to imitate the workings of the human brain. With sufficient computational power and storage capacity, AI has the potential to outperform humans at specific tasks.

In medicine, computer vision algorithms have the potential to recognise abnormalities and diseases by evaluating colour, shape, and patterns [1].

Examples of AI applications include:

  • Technology to enable self-driving cars
  • Speech recognition algorithms to interact with humans, such as SIRI, Alexa, and Google Assistant
  • Language translation algorithms
  • Identification of dog breeds; one algorithm has been reported to have achieved an accuracy of more than 96%
  • Prediction of user preferences such as a list of movies or targeted advertisements
  • Prediction of periods of high demand for a taxi or a flexible workforce [2,3].

How are artificial intelligence and deep learning used in medicine?

Deep learning computer algorithms are based on convolutional neural networks. A neural network is a computational model inspired by the workings of the biological brain: a large number of connected nodes, called artificial neurons, loosely mimic biological neurons. These systems learn the features of an object by evaluating manually labelled data, such as ‘dog’ or ‘no dog’. The learned features can then be used to infer the nature of a new image.
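The idea of learning from manually labelled examples can be illustrated with a toy model far simpler than a convolutional network: a single artificial neuron that adjusts its connection weights whenever it misclassifies a labelled example. The feature vectors and labels below are invented purely for the sketch.

```python
# Toy illustration (not a real image model): one artificial neuron
# learns to separate two classes from manually labelled examples,
# mirroring the 'dog' / 'no dog' supervision described above.

def train_neuron(samples, labels, lr=0.1, epochs=50):
    """Perceptron-style training: nudge the weights whenever a
    labelled example is misclassified."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred  # zero when the example is already correct
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    """Infer the class of a new, unseen example from learned weights."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Hypothetical two-number 'feature vectors'; label 1 = dog, 0 = no dog
samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
w, b = train_neuron(samples, labels)
print(predict(w, b, [0.85, 0.9]))  # classify a new example
```

A convolutional network applies the same learn-from-labels principle with millions of weights and learns its own features from raw pixels rather than hand-made feature vectors.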

Images are widely used to diagnose injury and disease and in studies of human anatomy and physiology. Advanced medical imaging techniques include magnetic resonance imaging (MRI), dual-energy X-ray absorptiometry, ultrasonography, and computed tomography [4–10].

In medical imaging, convolutional neural networks are used to classify images as ‘abnormal’ or ‘normal’. They are trained on large labelled databases of medical images and can match or exceed human vision in detecting abnormalities such as:

  • Breast cancer
  • Brain tumour
  • Skin cancer
  • Alzheimer's disease [11,12].

These algorithms will be scalable to multiple devices, platforms, and operating systems, reducing their cost and increasing their availability for diagnosis and research. Universities, governments, and research funding agencies have recognised the opportunities to improve early diagnosis of diseases such as cancer, heart disease, diabetes, and dementia, and are investing heavily in the sector.

AI techniques approved by the US Food and Drug Administration (FDA) for clinical use by September 2018 include products to:

  • Identify signs of diabetic retinopathy in retinal images
  • Recognise signs of stroke in CT scans
  • Visualise blood flow in the heart
  • Detect skin cancer from clinical images captured using a mobile app.

How is artificial intelligence used in skin cancer diagnosis?

According to the Skin Cancer Foundation, more people are diagnosed with skin cancer each year in the US than with all other cancers combined [13]. Skin cancers are commonly classified as melanoma or non-melanoma skin cancer (the keratinocytic cancers, basal cell carcinoma, and squamous cell carcinoma). Skin cancers can be difficult to distinguish from common benign skin lesions, and the appearance of melanoma is especially variable. This means that:

  • Skin cancers can be missed because they are thought to be harmless
  • Large numbers of harmless lesions are unnecessarily surgically excised so as not to miss a potentially dangerous cancer.

Dermatologists examine skin lesions by visual inspection and dermoscopy. They use their experience in pattern recognition to determine which skin lesions should be excised for diagnosis or treatment. In recent years, there has been a huge interest in using AI algorithms to aid lesion diagnosis. There are a number of datasets of skin lesions that are publicly available to aid AI research.

Researchers at Stanford University achieved dermatologist-level classification of skin cancers with deep learning algorithms trained on a dataset of 129,450 clinical images covering 2,032 skin diseases [14]. When they tested the algorithm against 21 board-certified dermatologists, its classification performance was on a par with the experts.

The International Skin Imaging Collaboration (ISIC) offers an extensive public dataset, which in September 2018 held 23,906 digital dermoscopic images of more than 18 types of skin lesion. Since 2016, ISIC has also run a yearly challenge, ‘Skin Lesion Analysis Towards Melanoma Detection’. The 2017 winner achieved more than 98% accuracy in distinguishing melanomas from benign moles [15]. More categories of skin lesion, including basal cell carcinoma and actinic keratosis, were added in the 2018 challenge. Accuracy can be expected to improve, and further categories of skin lesion to be added, with each year’s competition.

Machine learning algorithms for skin lesions

To create a new machine learning skin cancer algorithm, each type of skin lesion is assigned a class. At its simplest, there may be just two classes, for example, ‘benign’ and ‘malignant’, or ‘naevus’ and ‘melanoma’. More sophisticated algorithms can assess multiple classes.

Before being tested on a new image, a deep learning algorithm is trained on a large number of images from each class. The process involves three main stages (Figure 1).

Stage 1

In stage 1, the algorithm is fed with digital macroscopic or dermoscopic images labelled with the ground truth. The ground truth in this context is the lesion diagnosis, which is assigned by an expert dermatologist or is the result of histopathological examination.

Figure 1. Overview of training of different types of skin lesions with the help of deep learning

Stage 2

In stage 2, convolutional layers extract the feature map from the images. A feature map represents data with multiple levels of abstraction.

  • Initial convolutional layers extract low-level features like edges, corners, and shapes.
  • Later convolutional layers extract high-level features to detect the type of skin lesion (Figure 2).

Figure 2. Typical feature maps learned using convolutional neural networks

Stage 3

In stage 3, the feature maps are used by the machine learning classifier for pattern recognition of different classes of skin lesion. The deep learning algorithm can now be used to classify a new image (Figure 3).

Figure 3. Inference produced by deep learning algorithms on a new image of a skin lesion
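Stages 2 and 3 can be sketched, on a very small scale, in plain code. The example below runs a single hand-crafted edge-detecting kernel over a toy ‘image’ to produce a feature map (stage 2), then reduces the map to one summary number a classifier could use (stage 3). A real deep learning system learns thousands of such kernels, plus the classifier itself, automatically; the kernel, image, and summary score here are invented for illustration.

```python
# Stage 2 in miniature: slide a small kernel over an image to
# build a feature map (one value per image position).
def convolve2d(image, kernel):
    """Valid convolution of a 2D grid with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel, like the low-level features early layers learn
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]

# Toy 'image': a dark lesion (1s) against lighter skin (0s)
image = [[0, 0, 1, 1, 0, 0],
         [0, 1, 1, 1, 1, 0],
         [0, 1, 1, 1, 1, 0],
         [0, 0, 1, 1, 0, 0]]

feature_map = convolve2d(image, edge_kernel)

# Stage 3 stand-in: a trained classifier would weigh many feature
# maps; here we simply total the edge response as a summary score.
edge_strength = sum(abs(v) for row in feature_map for v in row)
print(edge_strength)
```

In a convolutional neural network the kernel values are not hand-picked as above but are themselves learned during stage 1 training, which is what lets later layers build high-level, lesion-specific features out of low-level ones.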

ABCD criteria

The clinical ABCD criteria used by non-experts to screen pigmented skin lesions are Asymmetry, Border irregularity, Colour variation, and Diameter over 6 mm (Figure 4).

A: The asymmetry property checks whether the two halves of the skin lesion match in terms of colour and shape. The lesion is divided into two halves along its long axis and again along its short axis. Melanoma is likely to have an asymmetrical appearance.

B: The border property assesses whether the edges of the skin lesion are smooth and well defined. Skin cancers tend to have irregular borders.

C: The colour property assesses the number and variability of colours throughout the skin lesion. Melanoma and pigmented basal cell carcinoma often include shades of 3–6 colours (black, tan, dark brown, grey, blue, red and white), whereas naevi and freckles tend to have only one or two colours, which are symmetrically distributed.

D: The diameter property measures the approximate diameter of the skin lesion. The diameter of malignant skin lesions is generally greater than 6 mm (the size of a pencil eraser).

Figure 4. The ABCD rule for skin cancer
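One way the ABCD criteria can be turned into numbers a machine learning classifier can use is sketched below. It operates on a toy binary lesion mask, and every measure (flip-based asymmetry, compactness-based border score, colour count, pixel-width diameter) and scale is an illustrative simplification invented for this sketch, not a validated clinical tool.

```python
import math

def abcd_features(mask, colours_in_lesion, mm_per_pixel=1.0):
    """Toy ABCD feature extractor over a binary mask (1 = lesion pixel)."""
    h, w = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)

    # A: asymmetry - pixels that fail to match when the mask is
    # flipped about its vertical axis, relative to lesion area
    mismatch = sum(mask[i][j] != mask[i][w - 1 - j]
                   for i in range(h) for j in range(w))
    asymmetry = mismatch / max(area, 1)

    # B: border irregularity - perimeter-to-area compactness; the
    # more ragged the outline, the larger this number becomes
    perimeter = sum(
        1 for i in range(h) for j in range(w) if mask[i][j]
        and (i == 0 or i == h - 1 or j == 0 or j == w - 1
             or not (mask[i - 1][j] and mask[i + 1][j]
                     and mask[i][j - 1] and mask[i][j + 1])))
    border = perimeter ** 2 / (4 * math.pi * max(area, 1))

    # C: colour - simply the number of distinct colours observed
    colour = len(set(colours_in_lesion))

    # D: diameter - widest horizontal extent of the mask, in mm
    cols = [j for i in range(h) for j in range(w) if mask[i][j]]
    diameter = (max(cols) - min(cols) + 1) * mm_per_pixel if cols else 0

    return asymmetry, border, colour, diameter

# Toy lesion mask and an invented colour list for it
mask = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 1, 1, 1]]
a, b, c, d = abcd_features(mask, ['brown', 'black', 'blue'], mm_per_pixel=2.0)
```

A classifier can then be trained on such feature vectors instead of raw pixels, which is essentially the approach the next paragraph describes.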

Yang et al. proposed adapting the ABCD rule for image processing and machine learning algorithms [16]. They compared the performance of their system with that of doctors (general, junior, and expert) and deep learning algorithms on a testing dataset of skin lesions (Table 1), inviting two doctors from each category to perform the task.

They trained their system on the SD-198 dataset of 6,584 clinical images from 198 different lesion categories. In this work, they extracted low-level features from three visual components: texture, colours, and borders. Yang’s Computer-Assisted Device (CAD) performed better than VGGNet and ResNet and was comparable with junior doctors. However, the expert dermatologists were significantly superior to the system.

Table 1. Performance evaluation of computerised methods and doctors on a testing set of 3,292 images

Other machine learning research on skin cancer

IBM is also working on an AI tool, Watson, to analyse skin lesion images for the detection of melanoma. The tool uses six key points, similar to the ABCD criteria, to determine the probability of melanoma: colour, border irregularity, asymmetry level, globules and network, similarity to skin lesion images in IBM’s database, and a melanoma score [17].

MetaOptima Technology Inc. has launched the Dermengine platform to provide a teledermatology service. Their Visual Search tool compares a user-submitted image with similar images in a database of thousands of pathology-labelled images gathered from expert dermatologists around the world. Deep learning techniques are used to search for related images based on visual features such as colour, shape, and pattern [18].

What is the future of artificial intelligence and skin cancer diagnosis?

Research involving AI is making encouraging progress in the diagnosis of skin lesions. However, AI is not going to replace medical experts any time soon. For one thing, a human is still needed to select the appropriate lesion for evaluation, often from among hundreds of unimportant ones.

Medical diagnosis relies on taking a careful medical history and perusal of the patient’s records. It takes into account the patient's ethnicity, skin, hair and eye colour, occupation, illness, medicines, existing sun damage, the number of melanocytic naevi, and lifestyle habits (such as sun exposure, smoking, and alcohol intake). The behaviour and previous treatment of the lesion are also clues to the diagnosis.

AI can offer a second opinion and can be used to screen out an entirely benign lesion, such as a melanocytic naevus that is symmetrical in colour and structure.

These algorithms will inevitably evolve with improved accuracy in the detection of potentially malignant skin lesions, as databases expand to include more images and more patient and lesion-specific labels.

See smartphone apps to check your skin.

References

  1. Chollet F. Building powerful image classification models using very little data. The Keras Blog; 2016. Available from: blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html [accessed 22 July, 2019]
  2. Guizzo E. How Google’s self-driving car works. IEEE Spectrum Online 2011; 18: 1132–41. Available at: spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works [accessed 22 July, 2019]
  3. Hinton G, Deng L, Yu D, Dahl GE, Mohamed AR, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN, Kingsbury B. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine 2012; 29: 82–97. Available at: ieeexplore.ieee.org/document/6296526 [accessed 23 July, 2019]
  4. Ahmad E, Goyal M, McPhee JS, Degens H, Yap MH. Semantic Segmentation of Human Thigh Quadriceps Muscle in Magnetic Resonance Images. arXiv preprint arXiv: 1801.00415. 2018. Journal
  5. Goyal M, Reeves ND, Davison AK, Rajbhandari S, Spragg J, Yap MH. Dfunet: Convolutional neural networks for diabetic foot ulcer classification. IEEE Transactions on Emerging Topics in Computational Intelligence 2018. Journal
  6. Alarifi JS, Goyal M, Davison AK, Dancey D, Khan R, Yap MH. Facial skin classification using convolutional neural networks. International Conference Image Analysis and Recognition 2017: 479–85. Springer, Cham. Available at: https://link.springer.com/chapter/10.1007/978-3-319-59876-5_53 [accessed 23 July, 2019]
  7. Goyal M, Yap MH, Reeves ND, Rajbhandari S, Spragg J. Fully convolutional networks for diabetic foot ulcer segmentation. In: 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2017: 618–23. IEEE. Available at: https://ieeexplore.ieee.org/document/8122675 [accessed 23 July, 2019]
  8. Goyal M, Reeves N, Rajbhandari S, Yap MH. Robust Methods for Real-time Diabetic Foot Ulcer Detection and Localization on Mobile Devices. IEEE J Biomed Health Inform 2019; 23: 1730–41. PubMed
  9. Goyal M, Yap MH. Multi-class semantic segmentation of skin lesions via fully convolutional networks. arXiv preprint arXiv:1711.10449. 2017. Journal
  10. Yap MH, Pons G, Martí J, Ganau S, Sentís M, Zwiggelaar R, Davison AK, Martí R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J Biomed Health Inform 2018; 22: 1218–26. PubMed
  11. Evans GW. Artificial Intelligence: Where We Came From, Where We Are Now, and Where We Are Going. Available from: https://dspace.library.uvic.ca/handle/1828/8314 [accessed July 23, 2019]
  12. Weidlich V, Weidlich GA. Artificial Intelligence in Medicine and Radiation Oncology. Cureus 2018; 10: e2475. PubMed Central
  13. Cancer Facts and Figures 2018. American Cancer Society. Available at: www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2018/cancer-facts-and-figures-2018.pdf [accessed May 3, 2018].
  14. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542: 115–8. Journal
  15. Codella NC, Gutman D, Celebi ME, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 168–72. Journal
  16. Yang J, Liang J, Sun X, Rosin PL, et al. Clinical skin lesion diagnosis using representations inspired by dermatologist criteria. IEEE Conference on Computer Vision and Pattern Recognition 2018; 11. Journal
  17. How Watson is learning to identify melanoma. IBM. Available at: https://www.ibm.com/cognitive/au-en/melanoma/ (accessed 7 July 2019)
  18. What visual search can do for you. MetaOptima Technology Inc. Available at: https://www.dermengine.com/en-ca/visual-search (accessed 7 July 2019).
