
Ciencia e Ingeniería Neogranadina

Print version ISSN 0124-8170On-line version ISSN 1909-7735

Cienc. Ing. Neogranad. vol.30 no.1 Bogotá Jan./June 2020  Epub Aug 16, 2020

https://doi.org/10.18359/rcin.4242 

Articles

A Systematic Review of Deep Learning Methods Applied to Ocular Images

Una revisión sistemática de métodos de aprendizaje profundo aplicados a imágenes oculares

Oscar Julián Perdomo Charry a

Fabio Augusto González b

a Universidad del Rosario. E-mail: oscarj.perdomo@urosario.edu.co ORCID: https://orcid.org/0000-0001-9493-2324

b Universidad Nacional de Colombia. E-mail: fagonzalezo@unal.edu.co ORCID: https://orcid.org/0000-0001-9009-7288


Abstract:

Artificial intelligence is having an important effect on different areas of medicine, and ophthalmology is no exception. In particular, deep learning methods have been applied successfully to the detection of clinical signs and the classification of ocular diseases, which represents great potential to increase the number of people correctly diagnosed. In ophthalmology, deep learning methods have primarily been applied to eye fundus images and optical coherence tomography. On the one hand, these methods have achieved outstanding performance in the detection of ocular diseases such as diabetic retinopathy, glaucoma, diabetic macular edema, and age-related macular degeneration. On the other hand, several worldwide challenges have shared big eye-imaging datasets with segmentations of parts of the eye, clinical signs, and ocular diagnoses performed by experts. In addition, these methods are breaking the stigma of black-box models by delivering interpretable clinical information. This review provides an overview of the state-of-the-art deep learning methods used on ophthalmic images, databases, and potential challenges for ocular diagnosis.

Keywords: Clinical signs; ocular diseases; ocular dataset; deep learning; clinical diagnosis

Resumen:

La inteligencia artificial tiene un importante impacto en diversas áreas de la medicina, y la oftalmología no ha sido la excepción. En particular, los métodos de aprendizaje profundo han sido aplicados con éxito en la detección de signos clínicos y la clasificación de enfermedades oculares. Esto representa un impacto potencial en el incremento de pacientes correcta y oportunamente diagnosticados. En oftalmología, los métodos de aprendizaje profundo se han aplicado principalmente en imágenes de fondo de ojo y tomografia de coherencia óptica. Por un lado, estos métodos han logrado un rendimiento sobresaliente en la detección de enfermedades oculares tales como la retinopatía diabética, el glaucoma, la degeneración macular diabética y la degeneración macular relacionada con la edad. Por otro lado, varios desafíos mundiales han compartido grandes conjuntos de datos con segmentación de parte de los ojos, signos clínicos y el diagnóstico ocular realizado por expertos. Adicionalmente, estos métodos han venido rompiendo el estigma de los modelos de caja negra, proveyendo información clínica interpretable. Esta revisión proporciona una visión general de los métodos de aprendizaje profundo de última generación utilizados en imágenes oftálmicas, bases de datos y posibles desafíos para los diagnósticos oculares.

Palabras clave: hallazgos clínicos; enfermedades oculares; bases de datos oculares, aprendizaje profundo; diagnóstico clínico

1. Introduction

The diagnosis of an ophthalmologic disease is made through different kinds of clinical exams. Exams may be non-invasive, such as the slit-lamp exam, visual acuity testing, eye fundus imaging (EFI), ultrasound, and optical coherence tomography (OCT), or invasive, such as fluorescein angiography [1]. Non-invasive clinical exams are easier to perform, have no contraindications, and do not affect the eye's natural response to external factors, in comparison to invasive exams. EFI and OCT exams are therefore quick, simple, and highly patient-compliant techniques, with the main advantages that images can easily be saved to be analyzed at a later time, and that the prognosis, diagnosis, and follow-up of diseases can be monitored over time.

Automatic analysis of EFIs and OCTs as a tool to support medical diagnosis has become an engineering challenge in terms of achieving the best performance with the lowest computational cost and runtime among the different algorithms [2-6]. Thus, choosing the best method to represent, analyze, and diagnose from ocular images is a complex computational problem [7-11]. On the other hand, deep learning techniques have been applied with some success to several eye conditions, using individual sources of information as evidence [12-14].

Some researchers have studied how to support the diagnosis with different methodologies. Vandarkuzhali and Ravichandran [12] detected retinal blood vessels with an extreme learning machine approach and probabilistic neural networks; Gurudath et al. [2] identified diabetic retinopathy from fundus images with a three-layered artificial neural network and a support vector machine to classify retinal images; and Priyadarshini et al. [3] studied clustering and classification with data mining to provide useful predictions for diabetic retinopathy diagnosis. Despite good results, the main problem with these works is that the datasets are small, and obtaining labels is expensive and cumbersome.

Deep learning (DL) offers advantages such as the ability to process large numbers of images using graphics processing units (GPUs) and tensor processing units (TPUs), and the ability to automatically learn data representations from raw data. Thanks to these features, DL has been able to outperform traditional methods in several computer vision and image analysis tasks. This success has motivated its application to medical image analysis, including, of course, ophthalmology images.

This article focuses on the review and analysis of deep learning methods applied to ocular images for the diagnosis of diabetic retinopathy (DR), glaucoma, diabetic macular edema (DME), and age-related macular degeneration (AMD). These diseases are related to diabetes, one of the four major types of chronic noncommunicable disease, and they are the leading cause of blindness worldwide among people of working age (20-69 years). The main problem is that 25 % of diabetics worldwide will develop visual problems during the course of the disease, and without timely preventive diagnosis and treatment, these subjects will suffer irreversible blindness [15-20].

The paper is organized as follows: Section 2 contains an overview of the medical background of ocular diseases with their corresponding information sources. Section 3 summarizes freely available public ocular datasets. Section 4 summarizes the most common performance metrics used by deep learning methods. Section 5 reports on the main deep learning methods for each source of medical information. Finally, Section 6 discusses the main results, limitations, and future work.

2. Medical background

2.1 Ocular diseases

2.1.1 Diabetic retinopathy

Diabetic retinopathy is caused by a side effect of diabetes that reduces the blood supply to the retina and produces lesions on the retinal surface [21]. DR-related lesions can be categorized into i) red lesions, such as microaneurysms and hemorrhages, and ii) bright lesions, such as exudates and cotton-wool spots [22], as shown in Figure 1.

Source: Taken from [24].

Fig. 1 [Left] A color eye fundus image showing multiple microaneurysms, intraretinal hemorrhages, and exudation affecting the fovea in a patient with severe non-proliferative diabetic retinopathy with severe diabetic macular edema, and [Right] A b-scan OCT showing vitreomacular traction affecting the foveal depression. 

2.1.2 Diabetic macular edema

Diabetic macular edema is a complication of DR that occurs when the vessels of the central part of the retina (macula) are affected by the accumulation of fluid and exudate formation in different parts of the eye [25], as depicted in Figure 2.

Source: Taken from [26] and [27].

Fig. 2 [Left] A color eye fundus image showing multiple dot and flame hemorrhages, cotton wool spots and macular exudation in a patient with severe non-proliferative diabetic retinopathy with diabetic macular edema, and [Right] A b-scan OCT showing multiple intraretinal hyperreflective dots and pseudo-cystic spaces in the middle retinal layers in a patient with diabetic macular edema. 

2.1.3 Glaucoma

Glaucoma is related to the progressive degeneration of optic nerve fibers and structural changes of the optic nerve head [21]. Although glaucoma cannot be cured, its progression can be slowed down through treatment; therefore, the timely diagnosis of this disease is vital to avoid blindness [28-29]. Glaucoma diagnosis is based on manual assessment of the optic disc (OD) through ophthalmoscopy, looking for morphological parameters of the central bright zone, called the optic cup, and of a peripheral region, called the neuro-retinal rim [30], as reported in Figure 3.

Source: Taken from [31] and [32].

Fig. 3 [Left] An optic disc color image showing an absence of the neural ring with a total excavation in a patient with advanced glaucoma, and [Right] A b-scan OCT showing a thinning in the nerve fiber layer in a patient with Glaucoma.  

2.1.4 Age-related macular degeneration

Age-related macular degeneration (AMD) causes vision loss in the central region and distortion in the peripheral region [21]. The main symptom and clinical indicator of dry AMD is drusen, while the major symptom of wet AMD is the presence of exudates [33], as presented in Figure 4.

Source: Taken from [34] and [35].

Fig. 4 [Left] A color eye fundus image showing multiple flame hemorrhages, cotton wool spots and macular exudation, and [Right] A b-scan OCT showing the presence of soft drusen in the EPR-choriocapillaris complex in a patient with Age-related Macular Degeneration. 

2.2 Medical information sources

There are different types of clinical exams for the diagnosis of ocular disease. Some researchers have documented eye digital signal and image processing techniques such as the electrooculogram (EOG) [36], the electroretinogram (ERG) [37-38], visual evoked potentials [39-42], and dynamic pupillometry [43-44], among other methods [45].

The two non-invasive techniques widely used by ophthalmologists to diagnose ocular conditions are EFI and OCT. On the one hand, the eye fundus is represented as a 2D image of the eye that allows a fast and easy inspection of eye structures (e.g., the optic disc and blood vessels) as well as of some retinal abnormalities (e.g., microaneurysms and exudates). On the other hand, OCT uses near-infrared light, based on low-coherence interferometry principles, to record the set of retinal layers. The OCT depicts the information as a 3D volume with the resolution of a cross-sectional area and a defined number of scans, as shown in Figure 5. In both cases, the diagnosis performed by experts depends crucially on the clinical findings identified during the exam.
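The difference between the two representations can be made concrete with a short sketch. This is a toy illustration, not tied to any specific device; the dimensions are taken from typical dataset descriptions, not from a particular source.

```python
import numpy as np

# Toy illustration of the two image representations described above.
# An EFI is a single 2D colour image; an OCT exam is a 3D volume, i.e.
# a stack of cross-sectional b-scans. The dimensions are illustrative
# (100 b-scans of 512x1000 pixels, a common size in public datasets).
eye_fundus = np.zeros((768, 584, 3), dtype=np.uint8)     # (height, width, RGB)
oct_volume = np.zeros((100, 512, 1000), dtype=np.uint8)  # (scans, height, width)

# Individual b-scans are 2D slices of the volume, e.g. the central one:
central_bscan = oct_volume[oct_volume.shape[0] // 2]
```

Models that work at the scan level consume 2D slices like `central_bscan`, while volume-level models consume the whole 3D array.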

Source: Taken from [46].

Fig. 5 EFI and OCT volume containing cross-sectional b-scans from a healthy subject. 

3. Ocular image datasets

In recent years, the detection of clinical signs and the grading of ocular diseases have been considered challenging engineering tasks. In addition, researchers worldwide have published their methods along with a set of EFI and OCT databases covering different ocular conditions, populations, acquisition devices, and image resolutions. The available ocular datasets for each ocular disease, the type of ocular image, and the study population are presented in Table 1.

Table 1 A summary of free public ocular datasets with ocular diseases graded by experts, dataset names and dataset descriptions. 

Ocular disease Dataset Dataset description
DR [47] 40 eye fundus images with a resolution of 768 x 584 pixels. The dataset contains 7 images graded by experts as mild DR and 33 images as normal.
[48] 130 eye fundus images with 110 DR and 20 normal images. The images labeled as DR contain the segmentation of clinical signs: hard exudates, soft exudates, microaneurysms, hemorrhages, and neovascularization.
[49] 89 eye fundus images, where 84 images have mild DR and 5 images are labeled as normal.
[50] 100 digital color fundus images with microaneurysms in all the images. This dataset was randomly split into training and test sets of 50 images each.
[51] 28 eye fundus images with two blood vessel segmentations performed by experts.
[52] Two subsets: the first has 47 eye fundus images with the segmentation of exudates and 35 images without lesions labeled as normal; the second has 148 images with microaneurysms and 233 images labeled as normal.
[53] Two subsets: the training set has 35126 images and the test set has 53576. The images were labeled as normal, mild, moderate, severe and proliferative DR.
[54] 13000 images with normal, mild, moderate, severe and proliferative DR.
DR, Glaucoma [55] 49 eye fundus images with the optic head segmentation and the grading of DR and glaucoma.
[56] 45 eye fundus images with 15 healthy, 15 DR and 15 glaucomatous subjects. The images have the detection and segmentation of clinical signs provided by experts.
DR, DME [26] 1200 eye fundus images with DR and DME labels performed by an expert.
[23] 516 images with a resolution of 4288x2848 pixels with the grading of DME and DR performed by experts.
DR, AMD [57-58] 400 eye fundus images and 400 black and white masks with blood vessel annotations.
[59-60] 143 color fundus images with a resolution of 768x576 pixels. The images were graded as 23 AMD, 59 DR, and 61 normal images.
[61] 500 OCTs with normal, macula hole, AMD, central serous retinopathy and DR.
Glaucoma [62] 110 color fundus images with optic nerve head segmentation. The images were labeled as 26 glaucomatous and 84 with eye hypertension.
[63] 650 eye fundus images with the classification of glaucoma condition.
[64] 40 color images with the blood vessels, optic disc, and arterio-venous reference.
[31] 783 images with glaucomatous, suspicious of glaucoma and normal conditions.
[65] 258 eye fundus images with 144 normal and 114 glaucomatous subjects.
[66-67] 101 images with optic disc and optic cup segmentation and glaucoma condition.
[68] 760 retinal fundus images with glaucoma labels.
[69] 1200 eye fundus images with optic disc and cup segmentation with normal and glaucoma conditions.
[32] 1110 scans where 263 were diagnosed as healthy and 847 with primary open-angle glaucoma (POAG).
AMD [70] 206500 eye fundus images with AMD and non-AMD conditions.
[34] 1200 eye fundus images with early AMD and non-AMD conditions.
[35] 385 OCTs with 269 AMD and 115 normal subjects. Each OCT volume has 100 B-scan with a resolution of 512x1000 pixels.
[71] 15 OCT volumes with the retinal layer segmentation performed by an expert. The database was labeled with AMD condition.
DME [72] 169 eye fundus images with mild, moderate and severe DME.
DME, AMD [27] 45 OCTs with 15 AMD, 15 DME, and 15 normal subjects. Each OCT volume has 100 B-scan with a resolution of 512x1000 pixels.
[73] 148 OCTs as follows: 50 DME, 50 normal and 48 AMD subjects.
[74] 109309 scans of subjects with DME, drusen, choroidal neovascularization and normal conditions.
DME, AMD, DR [24] 75 OCTs labeled as 16 normal, 20 DME and 39 DR-DME. The OCT volume contains 128 B-scans with a resolution of 512x1024 pixels.
DR, AMD, Glaucoma [46] 231806 OCTs and eye fundus images with the labels of glaucoma, DR and AMD.

Source: Compiled by the authors.

4. Performance metrics

Deep learning approaches have shown astonishing results in problem domains such as recognition systems, natural language processing, medical sciences, and many other fields. Google, Facebook, Twitter, Instagram, and other big companies use deep learning to provide better applications and services to their customers [75]. Deep learning approaches have active applications using deep convolutional neural networks (DCNNs) in object recognition [76-79], speech recognition [80-82], natural language processing [83], theoretical science [84], medical science [85-86], etc. In the medical field, some researchers apply deep learning to solve different medical problems such as diabetic retinopathy [86], the detection of cancer cells in the human body [87], spine imaging [88], and many others [89-90]. Unsupervised learning is applicable in areas of medical science where sufficient labeled datasets for a particular type of disease are not available; however, the state-of-the-art methods for ocular images are based on supervised learning techniques.

4.1 Performance metrics in deep learning models

The performance comparison of deep learning methods in classification tasks is performed by the calculation of statistical metrics. These metrics assess the agreement and disagreement between the expert and the proposed method to grade an ocular disease [35,62,74]. The performance metrics used in state-of-the-art works are presented in Equations (1-7) as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

Sensitivity (Recall) = TP / (TP + FN)    (2)

Specificity = TN / (TN + FP)    (3)

Precision = TP / (TP + FP)    (4)

F1-score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity)    (5)

AUC = area under the ROC curve of sensitivity vs. (1 − specificity)    (6)

Kappa = (po − pe) / (1 − pe)    (7)

where,

TP = True Positive (the ground truth and the prediction are the non-control class)

TN = True Negative (the ground truth and the prediction are the control class)

FP = False Positive (predicted as the non-control class, but the ground truth is the control class)

FN = False Negative (predicted as the control class, but the ground truth is the non-control class)

po = Probability of agreement, or correct classification, among raters.

pe = Probability of chance agreement among raters.
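As an illustration, these metrics can be computed directly from the four confusion-matrix counts. The following is a minimal Python sketch (the example counts at the end are invented):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the standard classification metrics from confusion counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: p_o is the observed agreement, p_e the agreement
    # expected by chance from the marginal class frequencies.
    p_o = accuracy
    p_e = (((tp + fp) / total) * ((tp + fn) / total)
           + ((fn + tn) / total) * ((fp + tn) / total))
    kappa = (p_o - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "kappa": kappa}

# Invented example: 80 true positives, 90 true negatives,
# 10 false positives, 20 false negatives.
m = classification_metrics(tp=80, tn=90, fp=10, fn=20)
```

The AUC is the exception: it is not a function of a single confusion matrix but integrates sensitivity against (1 − specificity) over all decision thresholds.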

5. Deep learning methods for diagnosis support

5.1 DL methods using eye fundus images

The state-of-the-art DL methods to classify ocular diseases using EFIs are focused on conventional ("vanilla") CNNs and multi-stage CNNs. The most common vanilla CNNs used with EFIs are the inception-V1 and inception-V3 models pre-trained on the ImageNet database (http://www.image-net.org/). Inception-V1 is a CNN that applies convolutions of different sizes to the same input and stacks them into a single output. It also differs from a standard CNN in its inclusion of convolutional layers with a 1x1 kernel size in the middle of the network and global average pooling at the end of the architecture [79]. Inception-V3, in turn, is an improved version that adds batch normalization and label smoothing strategies to prevent overfitting [91].
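Two of the architectural ingredients mentioned here can be sketched in a few lines of NumPy (a toy illustration, not the actual Inception implementation): a 1x1 convolution reduces to a per-pixel linear map over channels, and global average pooling collapses each feature map to a single value.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: x is an (H, W, C_in) feature map, w is a
    (C_in, C_out) weight matrix. Each spatial position is mixed across
    channels independently, so it is just a matrix product."""
    return x @ w

def global_average_pooling(x):
    """Collapse an (H, W, C) feature map to a (C,) vector, one value
    per feature map, as used at the end of the Inception architecture."""
    return x.mean(axis=(0, 1))

# Toy 7x7 feature map with 64 channels, reduced to 16 channels.
x = np.random.rand(7, 7, 64)
w = np.random.rand(64, 16)
y = conv1x1(x, w)               # shape (7, 7, 16)
v = global_average_pooling(y)   # shape (16,)
```

The 1x1 convolution is what makes stacking differently sized convolutions affordable: it reduces the channel count before the expensive larger kernels are applied.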

[94] used the U-Net model proposed by [92] to segment the retinal vessels from EFIs. Two new datasets, with and without the vessels, were then created and used as inputs to inception-V1. This method obtained an AUC of 0.9772 in the detection of DR on the DRIVE dataset. [96] and [98] proposed patch-based models composed of pre-trained inception-V3 networks to detect DR in the EyePACS dataset. [98] used a private dataset with segmentations of clinical signs to classify an EFI as normal or referable DR, with a sensitivity of 93.4 % and a specificity of 93.9 %. The ensemble of four inception-V3 CNNs by [96] reached an accuracy of 88.72 %, a precision of 95.77 %, and a recall of 94.84 %.

The multi-stage CNN first detects clinical signs in order to sequentially grade the ocular disease. [95] located different types of lesions and integrated them into an imbalanced weighting map that focuses the model's attention on local signs to classify DR, obtaining an AUC of 0.9590. [97] used a similar approach, generating heat maps of the detected lesions as an attention model to grade DR at the image level, with an AUC of 0.954. [99] used a four-layer CNN as a patch-based model to segment exudates, and the generated exudate mask was used to diagnose DME, reporting an accuracy of 82.5 % and a Kappa coefficient of 0.6. Then, [104] proposed a three-stage DL model (optic disc and cup segmentation, morphometric feature estimation, and glaucoma grading) with an accuracy of 89.4 %, a sensitivity of 89.5 %, and a specificity of 88.9 %. Finally, [101] proposed a model that segments the optic disc and cup and calculates a normalized cup-disc ratio to discriminate healthy from glaucomatous optic nerves in EFIs. Table 2 presents a brief summary of the DL methods for eye fundus images used to support an ocular diagnosis.
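The cup-to-disc ratio used by these glaucoma pipelines can be illustrated with a short sketch. This computes a vertical cup-to-disc ratio from two binary segmentation masks; the masks, shapes, and the use of vertical extent are invented for the example and do not reproduce any specific method above.

```python
import numpy as np

def vertical_cup_disc_ratio(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from two (H, W) binary masks: the
    ratio of the vertical extents of the optic cup and the optic disc,
    a common morphometric feature in glaucoma grading."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_height = disc_rows.max() - disc_rows.min() + 1
    cup_height = cup_rows.max() - cup_rows.min() + 1
    return cup_height / disc_height

# Toy masks: a 10-pixel-tall disc containing a 4-pixel-tall cup.
disc = np.zeros((20, 20), dtype=bool)
cup = np.zeros((20, 20), dtype=bool)
disc[5:15, 5:15] = True
cup[8:12, 8:12] = True
cdr = vertical_cup_disc_ratio(disc, cup)
```

In a multi-stage pipeline, the masks would come from the segmentation stage, and the resulting ratio would feed the grading stage as one of the morphometric features.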

Table 2 An overview of the main state-of-the-art DL methods for ocular diagnosis using EFIs. The dataset and method used in each study are included, along with the method's performance. 

Ocular disease Dataset used Authors Methods Performance
DR DRIVE [93] Gaussian Mixture Model with an ensemble classifier AUC: 0.94
  [94] Pre-trained Inception V1 AUC: 0.9772
EyePACS [95] DCNN with two stages AUC: 0.9590.
[81] [96] An ensemble of 4 pre-trained Inception V3 Acc.: 88.72 %; Precision: 95.77 %; Recall: 94.84 %
EyePACS & E-OPHTHA [97] Two linked DCNN AUC: 0.954 and AUC: 0.949 respectively.
DR EYEPACS & MESSIDOR & Private dataset [98] A pre-trained Inception V3 Sensitivity: 93.4 %; Specificity: 93.9 %.
DME MESSIDOR & E-OPHTHA [99-100] DCNN with two stages Acc.: 82.5 % Kappa: 0.6
Glaucoma DRISHTI-GS & REFUGE [101] DCNN with two stages AUC: 0.8583
DRISHTI-GS & RIM-ONE [102] Classical filters and an active disc formulation with a local energy function Acc.: 0.8380 and 0.8456.
  [102-103] DCNN with three stages Accuracy: 89.4 %; Sensitivity: 89.5 %; Specificity: 88.9 %; Kappa: 0.82
AMD AREDS [104] DCNN Acc.: 75.7 %

Source: Compiled by the authors.

5.2 DL methods using optical coherence tomography

The most representative DL methods for detecting abnormalities in OCT obtained outstanding performance using vanilla CNN models, as reported with ResNet [35,106], VGG-16 [111], and Inception-V3 [110]. The VGG-16 CNN contains five blocks of convolutional and max-pooling layers to perform feature extraction [78]; its final block is composed of three fully connected layers to discriminate among a number of classes. The ResNet model contains a chain of interlaced layers that adds the information from previous layers to later layers in order to learn residual errors [112].
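The residual idea can be sketched in a few lines. Here a toy fully connected block stands in for the convolutional layers of an actual ResNet block; only the skip connection itself is the point.

```python
import numpy as np

def residual_block(x, w1, w2):
    """A toy residual block: the layers learn a residual F(x) that is
    added back to the input, so the block outputs relu(F(x) + x)."""
    f = np.maximum(0, x @ w1)    # layer 1 + ReLU (residual branch)
    f = f @ w2                   # layer 2 (residual branch)
    return np.maximum(0, f + x)  # add the skip connection, then ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)

# With zero weights the residual branch vanishes and the block reduces
# to the identity (after ReLU), which is what makes very deep stacks
# of such blocks trainable.
identity = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```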

[106] used a pre-trained ResNet to differentiate healthy OCT volumes from DR, with an accuracy of 97.55 %, a precision of 94.49 %, and a recall of 94.33 %. [24] combined the Inception and ResNet models into a model termed inception-ResNet-V2, which was able to classify DME scans with an accuracy of 100 % on the SERI dataset.

On the other hand, the best DCNN models using OCT volumes as input are customized models with two or three stages. In particular, these DL models used two or more of the datasets reported in Table 1 to perform feature extraction of local signs, followed by a classification stage for grading the ocular diseases, as reported for OCTs in [105-107].

[110] defined a two-stage DL method to segment abnormalities from the OCT volume into a 3D representation. The generated segmentation was stacked with the 43 most representative cross-sectional scans of an OCT volume. This model obtained an AUC of 0.9921 in determining the grade of AMD on private datasets. Finally, [109] proposed a customized DL method called OctNet. This CNN is based on four blocks of convolutional and max-pooling layers, plus a final block with two dense layers and a dropout layer to avoid overfitting during training. In addition, the proposed model classifies at both the scan and volume levels, delivering highlighted images showing the areas most relevant to the model. The model was assessed for DR and DME detection with a precision of 93 %, an AUC of 0.86, and a Kappa agreement coefficient of 0.71, and it presented a sensitivity of 99 % and an AUC of 0.99 for the classification of OCT volumes as healthy or AMD. Table 3 reports an overview of the most prominent works used to support the diagnosis of ocular conditions using OCTs.
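The highlighted images produced by OctNet-style models rely on class activation maps (CAMs): the final feature maps are combined with the class weights of the dense layer, giving a spatial relevance map. The sketch below shows the basic computation; shapes and values are illustrative, not OctNet's actual configuration.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM for one class: feature_maps is (H, W, C), class_weights is
    (C,). The map is a weighted sum of the feature maps over channels,
    normalised to [0, 1] for visualisation."""
    cam = feature_maps @ class_weights   # weighted sum over channels -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

fmaps = np.random.rand(14, 14, 32)   # toy final-layer feature maps
w_cls = np.random.rand(32)           # toy dense-layer weights for one class
cam = class_activation_map(fmaps, w_cls)
```

Upsampled to the input resolution and overlaid on the b-scan, such a map highlights the retinal regions that contributed most to the predicted class, which is the kind of interpretable output these models deliver to clinicians.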

Table 3 An overview of the main state-of-the-art DL methods for ocular diagnosis using OCTs. The datasets and methods used in each study are included, along with the methods' performance. 

Ocular disease Dataset Authors Methods Performance
DR OCTID [105] Pre-trained ResNet model Accuracy: 97.55; Precision: 94.49; Recall: 94.33.
DME SERI [35] Pretrained Inception-ResNet-V2 Accuracy: 100 %
SERI+CUHK [106-107] OCTNET with 16 layers, class activation maps and medical feedback Precision: 93.0 %; Kappa: 0.71; AUC: 0.86
Glaucoma POAG [62] A 3D-DCNN with 6 layers AUC: 0.89
AMD A2A SD-OCT [108] HOG Feature Extraction and PCA, with SVM and Multi-Instance SVM classifiers Accuracy: 94.4 %, Sensitivity: 96.8 % Specificity: 92.1 %
  [108-109] OCTNET with 16 layers, class activation maps and medical feedback Sensitivity: 99 %; AUC: 0.99
Private dataset [110] DCNN with two stages by Google AUC: 0.9921
  [111] Pretrained VGG-16 model AUC: 0.9382
  [74] Pretrained Inception-V3 model AUC:0.9745; Accuracy: 93.45 %.

Source: Compiled by the authors.

6. Discussion

This review reports the state-of-the-art deep learning works applied to EFIs and OCTs for ocular diagnosis, as presented in Tables 2 and 3. The main DL methods for the detection of ocular diseases using EFIs are focused on the fine-tuning of pre-trained CNNs such as Inception V1 [94] and Inception V3 [96]. In addition, the pre-trained CNNs applied to OCT obtained outstanding performance, as reported with pre-trained ResNet [35,105], VGG-16 [111], and Inception V3 [74]. Thus, the feature extraction stage performed by CNNs trained on a non-medical dataset such as ImageNet is enough to discriminate healthy from unhealthy patterns in ocular images. On the other hand, the best CNN models using OCT volumes as input are customized models with two or three stages. In particular, these DL models used two or more of the ocular medical datasets reported in Table 1 to perform the feature extraction of local signs, followed by a classification stage for grading the ocular diseases, as reported for EFIs in [95,97,99-103] and for OCTs in [106-109].

The number of freely available public datasets contributes to the design of new DL methodologies to classify ocular conditions, as reported in Table 1. However, the use of private datasets limits the comparison of the performance metrics reached by DL methods [74,98,110-111]. The replication of the studies reported by [98] and [110] has been criticized for the lack of information about the description of the method and the hyperparameters used [113]. The use of public repositories such as GitHub (https://github.com/) to share datasets and code is still needed.

Nowadays, the growing interest of big technology companies and medical centers in creating open challenges has increased the number of ocular datasets, such as the DR detection challenge by Kaggle [53,84], the blindness detection challenge by the Asia Pacific Tele-Ophthalmology Society (APTOS) [54], and the iChallenge for AMD detection by Baidu [34]. These new datasets contain diverse information in terms of acquisition devices, image resolution, and worldwide populations. Moreover, DL techniques are leveraging the new data for the design of robust new approaches with outstanding performance, as reported in Tables 2 and 3.

The lack of validation of DCNN models on real-world scans or fundus images is still a problem. We found only a few methods validated with ocular images from medical centers [96, 108-111]. Moreover, the number of freely available real-world ocular image sets is limited to five [31,46,53-54,74]. The clinical acceptance of the proposed DCNN models depends critically on their validation on both clinical and nonclinical datasets.

Conclusions

Deep learning methods are novel techniques that detect and classify different abnormalities in eye images, with great potential to effectively support the diagnosis of ocular diseases. These methods take advantage of the large number of available datasets with annotations of clinical signs and ocular diseases to perform the automatic feature extraction that supports medical decision-making.

In the medical context, new devices such as optical coherence tomography angiography (OCTA) require new models to represent and extract features that support the prognosis, diagnosis, and follow-up of ocular diseases. Hence, the design of deep learning methods that use multi-modal information, such as clinical reports, physiological data, and other medical images, is still an important issue. The validation of DL methods in the clinical environment, with real-world datasets and images acquired using low-cost devices, could improve the social impact of the methods developed.

Despite the outstanding results, these methods face open challenges related to interpretability and to incorporating feedback from medical personnel into the models. In addition, the application of DL models in medical centers could potentially increase the number of subjects diagnosed, with a consequent improvement in the population's quality of life. Realizing the potential of these techniques requires a coordinated, interdisciplinary effort of engineers and ophthalmologists, focused on the patient, to optimize the time and costs of medical diagnosis.

References

[1] A. W. Stitt, N. Lois, R. J. Medina, P. Adamson, and T. M. Curtis, "Advances in Our Understanding of Diabetic Retinopathy," Clinical Science, vol. 125, no. 1, pp. 1-17, 2013. doi:10.1042/CS20120588

[2] N. Gurudath, M. Celenk, and H. B. Riley, "Machine Learning Identification of Diabetic Retinopathy from Fundus Images," in 2014 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2014, pp. 1-7. doi:10.1109/SPMB.2014.7002949

[3] R. Priyadarshini, N. Dash, and R. Mishra, "A Novel Approach to Predict Diabetes Mellitus Using Modified Extreme Learning Machine," in 2014 International Conference on Electronics and Communication Systems (ICECS), 2014, pp. 1-5. doi:10.1109/ECS.2014.6892740

[4] G. Quellec et al., "Automated Assessment of Diabetic Retinopathy Severity Using Content-Based Image Retrieval in Multimodal Fundus Photographs," Investigative Ophthalmology & Visual Science, vol. 52, no. 11, pp. 8342-8348, 2011. doi:10.1167/iovs.11-7418

[5] R. A. Welikala et al., "Automated Detection of Proliferative Diabetic Retinopathy Using a Modified Line Operator and Dual Classification," Computer Methods and Programs in Biomedicine, vol. 114, no. 3, pp. 247-261, 2014. doi:10.1016/j.cmpb.2014.02.010

[6] S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi, "DREAM: Diabetic Retinopathy Analysis Using Machine Learning," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 5, pp. 1717-1728, 2013. doi:10.1109/JBHI.2013.2294635

[7] D. Usher, M. Dumskyj, M. Himaga, T. H. Williamson, S. Nussey, and J. Boyce, "Automated Detection of Diabetic Retinopathy in Digital Retinal Images: A Tool for Diabetic Retinopathy Screening," Diabetic Medicine, vol. 21, no. 1, pp. 84-90, 2004. doi:10.1046/j.1464-5491.2003.01085.x

[8] S. Philip et al., "The Efficacy of Automated "Disease/No Disease" Grading for Diabetic Retinopathy in a Systematic Screening Programme," British Journal of Ophthalmology, vol. 91, no. 11, pp. 1512-1517, 2007. doi:10.1136/bjo.2007.119453

[9] S. C. Cheng and Y. M. Huang, "A Novel Approach to Diagnose Diabetes Based on the Fractal Characteristics of Retinal Images," IEEE Transactions on Information Technology in Biomedicine, vol. 7, no. 3, pp. 163-170, 2003. doi:10.1109/TITB.2003.813792

[10] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural Network Based Detection of Hard Exudates in Retinal Images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9-19, 2009. doi:10.1016/j.cmpb.2008.07.006

[11] W. Lu, Y. Tong, Y. Yu, Y. Xing, C. Chen, and Y. Shen, "Applications of Artificial Intelligence in Ophthalmology: General Overview," Journal of Ophthalmology, 2018. doi:10.1155/2018/5278196

[12] D. C. S. Vandarkuzhali and T. Ravichandran, "ELM Based Detection of Abnormality in Retinal Image of Eye Due to Diabetic Retinopathy," Journal of Theoretical and Applied Information Technology, vol. 6, pp. 423-428, 2005.

[13] B. Antal and A. Hajdu, "An Ensemble-Based System for Automatic Screening of Diabetic Retinopathy," Knowledge-Based Systems, vol. 60, pp. 20-27, 2014. doi:10.1016/j.knosys.2013.12.023

[14] T. K. Yoo and E. C. Park, "Diabetic Retinopathy Risk Prediction for Fundus Examination Using Sparse Learning: A Cross-Sectional Study," BMC Medical Informatics and Decision Making, vol. 13, no. 1, p. 106, 2013. doi:10.1186/1472-6947-13-106

[15] N. H. Cho et al., "IDF Diabetes Atlas: Global Estimates of Diabetes Prevalence for 2017 and Projections for 2045," Diabetes Research and Clinical Practice, vol. 138, pp. 271-281, 2018. doi:10.1016/j.diabres.2018.02.023

[16] International Diabetes Federation (IDF), IDF Diabetes Atlas, 8th ed., [online], 2017. Available: https://www.idf.org/e-library/epidemiology-research/diabetes-atlas.html [Accessed Jul. 30, 2019].

[17] American Diabetes Association, "2. Classification and Diagnosis of Diabetes: Standards of Medical Care in Diabetes-2019," Diabetes Care, vol. 42, suppl. 1, pp. S13-S28, 2019. doi:10.2337/dc19-S002

[18] C. W. Baker, Y. Jiang, and T. Stone, "Recent Advancements in Diabetic Retinopathy Treatment from the Diabetic Retinopathy Clinical Research Network," Current Opinion in Ophthalmology, vol. 27, no. 3, p. 210, 2016. doi:10.1097/ICU.0000000000000262

[19] J. W. Y. Yau et al., "Global Prevalence and Major Risk Factors of Diabetic Retinopathy," Diabetes Care, vol. 35, no. 3, pp. 556-564, 2012. doi:10.2337/dc11-1909

[20] L. Guariguata, C. Brown, N. Sobers, I. Hambleton, T.A. Samuels, and N. Unwin, "An Updated Systematic Review and Meta-Analysis on the Social Determinants of Diabetes and Related Risk Factors in the Caribbean," Revista Panamericana de Salud Pública, vol. 42, 2018. doi:10.26633/RPSP.2018.171 [ Links ]

[21] Z. Zhang et al ., "A Survey on Computer Aided Diagnosis for Ocular Diseases," BMC Medical Informatics and Decision Making, vol. 14, no. 1, pp. 80, 2014. doi:10.1186/1472-6947-14-80 [ Links ]

[22] A. D. Fleming, S. Philip, K. A. Goatman, J. A. Olson and P. F. Sharp, "Automated Microaneurysm Detection Using Local Contrast Normalization and Local Vessel Detection," in IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1223-1232, Sept. 2006. doi:10.1109/TMI.2006.879953 [ Links ]

[23] P. Porwal et al., "Indian Diabetic Retinopathy Image Dataset (IDRiD): A Database for Diabetic Retinopathy Screening Research," Data, vol. 3, no. 3, p. 25, 2018. doi:10.3390/data3030025

[24] R. M. Kamble et al., "Automated Diabetic Macular Edema (DME) Analysis Using Fine Tuning with Inception-Resnet-v2 on OCT Images," In 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Sarawak, Malaysia, 2018, pp. 442-446. doi:10.1109/IECBES.2018.8626616

[25] R. Bernardes and J. Cunha-Vaz, Eds., Optical Coherence Tomography: A Clinical and Technical Update. Springer Science & Business Media, 2012. doi:10.1007/978-3-642-27410-7

[26] M. Niemeijer et al., "Retinopathy Online Challenge: Automatic Detection of Microaneurysms in Digital Color Fundus Photographs," IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 185-195, Jan. 2010. doi:10.1109/TMI.2009.2033909

[27] P. P. Srinivasan et al., "Fully Automated Detection of Diabetic Macular Edema and Dry Age-Related Macular Degeneration from Optical Coherence Tomography Images," Biomedical Optics Express, vol. 5, no. 10, pp. 3568-3577, 2014. doi:10.1364/BOE.5.003568

[28] D. Zhao et al., "Improving Follow-Up and Reducing Barriers for Eye Screenings in Communities: The Stop Glaucoma Study," American Journal of Ophthalmology, vol. 188, pp. 19-28, 2018. doi:10.1016/j.ajo.2018.01.008

[29] M. R. K. Mookiah, U. R. Acharya, C. M. Lim, A. Petznick, and J. S. Suri, "Data Mining Technique for Automated Diagnosis of Glaucoma Using Higher Order Spectra and Wavelet Energy Features," Knowledge-Based Systems, vol. 33, pp. 73-82, 2012. doi:10.1016/j.knosys.2012.02.010

[30] R. Bock, J. Meier, L. G. Nyul, J. Hornegger, and G. Michelson, "Glaucoma Risk Index: Automated Glaucoma Detection from Color Fundus Images," Medical Image Analysis, vol. 14, no. 3, pp. 471-481, 2010. doi:10.1016/j.media.2009.12.006

[31] F. Fumero, S. Alayon, J. L. Sanchez, J. Sigut, and M. Gonzalez-Hernandez, "RIM-ONE: An Open Retinal Image Database for Optic Nerve Evaluation," In 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Jun. 2011, pp. 1-6. doi:10.1109/CBMS.2011.5999143

[32] S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, and R. Garnavi, "A Feature Agnostic Approach for Glaucoma Detection in OCT Volumes," PLoS ONE, vol. 14, no. 7, p. e0219126, 2019. doi:10.1371/journal.pone.0219126

[33] P. T. De Jong, "Age-Related Macular Degeneration," New England Journal of Medicine, vol. 355, no. 14, pp. 1474-1485, 2006. doi:10.1056/NEJMra062326

[34] H. Fu et al., iChallenge-AMD, [Online], 2019. Available: http://ai.baidu.com

[35] S. Farsiu et al., "Quantitative Classification of Eyes with and without Intermediate Age-Related Macular Degeneration Using Optical Coherence Tomography," Ophthalmology, vol. 121, no. 1, pp. 162-172, 2014. doi:10.1016/j.ophtha.2013.07.013

[36] Y. F. Chen, I. J. Wang, C. C. Su, and M. S. Chen, "Macular Thickness and Aging in Retinitis Pigmentosa," Optometry and Vision Science, vol. 89, no. 4, pp. 471-482, 2012. doi:10.1097/OPX.0b013e31824c0b0b

[37] H. Mactier, M. S. Bradnam, and R. Hamilton, "Dark-Adapted Oscillatory Potentials in Preterm Infants with and without Retinopathy of Prematurity," Documenta Ophthalmologica, vol. 127, no. 1, pp. 33-40, 2013. doi:10.1007/s10633-013-9373-2

[38] K. P. Dhamdhere, M. A. Bearse, W. Harrison, S. Barez, M. E. Schneck, and A. J. Adams, "Associations between Local Retinal Thickness and Function in Early Diabetes," Investigative Ophthalmology & Visual Science, vol. 53, no. 10, pp. 6122-6128, 2012. doi:10.1167/iovs.12-10293

[39] D. Karlica, D. Galetovic, M. Ivanisevic, V. Skrabic, L. Znaor, and D. Jurisic, "Visual Evoked Potential Can Be Used to Detect a Prediabetic Form of Diabetic Retinopathy in Patients with Diabetes Mellitus Type I," Collegium Antropologicum, vol. 34, no. 2, pp. 525-529, 2010.

[40] M. Lovestam-Adrian, L. Granse, G. Andersson, and S. Andreasson, "Multifocal Visual Evoked Potentials (MFVEP) in Diabetic Patients with and without Polyneuropathy," The Open Ophthalmology Journal, vol. 6, p. 98, 2012. doi:10.2174/1874364101206010098

[41] S. Gupta, T. Khan, G. Gupta, B. K. Agrawal, and Z. Khan, "Electrophysiological Evaluation in Patients with Type 2 Diabetes Mellitus by Pattern Reversal Visual Evoked Potentials," National Journal of Physiology, Pharmacy and Pharmacology, vol. 7, no. 5, p. 527, 2017. doi:10.5455/njppp.2017.7.1235824012017

[42] J. Heravian et al., "Pattern Visual Evoked Potentials in Patients with Type II Diabetes Mellitus," Journal of Ophthalmic & Vision Research, vol. 7, no. 3, p. 225, 2012.

[43] R. Kardon, S. C. Anderson, T. G. Damarjian, E. M. Grace, E. Stone, and A. Kawasaki, "Chromatic Pupillometry in Patients with Retinitis Pigmentosa," Ophthalmology, vol. 118, no. 2, pp. 376-381, 2011. doi:10.1016/j.ophtha.2010.06.033

[44] M. C. Ortube et al., "Comparative Regional Pupillography as a Noninvasive Biosensor Screening Method for Diabetic Retinopathy," Investigative Ophthalmology & Visual Science, vol. 54, no. 1, pp. 9-18, 2013. doi:10.1167/iovs.12-10241

[45] J. Threatt, J. F. Williamson, K. Huynh, R. M. Davis, and K. Hermayer, "Ocular Disease, Knowledge and Technology Applications in Patients with Diabetes," The American Journal of the Medical Sciences, vol. 345, no. 4, pp. 266-270, 2013. doi:10.1097/MAJ.0b013e31828aa6fb

[46] D. Mitry, T. Peto, S. Hayat, J. E. Morgan, K. T. Khaw, and P. J. Foster, "Crowdsourcing as a Novel Technique for Retinal Fundus Photography Classification: Analysis of Images in the EPIC Norfolk Cohort on Behalf of the UK Biobank Eye and Vision Consortium," PLoS ONE, vol. 8, no. 8, p. e71154, 2013. doi:10.1371/journal.pone.0071154

[47] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, "Ridge-Based Vessel Segmentation in Color Images of the Retina," IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501-509, 2004. doi:10.1109/TMI.2004.825627

[48] T. Kauppi et al., "DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms," Machine Vision and Pattern Recognition Research Group, Lappeenranta University of Technology, Finland, vol. 73, pp. 1-17, 2006.

[49] T. Kauppi et al., "The DIARETDB1 Diabetic Retinopathy Database and Evaluation Protocol," In BMVC, 2007, vol. 1, pp. 1-10. doi:10.5244/C.21.15

[50] L. Giancardo et al., "Microaneurysm Detection with Radon Transform-Based Classification on Retina Images," In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2011, pp. 5939-5942. doi:10.1109/IEMBS.2011.6091562

[51] M. M. Fraz et al., "An Ensemble Classification-Based Approach Applied to Retinal Blood Vessel Segmentation," IEEE Transactions on Biomedical Engineering, vol. 59, no. 9, pp. 2538-2548, 2012. doi:10.1109/TBME.2012.2205687

[52] E. Decencière et al., "TeleOphta: Machine Learning and Image Processing Methods for Teleophthalmology," IRBM, vol. 34, no. 2, pp. 196-203, 2013. doi:10.1016/j.irbm.2013.01.010

[53] Kaggle, "Diabetic Retinopathy Detection (EyePACS Challenge)," kaggle.com. [Online]. Available: https://www.kaggle.com/c/diabetic-retinopathy-detection/data

[54] Kaggle, "APTOS 2019 Blindness Detection," kaggle.com, 2019. [Online]. Available: https://www.kaggle.com/c/aptos2019-blindness-detection/data

[55] J. Lowell et al., "Optic Nerve Head Segmentation," IEEE Transactions on Medical Imaging, vol. 23, no. 2, pp. 256-264, 2004. doi:10.1109/TMI.2003.823261

[56] A. Budai, R. Bock, A. Maier, J. Hornegger, and G. Michelson, "Robust Vessel Segmentation in Fundus Images," International Journal of Biomedical Imaging, 2013. doi:10.1155/2013/154860

[57] A. Hoover, V. Kouznetsova, and M. Goldbaum, "Locating Blood Vessels in Retinal Images by Piecewise Threshold Probing of a Matched Filter Response," In Proceedings of the AMIA Symposium, American Medical Informatics Association, 1998, p. 931. doi:10.1109/42.845178

[58] A. Hoover and M. Goldbaum, "Locating the Optic Nerve in a Retinal Image Using the Fuzzy Convergence of the Blood Vessels," IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 951-958, 2003. doi:10.1109/TMI.2003.815900

[59] D. J. Farnell et al., "Enhancement of Blood Vessels in Digital Fundus Photographs Via the Application of Multiscale Line Operators," Journal of The Franklin Institute, vol. 345, no. 7, pp. 748-765, 2008. doi:10.1016/j.jfranklin.2008.04.009

[60] Y. Zheng, M. H. A. Hijazi, and F. Coenen, "Automated 'Disease/No Disease' Grading of Age-Related Macular Degeneration by an Image Mining Approach," Investigative Ophthalmology & Visual Science, vol. 53, no. 13, pp. 8310-8318, 2012. doi:10.1167/iovs.12-9576

[61] P. Gholami, P. Roy, M. K. Parthasarathy, and V. Lakshminarayanan, "OCTID: Optical Coherence Tomography Image Database," Computers & Electrical Engineering, vol. 81, p. 106532, 2020. doi:10.5683/SP2/W43PFI

[63] Z. Zhang et al., "ORIGA-light: An Online Retinal Fundus Image Database for Glaucoma Analysis and Research," In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, 2010, pp. 3065-3068. doi:10.1109/IEMBS.2010.5626137

[64] M. Niemeijer et al., "Automated Measurement of the Arteriolar-To-Venular Width Ratio in Digital Color Fundus Photographs," IEEE Transactions on Medical Imaging, vol. 30, no. 11, pp. 1941-1950, 2011. doi:10.1109/TMI.2011.2159619

[65] Z. Zhang, J. Liu, F. Yin, B. H. Lee, D. W. K. Wong, and K. R. Sung, "ACHIKO-K: Database of Fundus Images from Glaucoma Patients," In 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), 2013, pp. 228-231. doi:10.1109/ICIEA.2013.6566371

[66] J. Sivaswamy, S. Krishnadas, A. Chakravarty, G. Joshi, and A. S. Tabish, "A Comprehensive Retinal Image Dataset for the Assessment of Glaucoma from the Optic Nerve Head Analysis," JSM Biomedical Imaging Data Papers, vol. 2, no. 1, p. 1004.

[67] J. Sivaswamy, S. R. Krishnadas, G. D. Joshi, M. Jain, and A. U. S. Tabish, "Drishti-GS: Retinal Image Dataset for Optic Nerve Head (ONH) Segmentation," In 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), 2014, pp. 53-56. doi:10.1109/ISBI.2014.6867807

[68] A. Almazroa et al., "Retinal Fundus Images for Glaucoma Analysis: The RIGA Dataset," In Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications, International Society for Optics and Photonics, 2018, vol. 10579, p. 105790B. doi:10.1117/12.2293584

[69] H. Fu et al., "REFUGE: Retinal Fundus Glaucoma Challenge," IEEE Dataport, 2019. [Online]. doi:10.21227/tz6e-r977

[70] T. E. Clemons, E. Y. Chew, S. B. Bressler, and W. McBee, "National Eye Institute Visual Function Questionnaire in the Age-Related Eye Disease Study (AREDS): AREDS Report No. 10," Archives of Ophthalmology, vol. 121, no. 2, pp. 211-217, 2003. doi:10.1001/archopht.121.2.211

[71] M. K. Jahromi et al., "An Automatic Algorithm for Segmentation of the Boundaries of Corneal Layers in Optical Coherence Tomography Images Using Gaussian Mixture Model," Journal of Medical Signals and Sensors, vol. 4, no. 3, p. 171, 2014. doi:10.4103/2228-7477.137763

[72] L. Giancardo et al., "Exudate-Based Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216-226, 2012. doi:10.1016/j.media.2011.07.004

[73] R. Rasti, H. Rabbani, A. Mehridehnavi, and F. Hajizadeh, "Macular OCT Classification Using a Multi-Scale Convolutional Neural Network Ensemble," IEEE Transactions on Medical Imaging, vol. 37, no. 4, pp. 1024-1034, April 2018. doi:10.1109/TMI.2017.2780115

[74] D. S. Kermany et al., "Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning," Cell, vol. 172, no. 5, pp. 1122-1131, 2018. doi:10.1016/j.cell.2018.02.010

[75] S. Paul and L. Singh, "A Review on Advances in Deep Learning," In 2015 IEEE Workshop on Computational Intelligence: Theories, Applications and Future Directions (WCI), 2015, pp. 1-6. doi:10.1109/WCI.2015.7495514

[76] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," In Advances in Neural Information Processing Systems, 2012, pp. 1097-1105. doi:10.1145/3065386

[77] M. D. Zeiler and R. Fergus, "Visualizing and Understanding Convolutional Networks," In European Conference on Computer Vision, 2014, pp. 818-833. doi:10.1007/978-3-319-10590-1_53

[78] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv preprint arXiv:1409.1556, 2014.

[79] C. Szegedy et al., "Going Deeper with Convolutions," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1-9. doi:10.1109/CVPR.2015.7298594

[80] G. Hinton et al., "Deep Neural Networks for Acoustic Modeling in Speech Recognition," IEEE Signal Processing Magazine, vol. 29, 2012. doi:10.1109/MSP.2012.2205597

[81] O. Abdel-Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, "Convolutional Neural Networks for Speech Recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1533-1545, Oct. 2014. doi:10.1109/TASLP.2014.2339736

[82] T. N. Sainath et al., "Deep Convolutional Neural Networks for Large-Scale Speech Tasks," Neural Networks, vol. 64, pp. 39-48, 2015. doi:10.1016/j.neunet.2014.08.005

[83] Kaggle, "Higgs Boson Machine Learning Challenge," kaggle.com, September 2014. [Online]. Available: http://www.kaggle.com/c/higgs-boson

[84] Kaggle, "1000 Fundus Images with 39 Categories," kaggle.com, July 2019. [Online]. Available: https://www.kaggle.com/linchundan/fundusimage1000

[85] A. de Brebisson and G. Montana, "Deep Neural Networks for Anatomical Brain Segmentation," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 20-28. doi:10.1109/CVPRW.2015.7301312

[86] H. C. Shin, M. R. Orton, D. J. Collins, S. J. Doran, and M. O. Leach, "Stacked Autoencoders for Unsupervised Feature Learning and Multiple Organ Detection in a Pilot Study Using 4D Patient Data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1930-1943, 2012. doi:10.1109/TPAMI.2012.277

[87] Cancer Imaging Archive, "The Cancer Genome Atlas," cancerimagingarchive.net, 2020. [Online]. Available: http://www.cancerimagingarchive.net/

[88] SpineWeb, "Collaborative Platform for Research on Spine Imaging and Image Analysis," spineweb.digitalimaginggroup.ca, 2016. [Online]. Available: http://spineweb.digitalimaginggroup.ca/

[89] O. J. Perdomo, H. A. Rios, F. J. Rodríguez, and F. A. González, "3D Deep Convolutional Neural Network for Predicting Neurosensory Retinal Thickness Map from Spectral Domain Optical Coherence Tomography Volumes," In 14th International Symposium on Medical Information Processing and Analysis, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 2018, vol. 10975, p. 1097501. doi:10.1117/12.2511597

[90] S. Otálora, O. Perdomo, F. González, and H. Müller, "Training Deep Convolutional Neural Networks with Active Learning for Exudate Classification in Eye Fundus Images," In Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, Springer, Cham, 2017, pp. 146-154.

[91] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818-2826. doi:10.1109/CVPR.2016.308

[92] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 234-241. doi:10.1007/978-3-319-24574-4_28

[93] M. U. Akram, S. Khalid, A. Tariq, S. A. Khan, and F. Azam, "Detection and Classification of Retinal Lesions for Grading of Diabetic Retinopathy," Computers in Biology and Medicine, vol. 45, pp. 161-171, 2014. doi:10.1016/j.compbiomed.2013.11.014

[94] A. B. Aujih, L. I. Izhar, F. Mériaudeau, and M. I. Shapiai, "Analysis of Retinal Vessel Segmentation with Deep Learning and its Effect on Diabetic Retinopathy Classification," In 2018 International Conference on Intelligent and Advanced System (ICIAS), 2018, pp. 1-6. doi:10.1109/ICIAS.2018.8540642

[95] Y. Yang, T. Li, W. Li, H. Wu, W. Fan, and W. Zhang, "Lesion Detection and Grading of Diabetic Retinopathy Via Two-Stages Deep Convolutional Neural Networks," In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2017, pp. 533-540. doi:10.1007/978-3-319-66179-7_61

[96] Z. Gao, J. Li, J. Guo, Y. Chen, Z. Yi, and J. Zhong, "Diagnosis of Diabetic Retinopathy Using Deep Neural Networks," IEEE Access, vol. 7, pp. 3360-3370, 2018. doi:10.1109/ACCESS.2018.2888639

[97] G. Quellec, K. Charrière, Y. Boudi, B. Cochener, and M. Lamard, "Deep Image Mining for Diabetic Retinopathy Screening," Medical Image Analysis, vol. 39, pp. 178-193, 2017. doi:10.1016/j.media.2017.04.012

[98] V. Gulshan et al., "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs," JAMA, vol. 316, no. 22, pp. 2402-2410, 2016. doi:10.1001/jama.2016.17216

[99] O. Perdomo, J. Arevalo, and F. A. González, "Convolutional Network to Detect Exudates in Eye Fundus Images of Diabetic Subjects," In 12th International Symposium on Medical Information Processing and Analysis, International Society for Optics and Photonics, 2017, vol. 10160, p. 101600T. doi:10.1117/12.2256939

[100] O. Perdomo, S. Otalora, F. Rodríguez, J. Arevalo, and F. A. González, "A Novel Machine Learning Model Based on Exudate Localization to Detect Diabetic Macular Edema," In Ophthalmic Medical Image Analysis Third International Workshop (OMIA), 2016, pp. 137-144. doi:10.17077/omia.1057

[101] S. Wang, L. Yu, X. Yang, C. Fu, and P. Heng, "Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation," IEEE Transactions on Medical Imaging, vol. 38, no. 11, pp. 2485-2495, Nov. 2019.

[102] J. H. Kumar, A. K. Pediredla, and C. S. Seelamantula, "Active Discs for Automated Optic Disc Segmentation," In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE, 2015, pp. 225-229.

[103] O. Perdomo, J. Arevalo, and F. A. González, "Combining Morphometric Features and Convolutional Networks Fusion for Glaucoma Diagnosis," In 13th International Conference on Medical Information Processing and Analysis, International Society for Optics and Photonics, 2017, vol. 10572, p. 105721G. doi:10.1117/12.2285964

[104] O. Perdomo, V. Andrearczyk, F. Meriaudeau, H. Müller, and F. A. González, "Glaucoma Diagnosis from Eye Fundus Images Based on Deep Morphometric Feature Estimation," In Computational Pathology and Ophthalmic Medical Image Analysis, 2018, pp. 319-327. doi:10.1007/978-3-030-00949-6_38

[105] P. M. Burlina, N. Joshi, K. D. Pacheco, D. E. Freund, J. Kong, and N. M. Bressler, "Use of Deep Learning for Detailed Severity Characterization and Estimation of 5-Year Risk among Patients with Age-Related Macular Degeneration," JAMA Ophthalmology, vol. 136, no. 12, pp. 1359-1366, 2018. doi:10.1001/jamaophthalmol.2018.4118

[106] P. Gholami, "Developing Algorithms for the Analysis of Retinal Optical Coherence Tomography Images," Master's thesis, University of Waterloo, 2018.

[107] O. Perdomo, S. Otálora, F. A. González, F. Meriaudeau, and H. Müller, "OCT-NET: A Convolutional Network for Automatic Classification of Normal and Diabetic Macular Edema Using SD-OCT Volumes," In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 2018, pp. 1423-1426. doi:10.1109/ISBI.2018.8363839

[108] W. Sun, X. Liu, and Z. Yang, "Automated Detection of Age-Related Macular Degeneration in OCT Images Using Multiple Instance Learning," In Ninth International Conference on Digital Image Processing (ICDIP 2017), International Society for Optics and Photonics, 2017, vol. 10420, p. 104203V. doi:10.1117/12.2282522

[109] O. Perdomo et al., "Classification of Diabetes-Related Retinal Diseases Using a Deep Learning Approach in Optical Coherence Tomography," Computer Methods and Programs in Biomedicine, vol. 178, pp. 181-189, 2019. doi:10.1016/j.cmpb.2019.06.016

[110] J. De Fauw et al., "Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease," Nature Medicine, vol. 24, no. 9, p. 1342, 2018. doi:10.1038/s41591-018-0107-6

[111] C. S. Lee, D. M. Baughman, and A. Y. Lee, "Deep Learning is Effective for Classifying Normal Versus Age-Related Macular Degeneration OCT Images," Ophthalmology Retina, vol. 1, no. 4, pp. 322-327, 2017. doi:10.1016/j.oret.2016.12.009

[112] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778. doi:10.1109/CVPR.2016.90

[113] M. Voets, K. Møllersen, and L. A. Bongo, "Replication Study: Development and Validation of Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs," arXiv preprint arXiv:1803.04337, 2018. doi:10.1371/journal.pone.0217541

How to cite: O. J. Perdomo Charry and F. A. González Osorio, "A Systematic Review of Deep Learning Methods Applied to Ocular Images", Cien.Ing.Neogranadina, vol. 30, no. 1, pp. 9-26, Nov. 2019.

Received: July 31, 2019; Accepted: November 08, 2019

This is an open-access article distributed under the terms of the Creative Commons Attribution License.