<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0120-6230</journal-id>
<journal-title><![CDATA[Revista Facultad de Ingeniería Universidad de Antioquia]]></journal-title>
<abbrev-journal-title><![CDATA[Rev.fac.ing.univ. Antioquia]]></abbrev-journal-title>
<issn>0120-6230</issn>
<publisher>
<publisher-name><![CDATA[Facultad de Ingeniería, Universidad de Antioquia]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0120-62302015000100007</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Detection and Classification of Non-Proliferative Diabetic Retinopathy using a Back-Propagation Neural Network]]></article-title>
<article-title xml:lang="es"><![CDATA[Detección y Clasificación de Retinopatía Diabética No Proliferativa usando una Red Neuronal de Retropropagación]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Velázquez-González]]></surname>
<given-names><![CDATA[Jesús Salvador]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Rosales-Silva]]></surname>
<given-names><![CDATA[Alberto Jorge]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
<xref ref-type="aff" rid="A03"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Gallegos-Funes]]></surname>
<given-names><![CDATA[Francisco Javier]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Guzmán-Bárcenas]]></surname>
<given-names><![CDATA[Guadalupe de Jesús]]></given-names>
</name>
<xref ref-type="aff" rid="A02"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Instituto Politécnico Nacional]]></institution>
<addr-line><![CDATA[México D.F. ]]></addr-line>
<country>México</country>
</aff>
<aff id="A02">
<institution><![CDATA[Instituto Politécnico Nacional]]></institution>
<addr-line><![CDATA[México D.F.]]></addr-line>
<country>México</country>
</aff>
<aff id="A03">
<institution><![CDATA[Instituto Politécnico Nacional]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>03</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>03</month>
<year>2015</year>
</pub-date>
<numero>74</numero>
<fpage>70</fpage>
<lpage>85</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_arttext&amp;pid=S0120-62302015000100007&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_abstract&amp;pid=S0120-62302015000100007&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_pdf&amp;pid=S0120-62302015000100007&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[One of the most serious complications of type 2 Diabetes Mellitus (DM) is Diabetic Retinopathy (DR). DR is a silent disease, recognized only when the changes on the retina have progressed to a level at which treatment becomes complicated, so early diagnosis and referral to an ophthalmologist or optometrist for the management of this disease can prevent 98% of severe visual loss. The aim of this work is to automatically identify Non-Diabetic Retinopathy (NDR) and Background Retinopathy using fundus images. Our results show a classification accuracy of 92%, with sensitivity and specificity of 95%.]]></p></abstract>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[Una de las complicaciones más graves de la Diabetes Mellitus tipo 2 es la Retinopatía Diabética (RD). La RD es una enfermedad silenciosa y solo es reconocida por el portador cuando los cambios en la retina han progresado a un nivel en el cual el tratamiento se complica, por lo que el diagnóstico oportuno y la remisión al oftalmólogo u optometrista para el manejo de esta enfermedad pueden prevenir el 98% de la pérdida visual grave. El objetivo de este trabajo es identificar de manera automática la No Retinopatía Diabética (NRD) y la Retinopatía de Fondo, utilizando imágenes del fondo de ojo. Nuestros resultados muestran una efectividad del 92%, con una sensibilidad y especificidad del 95%.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Diabetic Retinopathy]]></kwd>
<kwd lng="en"><![CDATA[early diagnosis]]></kwd>
<kwd lng="en"><![CDATA[automatic identification]]></kwd>
<kwd lng="en"><![CDATA[fundus images]]></kwd>
<kwd lng="es"><![CDATA[Retinopatía Diabética]]></kwd>
<kwd lng="es"><![CDATA[diagnóstico temprano]]></kwd>
<kwd lng="es"><![CDATA[identificación automática]]></kwd>
<kwd lng="es"><![CDATA[imágenes de fondo]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[  <font face="Verdana" size="2">     <p align="right"><b>ART&Iacute;CULO ORIGINAL</b></p>     <p align="right">&nbsp;</p>     <p align="center"><b><font size="4">Detection and Classification of Non-Proliferative   Diabetic Retinopathy using a Back-Propagation Neural Network</font></b></p>     <p align="center">&nbsp;</p>     <p align="center"><b><font size="3">Detecci&oacute;n y Clasificaci&oacute;n de Retinopat&iacute;a Diab&eacute;tica No   Proliferativa usando una Red Neuronal de Retropropagaci&oacute;n</font></b></p>     <p align="center">&nbsp;</p>     <p align="center">&nbsp;</p>     <p><b><i>Jes&uacute;s   Salvador Vel&aacute;zquez-Gonz&aacute;lez</i><sup>1</sup><i>, Alberto Jorge   Rosales-Silva</i><sup>1<i>*</i></sup><i>, Francisco Javier   Gallegos-Funes</i><sup>1</sup><i>, Guadalupe de Jes&uacute;s</i> <i>Guzm&aacute;n-B&aacute;rcenas</i><sup>2</sup></b></p>     <p><sup>1 </sup>Escuela   Superior de Ingenier&iacute;a Mec&aacute;nica y El&eacute;ctrica Unidad Zacatenco, Instituto   Polit&eacute;cnico Nacional. Av. Luis Enrique Erro s/n, Unidad Profesional Adolfo   L&oacute;pez Mateos, Zacatenco, Delegaci&oacute;n Gustavo A. Madero. C.P. 07738. M&eacute;xico D.F.,   M&eacute;xico.</p>     ]]></body>
<body><![CDATA[<p><sup>2</sup> Centro Interdisciplinario de Ciencias de la Salud Unidad Santo Tomas, Instituto Polit&eacute;cnico Nacional. Av. de los Maestros s/n casi esquina, Calzada de los Gallos Col., Santo Tom&aacute;s, Delegaci&oacute;n Miguel Hidalgo. C.P. 11340. M&eacute;xico D.F., M&eacute;xico. </p>     <p>* Corresponding author: Alberto Jorge Rosales Silva, e-mail: <a href="mailto:arosaless@ipn.mx"> arosaless@ipn.mx</a></p>     <p>&nbsp;</p>     <p align="center">(Received February 12, 2014; accepted October 20, 2014)</p>     <p>&nbsp;</p>     <p>&nbsp;</p> <hr noshade size="1">     <p><b><font size="3">Abstract</font></b></p>     <p>One of the most serious complications of type 2 Diabetes Mellitus (DM) is Diabetic Retinopathy (DR). DR is a silent disease, recognized only when the changes on the retina have progressed to a level at which treatment becomes complicated, so early diagnosis and referral to an ophthalmologist or optometrist for the management of this disease can prevent 98% of severe visual loss. The aim of this work is to automatically identify Non-Diabetic Retinopathy (NDR) and Background Retinopathy using fundus images. Our results show a classification accuracy of 92%, with sensitivity and specificity of 95%.</p>     <p><i>Keywords</i>: Diabetic Retinopathy, early diagnosis, automatic identification, fundus images</p>   <hr noshade size="1">     <p><b><font size="3">Resumen</font></b></p>     ]]></body>
<body><![CDATA[<p>Una de las complicaciones m&aacute;s graves de la Diabetes Mellitus tipo 2 es la Retinopat&iacute;a Diab&eacute;tica (RD). La RD es una enfermedad silenciosa y solo es reconocida por el portador cuando los cambios en la retina han progresado a un nivel en el cual el tratamiento se complica, por lo que el diagn&oacute;stico oportuno y la remisi&oacute;n al oftalm&oacute;logo u optometrista para el manejo de esta enfermedad pueden prevenir el 98% de la p&eacute;rdida visual grave. El objetivo de este trabajo es identificar de manera autom&aacute;tica la No Retinopat&iacute;a Diab&eacute;tica (NRD) y la Retinopat&iacute;a de Fondo, utilizando im&aacute;genes del fondo de ojo. Nuestros resultados muestran una efectividad del 92%, con una sensibilidad y especificidad del 95%.</p>     <p><i>Palabras clave</i>: Retinopat&iacute;a Diab&eacute;tica, diagn&oacute;stico temprano, identificaci&oacute;n autom&aacute;tica, im&aacute;genes de fondo</p>   <hr noshade size="1">     <p><b><font size="3">Introduction</font></b></p>     <p>DM is defined as a set of chronic and degenerative disorders that involves alterations in the metabolism of carbohydrates, lipids, and proteins, as a consequence of a decrease in the production of the hormone insulin by the &#946; cells of the pancreas, and a resistance to the hormone's action in the different tissues [1]. One of the most serious complications of DM is DR [2], which is the main cause of blindness worldwide in the economically active population, because it affects people between 20 and 74 years old [3, 4]. Two clinical types of DR exist: Non-Proliferative Diabetic Retinopathy (NPDR), also called Background Retinopathy, and Proliferative Diabetic Retinopathy (PDR), as shown in <a href="#Figura1">figure 1</a>. Unfortunately, DR is commonly detected only in advanced stages (PDR), with an unfavorable prognosis even with the right treatment. Thus, a timely diagnosis in the first stages of NPDR, namely mild (level 1), moderate (level 2) and severe (level 3), together with proper referral to the visual health specialists for opportune treatment, can prevent severe visual loss in up to 98% of cases [5]. The diagnosis emitted by the visual health specialist, based on the observation of retinal damage or retinal lesions in fundus images (<a href="#Figura2">Figure 2</a>), has an approximate precision of 90% [6]. The presence of those lesions in various degrees determines whether the NPDR is 'mild', 'moderate' or 'severe'.</p>     <p align="center"><a name="Figura1"></a><img src="img/revistas/rfiua/n74/n74a07i01.gif" /></p>     <p align="center"><a name="Figura2"></a><img src="img/revistas/rfiua/n74/n74a07i02.gif" /></p>     <p>The first clinically detectable lesions of DR are the microaneurysms [8], which are focal dilations of the walls of the retinal capillaries that appear as small red points [9] with diameters between 10 and 100 microns. The hard exudates are deposits of lipids, the result of leakage of blood fluids from the microaneurysms and the capillaries that surround the macular area. If the lipid extends to the macular zone, the vision may be seriously compromised [10]. The hard exudates have a yellowish appearance, they have no defined form or size, and they can be located in any part of the retina [8, 10].</p>     <p>A Computer-Aided Diagnosis System (CADx System) is defined as the combination of digital image processing techniques and intelligent methods such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL), employed to improve the diagnosis made by medical interpretation and to provide a more efficient diagnosis [11]. 
With the proposed CADx System, there exists the opportunity to analyze the digital fundus images, and to assist in the visualization and quantification of the anatomic structures and the presence of anatomo-pathological alterations such as blood vessel segmentation, microaneurysms and hard exudates.</p>     <p>There have been several research efforts to detect and classify NDR and the stages of NPDR. In [12], the use of a Back-Propagation Neural Network (BPNN) is proposed to classify NPDR. The network reached a sensitivity of 88.4% and a specificity of 83.5% for DR detection (where sensitivity is the probability of correctly classifying a sick individual, and specificity is the probability of correctly classifying a healthy individual). The pre-processing filters used for each of the detected features (blood vessels, exudates and haemorrhages) were as follows: median smoothing was used for the detection of exudates and haemorrhages. This was a 9-point neighbourhood median filter: for each pixel, the nearest 8 neighbouring pixels were compared and the median value substituted into the centre pixel. Maximum median filtering was used for exudates, where the maximum neighbourhood value was substituted, and minimum median filtering was used for haemorrhages, where the minimum pixel value was used. For the detection of blood vessels, a 9-point averaging filter was first used, followed by a Sobel edge detection filter. In [13], a novel computer-based image analysis method is proposed, developed to assist and automate the diagnosis of retinal disease. They use feature description models and perceptual organization for low-level analysis, and spatial relationships and clinical metadata to extract semantic information in a higher-level analysis. The sensitivity and accuracy for NPDR ranged from 75% to 94.7%. 
In [14], an automated system based on ANN is proposed for eye disease classification. Abnormal fundus images from four different classes, namely NPDR, Central Retinal Vein Occlusion (CRVO), Choroidal Neovascularization Membrane (CNVM) and Central Serous Retinopathy (CSR), are used in that work. A suitable feature set is extracted from the pre-processed images and fed to the classifier. Classification of the four eye diseases is performed using a supervised ANN, namely a BPNN; the network achieved a sensitivity of 84% and a specificity of 97.3%. The results are compared with a statistical classifier, namely the <i>minimum distance classifier</i>, to justify the superior nature of the ANN-based classification. In [15], an automated DR diagnosis system is proposed to detect various lesions of the retina, i.e. exudates, microaneurysms and hemorrhages, using mathematical morphological operators and filters, a genetic algorithm and Fuzzy clustering. They did not present the sensitivity and specificity of their work, but they reached an interesting conclusion: there are certain features present in the normal physiology of the retina which have to be differentiated from the abnormal pathology, e.g. the optic disc has the same pixel brightness as the exudates and thus has to be localized before establishing the presence of the exudates. Similarly, the blood vessels and the fovea region have to be subtracted from the retinal image before diagnosing microaneurysms and hemorrhages, which is one of the principles of our work. In [16], automated detection of DR for early diagnosis is proposed using feature extraction and a Support Vector Machine (SVM), with an average accuracy of 93%. The SVM classifier was trained through supervised learning on the extracted features to classify the retinal images. 
In [17], a method is proposed for retinal image analysis through efficient detection of exudates, which recognizes the retina as normal or abnormal. 110 images were used for training and testing in order to extract the exudates and blood vessels. In that system, they used a Probabilistic Neural Network (PNN) for training and testing the pre-processed images, reaching 98% accuracy in the detection of the exudates in the retina. In [18], the effectiveness of two non-stereoscopic digital 50-degree photographs of each eye (one centered on the fovea with the nasal edge of the optic disc at the edge of the photograph, and a nasal field with one disc diameter at the temporal edge of the optic disc) is studied for the grading of DR, in comparison to 35-mm color slides. Two-field digital non-stereoscopic retinal photographs and two-field 35-mm retinal photographs were made at the same time from patients visiting a DR outpatient clinic. The digital images were stored integrally (TIFF file) and in a compressed way (JPEG file). Two ophthalmologists assessed the photographs in a masked fashion. The sensitivity for the detection of vision-threatening DR using the JPEG-stored images was 0.72-0.74, and the specificity was 0.93-0.98. The sensitivity for vision-threatening retinopathy detection using the integrally stored images was 0.86-0.92, and the specificity was 0.93. They concluded that the compression of the digital images seems to have some adverse effect on the detection of DR.</p>     <p><b><font size="3">Materials and Proposed Methodology</font></b></p>     ]]></body>
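The neighbourhood filters attributed to [12] above (9-point median smoothing, maximum and minimum median filtering) can be sketched in a few lines of numpy. This is our illustrative reading of that description, not the original authors' code; the function name, the 3x3 window and the edge-padding strategy are our assumptions.

```python
import numpy as np

def median_smooth(img, mode="median"):
    """9-point neighbourhood filter as described for [12]: each pixel is
    replaced by the median of its 3x3 neighbourhood ('median'), by the
    maximum ('max', used there for exudates), or by the minimum
    ('min', used there for haemorrhages)."""
    padded = np.pad(img, 1, mode="edge")  # replicate borders (our choice)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            if mode == "median":
                out[i, j] = np.median(window)
            elif mode == "max":
                out[i, j] = window.max()
            else:
                out[i, j] = window.min()
    return out
```

For example, an isolated bright outlier in a dark neighbourhood is suppressed by the median mode but preserved by the max mode, which is why the max variant highlights bright exudates.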
<body><![CDATA[<p>The data set was downloaded from TECHNO-VISION [19] and consists of 216 RGB color retinal images (54 normal images without DR, and 54 for each stage of NPDR). The images were taken with a TOPCON TRC NW6 fundus camera interfaced to a computer, and stored in 24-bit TIFF format with image sizes of 3504 x 2336 pixels, 2304 x 1536 pixels and 1440 x 960 pixels. For this work, 143 samples were used for training and the remaining 73 samples were used for testing the ANN.</p>     <p><b><i>Preprocessing</i></b></p> </font>     <p><font size="2" face="Verdana">The fundus images were preprocessed using different image preprocessing techniques such as normalization, color decomposition, color space conversion, image enhancement and intensity inversion. <i>Normalization</i>: standardizes the size of the images at 720 x 560 pixels using bicubic interpolation. <i>Color decomposition</i>: separates the channels of an image into separate images or layers, in our case Red, Green and Blue. <i>Space conversion</i>: transforms the RGB color image to a grayscale image applying the equation <i>Gray</i> = 0.299R + 0.587G + 0.114B, where <i>R</i>, <i>G</i>, and <i>B</i> are the Red, Green, and Blue channels, respectively. It has been shown in the scientific literature that working on grayscale images instead of the three color planes makes no difference in the quantitative results, only in computational load. <i>Contrast enhancement</i>: enhances details not perceived by the human eye; the method we used for contrast enhancement was histogram equalization, which adjusts a low-contrast grayscale image. <i>Intensity inversion</i>: enhances white or gray details. 
It is used especially when the dark areas are dominant in size, and it is achieved applying the equation <i>s = L - 1 - r</i>, where <i>s</i> is the pixel value after processing, <i>L</i> is the number of intensity levels of the image, and <i>r</i> is the pixel value before processing [20].</font></p> <font face="Verdana" size="2">    <p><b><i>Segmentation</i></b></p>     <p>The next step is to segment the obtained image into homogeneous regions for automatic analysis and recognition. The general algorithms used in the development of the proposed CADx System are well-known in the literature [21-24]. <i>Binarization</i>: defined by <img src="img/revistas/rfiua/n74/n74a07ea01.gif" />, where the threshold is established adaptively by Otsu's algorithm [23]. <i>Edge detection</i> [20, 24]: Canny's algorithm was employed for this operation [25]. <i>Mathematical morphology</i>: <i>Dilation</i> and <i>Erosion</i> were used [20]. The equation applied for closing is <i>A</i>&#9679;<i>B</i> = (<i>A</i>&#8853;<i>B</i>)&#8854;<i>B</i> and for opening it is <i>A</i>&#9675;<i>B</i> = (<i>A</i>&#8854;<i>B</i>)&#8853;<i>B</i>.</p>     <p><b><i>Features extraction</i></b></p>     <p>Consists of extracting relevant information about the features and regions of interest (ROIs) from the retinal images, which will form the input layer of the ANN (see <b><i>Decision making</i></b>). In the proposed CADx System the ROIs are blood vessels (BV), microaneurysms (&micro;Ans) and hard exudates (HE), while the texture (homogeneity and entropy) is a feature of interest in the fundus image. A brief description of those extractions is given below.</p>     <p><i>Border Formation</i></p>     <p>Two methods were implemented, as shown in <a href="#Figura3">figure 3</a>. The border contains noise if the output pixel differs from the reference binary pixel ''1'' (white). When both pixels are binary 1, the resulting image is the circular border.</p>     <p align="center"><a name="Figura3"></a><img src="img/revistas/rfiua/n74/n74a07i03.gif" /></p>     ]]></body>
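The preprocessing steps described above (space conversion, intensity inversion and histogram equalization) can be sketched as follows. This is a minimal numpy illustration of the stated formulas, assuming 8-bit intensities (L = 256); the function names are ours.

```python
import numpy as np

def to_gray(rgb):
    # Space conversion: Gray = 0.299 R + 0.587 G + 0.114 B
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def invert(gray, levels=256):
    # Intensity inversion s = L - 1 - r; enhances white or gray details
    return levels - 1 - gray

def equalize(gray, levels=256):
    # Histogram equalization: map each gray level through the normalized
    # cumulative histogram to spread a low-contrast image over [0, L-1]
    hist = np.bincount(gray.astype(np.uint8).ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (levels - 1) * cdf / cdf[-1]
    return cdf[gray.astype(np.uint8)]
```

In this sketch a pure red pixel maps to the gray value 0.299 x 255, and inverting a black pixel (0) yields 255, which is the behaviour the text relies on when dark areas dominate the fundus image.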
<body><![CDATA[<p><i>Blood Vessels (BV)</i></p>     <p><a href="#Figura4">Figure 4</a> shows the block diagram of the developed algorithm for blood vessel segmentation in an RGB color fundus image. The logical AND operation, performed on the noisy and optical disk images, is applied to obtain the final segmentation of the BV. The first one is the image of the BV with noise presence (see <a href="#Figura5">Figure 5i</a>) and the second one contains the BV and the Optical Disk (OD) (see <a href="#Figura6">Figure 6e</a>). The OD is represented as a black circle (<a href="#Figura5">Figure 5d</a>, <a href="#Figura5">5e</a> and <a href="#Figura5">5f</a>); this is achieved by finding the maximum brightness value in every one of the 720 columns of the image. The elimination of the OD (<a href="#Figura5">Figure 5g</a>) is obtained by subtracting the equalized image (<a href="#Figura5">Figure 5e</a>) from the image resulting from the morphological opening (<a href="#Figura5">Figure 5f</a>); after this procedure, the image is binarized (<a href="#Figura5">Figure 5h</a>) and a new morphological opening is performed to eliminate the noise in the image (<a href="#Figura5">Figure 5i</a>).</p>     <p align="center"><a name="Figura4"></a><img src="img/revistas/rfiua/n74/n74a07i04.gif" /></p>     <p align="center"><a name="Figura5"></a><img src="img/revistas/rfiua/n74/n74a07i05.gif" /></p> </font>     <p><font size="2" face="Verdana">Sometimes certain details, corresponding to the BV in the OD region, are lost after the segmentation stage. For this reason, it is necessary to create a mask of the OD [26], for which the expression <img src="img/revistas/rfiua/n74/n74a07ea02.gif" /> is employed, where <i>h</i> and <i>k</i> are the coordinates of the rows and columns respectively, and <i>R</i> is the radius of the circle or mask of the OD (see <a href="#Figura6">Figure 6a</a>). The mask creation (OD creation) is used in the detection of BV, &micro;Ans and HE. Subsequently, the contrast of the image is improved (<a href="#Figura6">Figure 6b</a>); after this stage, the image is binarized through Otsu's algorithm (<a href="#Figura6">Figure 6c</a>), a morphological opening procedure is applied to the binarized image (<a href="#Figura6">Figure 6d</a>), and then the mask of the OD is overlapped on it (<a href="#Figura6">Figure 6e</a>). Finally, the logical AND operation is implemented using the images corresponding to the BV in presence of noise (<a href="#Figura5">Figure 5i</a>) and the BV connected to the mask with the OD (<a href="#Figura6">Figure 6e</a>).</font></p> <font face="Verdana" size="2">    <p>The AND logic is applied to mark out the similar pixels of the two images. The output pixel is registered as binary 1 (white) when both pixels of the images are binary 1 (white). The resulting image is a higher quality BV image (<a href="#Figura6">Figure 6f</a>).</p>     <p align="center"><a name="Figura6"></a><img src="img/revistas/rfiua/n74/n74a07i06.gif" /></p>     <p><i>Microaneurysms</i></p>     <p><a href="#Figura7">Figure 7</a> shows the microaneurysms (&micro;Ans) segmentation with noise presence. The grayscale image was used to detect the circular border and the OD mask, while the green component of the color image was used in the rest of the algorithm. The first step in the elimination process of the BVs is achieved by performing a morphological opening on the image, computing the second contrast enhancement, and then applying a logical AND operation between the previous image and the image without HE. To eliminate the HE, it is necessary to apply a logical AND operation between the image binarized after the first contrast enhancement and the resulting image of the &micro;Ans segmentation with noise presence. 
To achieve this, we execute a logical AND operation using the resulting images of the edge detection after the first contrast enhancement and the morphological opening of the edge detection before the contrast enhancement. The second input is the OD mask, and the third is the morphological opening of the edge detection without any contrast enhancement.</p>     <p align="center"><a name="Figura7"></a><img src="img/revistas/rfiua/n74/n74a07i07.gif" /></p>     ]]></body>
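The OD mask and the logical AND combination used throughout the pipeline above can be sketched as follows. The exact mask expression appears only as an image in the original, so this is a standard circular-mask formulation consistent with the text (h and k index rows and columns, R is the radius); the center argument is our assumption, since the paper locates the OD via the maximum-brightness column.

```python
import numpy as np

def od_mask(shape, center, R):
    """Binary circular mask for the Optical Disk (OD): a pixel (h, k)
    belongs to the mask when (h - h0)^2 + (k - k0)^2 <= R^2."""
    h, k = np.ogrid[:shape[0], :shape[1]]
    return (h - center[0]) ** 2 + (k - center[1]) ** 2 <= R ** 2

def combine_and(vessels_noisy, vessels_with_od):
    """Logical AND of two binary segmentations: the output pixel is
    binary 1 (white) only where both input pixels are binary 1."""
    return np.logical_and(vessels_noisy, vessels_with_od)
```

The AND step is what removes spurious white pixels: anything marked as vessel in only one of the two intermediate images is discarded.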
<body><![CDATA[<p><a href="#Figura8">Figure 8</a> shows the microaneurysms (&micro;Ans) segmentation, where <a href="#Figura8">Fig. 8a</a> illustrates the binarized image, <a href="#Figura8">Fig. 8b</a> shows the hard exudates suppression and, finally, <a href="#Figura8">Fig. 8f</a> presents the final segmentation of the microaneurysms, efficiently isolating the blood vessels.</p>     <p align="center"><a name="Figura8"></a><img src="img/revistas/rfiua/n74/n74a07i08.gif" /></p>     <p><i>Hard Exudates</i></p>     <p>The block diagram of <a href="#Figura9">figure 9</a> shows the process to obtain the hard exudates.</p>     <p align="center"><a name="Figura9"></a><img src="img/revistas/rfiua/n74/n74a07i09.gif" /></p>     <p>We apply contrast enhancement to improve the image before using Canny's algorithm to detect the outlines of the image. Afterwards, we implemented the morphological gradient to obtain the circular border of the fundus image. Morphological closing, which consists of dilation followed by erosion, was the method used for the BV suppression. The dilation function expands the area of the hard exudates, while the erosion function removes the BV, as shown in <a href="#Figura10">Fig. 10d</a>.</p>     <p align="center"><a name="Figura10"></a><img src="img/revistas/rfiua/n74/n74a07i10.gif" /></p>     <p>The next step consists of detecting the location of the hard exudates and eliminating all objects foreign to them. To eliminate the OD, the image without the BV is binarized and the obtained images are subtracted from the OD mask. To eliminate the circular border, the image without the OD is subtracted from the image of the circular edge of the fundus image; morphological closing is applied to the result of these differences, which is afterwards binarized and its values inverted. Finally, the logical AND operation is implemented between the image resulting from the morphological closing and the binarized one. <a href="#Figura11">Figure 11</a> shows the results obtained in every one of the segmentation steps of the hard exudates.</p>     <p align="center"><a name="Figura11"></a><img src="img/revistas/rfiua/n74/n74a07i11.gif" /></p>     <p><i>Area of segmented features</i></p> </font>     ]]></body>
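The morphological closing used above for BV suppression (dilation followed by erosion) can be sketched on binary images with numpy alone. This is a generic textbook implementation under our own assumptions (a k x k square structuring element, zero padding for dilation), not the paper's exact operator.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element:
    each output pixel is the maximum of its neighbourhood."""
    p = k // 2
    padded = np.pad(img, p, mode="constant")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element:
    each output pixel is the minimum of its neighbourhood."""
    p = k // 2
    padded = np.pad(img, p, mode="constant", constant_values=1)
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def close(img, k=3):
    # Closing A . B = (A dilate B) erode B: the dilation expands bright
    # exudate regions and the erosion restores their size, filling in
    # thin dark structures such as vessels crossing them.
    return erode(dilate(img, k), k)
```

A small dark hole inside a bright region, for instance, is filled by the closing, which is the effect the text exploits when suppressing thin vessels inside exudate areas.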
<body><![CDATA[<p><font size="2" face="Verdana">The following formula is used to obtain the area parameters associated with the objects, by measuring the number of pixels within their contours: <img src="img/revistas/rfiua/n74/n74a04ea06.gif" />, where <i>seg(i,j)</i> is a pixel of the segmented object, and <i>M</i> and <i>N</i> are the spatial dimensions of the fundus image.</font></p> <font face="Verdana" size="2">    <p><i>Texture analysis</i></p>     <p>The conventional systems of texture analysis can be grouped into three categories: structural, statistical and spectral [3]. We propose the use of statistical algorithms based on the relationship between pixel intensity values [26]; the measures include the homogeneity and the entropy of the Gray Level Co-occurrence Matrix (GLCM) [22]. <a href="#Figura12">Figure 12</a> presents the block diagram of the proposed texture analysis.</p>     <p align="center"><a name="Figura12"></a><img src="img/revistas/rfiua/n74/n74a07i12.gif" /></p> </font>     <p><font size="2" face="Verdana">The GLCM is obtained by computing the frequency of each pixel pair occurring for different combinations of their brightness values in an image. For a two-dimensional image &#402;<i>(x,y)</i> with N discrete gray levels, we define the GLCM <i>P<sub>d,&phi;</sub> (a,b)</i> for each distance d and direction <i>&phi;</i>, which is given by (Eq. 1).</font></p> <font face="Verdana" size="2">    <p><img src="img/revistas/rfiua/n74/n74a07e01.gif" /></p>     <p>The homogeneity is the measurement of the closeness of the distribution of elements in the GLCM to the GLCM diagonal, and returns a value between 0 and 1. The homogeneity can be mathematically written as <img src="img/revistas/rfiua/n74/n74a07ea04.gif" />, where <img src="img/revistas/rfiua/n74/n74a07ea05.gif" /> describes the repetition frequency of the pixel pair <i>a</i> and <i>b</i> in the window, separated by a distance <i>d</i> in the direction <i>&phi;</i> [20]. On the other hand, the entropy is the statistical measure of the randomness of the grayscale image's texture, and it is defined as <img src="img/revistas/rfiua/n74/n74a07ea06.gif" />. Here, adaptive histogram equalization is applied twice to enhance the contrast and texture of the green channel of the fundus image.</p>     <p><b><i>Decision making</i></b></p>     <p>The diagnosis of the NDR and NPDR stages is achieved through the analysis of fundus images by means of the BPNN shown in <a href="#Figura13">figure 13</a> [10]. The input layer is composed of five neurons, which correspond to the number of characteristics employed for the detection and classification of the NDR and NPDR stages: blood vessels, microaneurysms, hard exudates, homogeneity and entropy. The hidden part is formed by two layers with ten neurons each; the number of neurons was determined using Kolmogorov's theorem [23]. The output layer classifies four classes: NDR (Normal), Mild NPDR, Moderate NPDR and Severe NPDR.</p>     <p align="center"><a name="Figura13"></a><img src="img/revistas/rfiua/n74/n74a07i13.gif" /></p>     ]]></body>
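The GLCM construction and the two texture measures used above can be sketched as follows. The paper's exact formulas appear only as equation images, so these are the standard GLCM homogeneity and entropy definitions, which match the surrounding description; the displacement convention and the number of gray levels are our assumptions.

```python
import numpy as np

def glcm(img, d=(0, 1), levels=8):
    """Gray Level Co-occurrence Matrix P_{d,phi}(a, b): frequency of pixel
    pairs with values (a, b) separated by displacement d, normalized so
    the entries sum to 1. Here d = (0, 1) encodes distance 1, phi = 0."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    dy, dx = d
    for i in range(h):
        for j in range(w):
            ii, jj = i + dy, j + dx
            if 0 <= ii < h and 0 <= jj < w:
                P[img[i, j], img[ii, jj]] += 1
    return P / P.sum()

def homogeneity(P):
    # Closeness of the GLCM mass to its diagonal, in [0, 1]:
    # sum over (a, b) of P(a, b) / (1 + |a - b|)
    a, b = np.indices(P.shape)
    return np.sum(P / (1.0 + np.abs(a - b)))

def entropy(P):
    # Statistical randomness of the texture: -sum P log2 P (nonzero entries)
    nz = P[P > 0]
    return -np.sum(nz * np.log2(nz))
```

A perfectly uniform patch concentrates all GLCM mass on one diagonal cell, giving homogeneity 1 and entropy 0, which is the intuition behind using these two values as texture features of the fundus image.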
<body><![CDATA[<p>The network was trained using a   supervised learning method with a given set of training data of 143 fundus   images and then tested with 73 samples. During the training phase, each output   of the BPNN is a value in the range [0,1], whereas the 'desired' output value   is either [0,1]. The mean square error of the BPNN was 0.001 and it was   obtained after 290 iterations (<a href="#Figura14">Figure 14</a>). The convergence value indicates that the maximum error accepted for every sample must be less than 0.1%.</p>     <p align="center"><a name="Figura14"></a><img src="img/revistas/rfiua/n74/n74a07i14.gif" /></p>     <p><b><font size="3">Experimental Results</font></b></p>     <p>The features such as <i>blood vessels</i>, <i>microaneurysms</i>, <i>hard exudates</i> areas, homogeneity and   entropy values were extracted using the above algorithms. The parameter values   obtained are shown in <a href="#Tabla1">table 1</a>.</p>     <p align="center"><a name="Tabla1"></a><img src="img/revistas/rfiua/n74/n74a07t01.gif" /></p>     <p>The <i>p</i>-value   can be obtained using ANOVA (analysis of variance between groups) test [27].   The result of the ANOVA test for the perimeter and area of different types of   images is shown in <a href="#Tabla2">Table 2</a>. It can be seen from <a href="#Figura15">Table 2</a> that our features are   clinically significant (<i>p</i>&lt;0.01).   Figure 15 shows the final segmentation of the blood vessels, microaneurysms and   hard exudates using four fundus images in RGB color space in TIFF format; one   of this fundus images does not have DR and the remaining three present some   stage of the NPDR. 
These results were obtained after analyzing the 73 fundus images, of which 19 are without DR and 54 present NPDR.</p>     <p align="center"><a name="Figura15"></a><img src="img/revistas/rfiua/n74/n74a07i15.gif" /></p>     <p>On the other hand, these results can be presented in a quantitative way through quality indexes employed in the evaluation of medical diagnosis systems, namely the sensitivity and the specificity [13]. The <i>Sensitivity</i> [28] is mathematically expressed by (Eq. 2):</p>     <p><img src="img/revistas/rfiua/n74/n74a07e02.gif" /></p>     <p>The <i>Specificity</i> [28] is mathematically expressed by (Eq. 3):</p>     ]]></body>
<body><![CDATA[<p><img src="img/revistas/rfiua/n74/n74a07e03.gif" /></p> </font>     <p><font size="2" face="Verdana">where <i>True Positives (TP)</i> occur when the sickness is present and the patient is correctly diagnosed as sick, <i>False Positives (FP)</i> when the sickness is not present but the patient is diagnosed as sick, <i>True Negatives (TN)</i> when the sickness is not present and the patient is diagnosed as healthy, and <i>False Negatives (FN)</i> when the sickness is present but is not diagnosed. In this paper, the BPNN also made mistakes that do not fall into the preceding medical classification (<i>False Positives</i> and <i>False Negatives</i>): in some <i>True Positive</i> diagnoses, the NN was unable to indicate the pertinent stage of a genuinely sick patient. Therefore, it was necessary to compute the error, defined as the total number of <i>True Positives</i> not classified in the right stage (<i>Errors<sub>total</sub></i>) divided by the total number of samples, which is mathematically expressed by (Eq. 4):</font></p> <font face="Verdana" size="2">    <p><img src="img/revistas/rfiua/n74/n74a07e04.gif" /></p>     <p><a href="#Tabla2">Table 2</a> shows the relationship between the results of the diagnostic tests and the presence or absence of the illness. 
The total number of samples employed was 73 images.</p>     <p align="center"><a name="Tabla2"></a><img src="img/revistas/rfiua/n74/n74a07t02.gif" /></p>     <p>Sensitivity, specificity and error tests were conducted to evaluate the BPNN (see <a href="#Tabla3">Table 3</a>).</p>     <p align="center"><a name="Tabla3"></a><img src="img/revistas/rfiua/n74/n74a07t03.gif" /></p>     <p><b><font size="3">Discussion of Results</font></b></p>     <p>From the segmented images, the areas of the blood vessels, microaneurysms and hard exudates are calculated by counting the total number of blood vessel, microaneurysm and hard exudate pixels, respectively. The set of features that provides the most meaningful information for classification is extracted from the selected cluster using the GLCM. The features extracted from the selected clusters are Homogeneity and Entropy. Homogeneity measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal; its range is [0, 1], and it equals 1 for a diagonal GLCM. In a homogeneous image there are very few dominant gray-tone transitions, so the GLCM has fewer entries of large magnitude and the energy of the image is high. Entropy measures the randomness of the gray-level distribution; it is high when the gray levels are distributed randomly throughout the image.</p>     <p>Thus, the total number of features used in this work is five, from two different categories. These features are found to be well suited to medical image processing. In order to know the behavior of the BPNN in every stage of the NPDR, the error was computed in percentages. <a href="#Tabla4">Table 4</a> presents the error percentages corresponding to each of the NPDR stages.</p>     ]]></body>
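The quality indexes of Eqs. 2-4 reduce to simple ratios over the TP/FP/TN/FN counts. A minimal Python sketch follows, assuming the standard definitions of sensitivity and specificity from [28]; the counts used are hypothetical, chosen only to illustrate the calculation, and are not the paper's results.

```python
def sensitivity(tp, fn):
    # Eq. 2: proportion of truly sick patients diagnosed as sick.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Eq. 3: proportion of healthy patients diagnosed as healthy.
    return tn / (tn + fp)

def stage_error(misstaged_tp, total_samples):
    # Eq. 4: True Positives assigned to the wrong NPDR stage
    # (Errors_total), divided by the total number of samples.
    return misstaged_tp / total_samples

# Hypothetical counts for a 73-image test set (illustration only):
tp, fn, tn, fp = 52, 2, 18, 1
print(f"sensitivity = {sensitivity(tp, fn):.2f}")
print(f"specificity = {specificity(tn, fp):.2f}")
print(f"stage error = {stage_error(3, 73):.2%}")
```

Note that Eq. 4 penalizes a correct sick/healthy decision whose NPDR stage is wrong, which is exactly the error mode the text describes for the BPNN.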
<body><![CDATA[<p align="center"><a name="Tabla4"></a><img src="img/revistas/rfiua/n74/n74a07t04.gif" /></p>     <p>An error of 0% was obtained for light NPDR, which indicates that the CADx System is capable of classifying this stage. An error of 5% was obtained for moderate NPDR, while the error in the classification of severe NPDR was 3%. These results show that in some cases the CADx System was unable to accurately classify the stage of a genuinely sick patient; this is due to the similarities that exist between the characteristics that define both stages.</p>     <p>The CADx System shown in this work is based on basic image processing methods, which allow the NDR and NPDR stages to be detected and diagnosed in an automatic and rapid manner (less than 1 minute), with good results and low computational cost compared with the other methods used. <a href="#Tabla5">Table 5</a> shows the comparative results between the proposed algorithm and others found in the literature review.</p>     <p align="center"><a name="Tabla5"></a><img src="img/revistas/rfiua/n74/n74a07t05.gif" /></p>     <p><b><font size="3">Conclusions</font></b></p>     <p>Diabetic Retinopathy is a condition in which the retina is damaged by fluid leaking from the blood vessels into the retina. In extreme cases, the patient becomes blind. Therefore, early detection of Diabetic Retinopathy is crucial to prevent blindness. In this paper, a Computer Aided Diagnosis System was developed to analyze digital RGB color fundus images for three features of Non-Proliferative Diabetic Retinopathy (blood vessels, microaneurysms and hard exudates) and two features of the image (homogeneity and entropy). The proposed system demonstrated a classification accuracy of 92%, with sensitivity and specificity of 95%. 
These results revealed that the proposed CADx System can help the ophthalmologist to detect Diabetic Retinopathy at the early stages, or provide a second opinion that supports a more accurate diagnosis.</p>     <p>Finally, the accuracy of the system could be further improved by using more input features, taking more retinal images under uniform lighting conditions and applying more robust algorithms such as Fuzzy Logic, Higher Order Spectra and the Watershed Transformation, among others.</p>     <p><b><font size="3">Acknowledgments</font></b></p>     <p>This work is supported by the Instituto Polit&eacute;cnico Nacional de M&eacute;xico (IPN) and CONACyT.</p>     <p><b><font size="3">References</font></b></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>1. Organizaci&oacute;n Mundial de la Salud. <i>Diabetes. Nota descriptiva N.&deg; 312</i>. Available on: <a href="http://www.who.int/mediacentre/factsheets/fs312/es/index.html" target="_blank">http://www.who.int/mediacentre/factsheets/fs312/es/index.html</a> Accessed: October 9, 2014.<!-- end-ref --></p>     <!-- ref --><p>2. R. Frank. ''Diabetic Retinopathy''. <i>The New England Journal of Medicine</i>. Vol. 350. 2004. pp. 48-58.<!-- end-ref --></p>     <!-- ref --><p>3. D. Fong, L. Aiello, T. Gardner, G. King, G. Blankenship, et al. ''Retinopathy in Diabetes''. <i>Diabetes Care.</i> Vol. 27. 2004. pp. 584-587.<!-- end-ref --></p>     <!-- ref --><p>4. D. Browning. <i>Diabetic Retinopathy: Evidence Based Management.</i> 1<sup>st</sup> ed. Ed. Springer. New York, USA. 2010. pp. 31-61.<!-- end-ref --></p>     <!-- ref --><p>5. L. Verma, G. Prakash, H. Tewari. ''Diabetic Retinopathy: Time for Action. No complacency please''. <i>Bulletin of the World Health Organization</i>. Vol. 80. 2002. pp. 419-420.<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>6. S. Le, E. Lee, R. Kingsley, Y. Wan, D. Russell, R. Klein, A. Warn. ''Comparison of Diagnosis of Early Retinal Lesions of Diabetic Retinopathy Between a Computer System and Human Experts''. <i>Arch. Ophthalmol.</i> Vol. 119. 2001. pp. 509-515.<!-- end-ref --></p>     <!-- ref --><p>7. M. El-Bab, N. Shawky, A. Al-Sisi, M. Akhtar. ''Retinopathy and risk factors in diabetic patients from Al-Madinah Al-Munawarah in the Kingdom of Saudi Arabia''. <i>Clinical Ophthalmology.</i> Vol. 6. 2012. pp. 269-276.<!-- end-ref --></p>     <!-- ref --><p>8. A. Khurana. <i>Comprehensive Ophthalmology</i>. 4<sup>th</sup> ed. Ed. New Age International (P) Ltd., Publishers. New Delhi, India. 2007.<!-- end-ref --></p>     <!-- ref --><p>9. A. Fleming, S. Philip, K. Goatman, J. Olson, P. Sharp. ''Automated microaneurysm detection using local contrast normalization and local vessel detection''. <i>IEEE Transactions on Medical Imaging.</i> Vol. 25. 2006. pp. 1223-1232.<!-- end-ref --></p>     <!-- ref --><p>10. T. Walter, J. Klein, P. Massin, A. Erginay. ''A contribution of image processing to the diagnosis of diabetic retinopathy - detection of exudates in color fundus images of the human retina''. <i>IEEE Transactions on Medical Imaging</i>. Vol. 21. 2002. pp. 1236-1243.<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>11. M. Giger, N. Karssemeijer, S. Armato. ''Computer-Aided Diagnosis in Medical Imaging''. <i>IEEE Trans. Med. Imag.</i> Vol. 20. 2001. pp. 1205-1208.<!-- end-ref --></p>     <!-- ref --><p>12. G. Gardner, D. Keating, T. Williamson, A. Elliot. ''Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool''. <i>British Journal of Ophthalmology</i>. Vol. 80. 1996. pp. 940-944.<!-- end-ref --></p>     <!-- ref --><p>13. E. Chaum, T. Karnowski, V. Govindasamy, M. Abdelrahman, K. Tobin. ''Automated Diagnosis of Retinopathy by content-based image retrieval''. <i>The Journal of Retinal and Vitreous Diseases</i>. Vol. 28. 2008. pp. 1463-1477.<!-- end-ref --></p>     <!-- ref --><p>14. J. Anitha, D. Selvathi, D. Hemanth. <i>Neural Computing Based Abnormality Detection in Retinal Optical Images.</i> Proceedings of the IEEE International Advance Computing Conference (IACC). Patiala, India. 2009. pp. 630-635.<!-- end-ref --></p>     <!-- ref --><p>15. N. Singh, R. Chandra. ''Automated Early Detection of Diabetic Retinopathy Using Image Analysis Techniques''. <i>International Journal of Computer Applications</i>. Vol. 8. 2010. pp. 18-23.<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>16. D. Selvathi, N. Prakash, N. Balagopal. ''Automated Detection of Diabetic Retinopathy for Early Diagnosis using Feature Extraction and Support Vector Machine''. <i>International Journal of Emerging Technology and Advanced Engineering</i>. Vol. 2. 2013. pp. 103-108.<!-- end-ref --></p>     <!-- ref --><p>17. R. Radha, B. Lakshman. ''Retinal Image Analysis Using Morphological Process and Clustering Technique''. <i>Signal and Image Processing: An International Journal (SIPIJ).</i> Vol. 4. 2013. pp. 55-69.<!-- end-ref --></p>     <!-- ref --><p>18. C. Stellingwerf, P. Hardus, J. Hooymans. ''Assessing Diabetic Retinopathy using two-field digital photography and the influence of JPEG-compression''. <i>Documenta Ophthalmologica.</i> Vol. 108. 2004. pp. 203-209.<!-- end-ref --></p>     <!-- ref --><p>19. MESSIDOR, TECHNO-VISION. <i>MESSIDOR: methods to evaluate segmentation and indexing techniques in the field of retinal ophthalmology.</i> 2014. Available on: <a href="http://messidor.crihan.fr/index-en.php" target="_blank">http://messidor.crihan.fr/index-en.php</a> Accessed: October 9, 2014.<!-- end-ref --></p>     <!-- ref --><p>20. R. Gonzalez, R. Woods, S. Eddins. <i>Digital Image Processing using MATLAB</i>. 1<sup>st</sup> ed. Ed. Gatesmark Publishing. Knoxville, USA. 2009. pp. 334-358.<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>21. P. Qiu. <i>Image Processing and Jump Regression Analysis</i>. 1<sup>st</sup> ed. Ed. John Wiley &amp; Sons, Inc. New Jersey, USA. 2005. pp. 2-205.<!-- end-ref --></p>     <!-- ref --><p>22. J. Nayak, P. Bhat, R. Acharya, C. Lim, M. Kagathi. ''Automated Identification of Diabetic Retinopathy Stages Using Digital Fundus Images''. <i>J. Med. Syst.</i> Vol. 32. 2008. pp. 107-115.<!-- end-ref --></p>     <!-- ref --><p>23. N. Otsu. ''A Threshold Selection Method from Gray-Level Histograms''. <i>IEEE Transactions on Systems, Man and Cybernetics.</i> Vol. 9. 1979. pp. 62-66.<!-- end-ref --></p>     <!-- ref --><p>24. F. Cui, L. Zou, B. Song. <i>Edge Feature Extraction Based on Digital Image Processing Techniques</i>. Proceedings of the IEEE Int. Conference on Automation and Logistics. Qingdao, China. 2008. pp. 2320-2324.<!-- end-ref --></p>     <!-- ref --><p>25. J. Canny. ''A Computational Approach to Edge Detection''. <i>IEEE Transactions on Pattern Analysis and Machine Intelligence</i>. Vol. 8. 1986. pp. 679-698.<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>26. L. Boroczky, P. Cremonesi, N. Scarabottolo. <i>Texture Analysis for Image Processing on General-Purpose Parallel Machines</i>. Proceedings of the International Symposium on Parallel Architectures, Algorithms and Networks. Budapest, Hungary. 1994. pp. 17-24.<!-- end-ref --></p>     <!-- ref --><p>27. Centre for Innovation in Mathematics Teaching (CIMT). <i>Analysis of variance (Anova)</i>. Available on: <a href="http://www.cimt.plymouth.ac.uk/projects/mepres/alevel/fstats_ch7.pdf" target="_blank">http://www.cimt.plymouth.ac.uk/projects/mepres/alevel/fstats_ch7.pdf</a> Accessed: November 9, 2014.<!-- end-ref --></p>     <!-- ref --><p>28. A. Akobeng. ''Understanding diagnostic tests 1: sensitivity, specificity and predictive values''. <i>Acta P&aelig;diatrica</i>. Vol. 96. 2006. pp. 338-341.<!-- end-ref --></p>     ]]></body>
<body><![CDATA[</font>]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="">
<collab>Organización Mundial de la Salud</collab>
<source><![CDATA[Diabetes. Nota descriptiva]]></source>
<year></year>
<volume>312</volume>
</nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Frank]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Diabetic Retinopathy]]></article-title>
<source><![CDATA[The New England Journal of Medicine]]></source>
<year>2004</year>
<volume>350</volume>
<page-range>48-58</page-range></nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Fong]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Aiello]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Gardner]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[King]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Blankenship]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<etal/>
</person-group>
<article-title xml:lang="en"><![CDATA[Retinopathy in Diabetes]]></article-title>
<source><![CDATA[Diabetes Care]]></source>
<year>2004</year>
<volume>27</volume>
<page-range>584-587</page-range></nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Browning]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<source><![CDATA[Diabetic Retinopathy: Evidence Based Management]]></source>
<year>2010</year>
<page-range>31-61</page-range><publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[Ed. Springer]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Verma]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Prakash]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Tewari]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Diabetic Retinopathy: Time for Action. No complacency please]]></article-title>
<source><![CDATA[Articles from Bulletin of the World Health Organization]]></source>
<year>2002</year>
<volume>80</volume>
<page-range>419-420</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Le]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Lee]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Kingsley]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Wan]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Russell]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Klein]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Warn]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Comparison of Diagnosis of Early Retinal Lesions of Diabetic Retinopathy Between a Computer System and Human Experts]]></article-title>
<source><![CDATA[Arch. Ophthalmol]]></source>
<year>2001</year>
<volume>119</volume>
<page-range>509-515</page-range></nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[El-Bab]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Shawky]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Al-Sisi]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Akhtar]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Retinopathy and risk factors in diabetic patients from Al-Madinah Al-Munawarah in the Kingdom of Saudi Arabia]]></article-title>
<source><![CDATA[Clinical Ophthalmology]]></source>
<year>2012</year>
<volume>6</volume>
<page-range>269-276</page-range></nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Khurana]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<source><![CDATA[Comprehensive Ophthalmology]]></source>
<year>2007</year>
<publisher-loc><![CDATA[New Delhi, India]]></publisher-loc>
<publisher-name><![CDATA[New Age International (P) Ltd., Publishers]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Fleming]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Philip]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Goatman]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Olson]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Sharp]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automated microaneurysm detection using local contrast normalization and local vessel detection]]></article-title>
<source><![CDATA[IEEE Transactions on Medical Imaging]]></source>
<year>2006</year>
<volume>25</volume>
<page-range>1223-1232</page-range></nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Walter]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Klein]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Massin]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Erginay]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A contribution of image processing to the diagnosis of diabetic retinopathy - detection of exudates in color fundus images of the human retina]]></article-title>
<source><![CDATA[IEEE Transactions on Medical Imaging]]></source>
<year>2002</year>
<volume>21</volume>
<page-range>1236-1243</page-range></nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Giger]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Karssemeijer]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Armato]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Computer-Aided Diagnosis in Medical Imaging]]></article-title>
<source><![CDATA[IEEE Trans. Med. Imag]]></source>
<year>2001</year>
<volume>20</volume>
<page-range>1205-1208</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gardner]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Keating]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Williamson]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Elliot]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool]]></article-title>
<source><![CDATA[British Journal of Ophthalmology]]></source>
<year>1996</year>
<volume>80</volume>
<page-range>940-944</page-range></nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chaum]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Karnowski]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Govindasamy]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
<name>
<surname><![CDATA[Abdelrahman]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Tobin]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automated Diagnosis of Retinopathy by content-based image retrieval]]></article-title>
<source><![CDATA[The Journal of Retinal and Vitreous Diseases]]></source>
<year>2008</year>
<volume>28</volume>
<page-range>1463-1477</page-range></nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Anitha]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Selvathi]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Hemanth]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<source><![CDATA[Neural Computing Based Abnormality Detection in Retinal Optical Images]]></source>
<year>2009</year>
<publisher-loc><![CDATA[Patiala ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Singh]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Chandra]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automated Early Detection of Diabetic Retinopathy Using Image Analysis Techniques]]></article-title>
<source><![CDATA[International Journal of Computer Applications]]></source>
<year>2010</year>
<volume>8</volume>
<page-range>18-23</page-range></nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Selvathi]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Prakash]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Balagopal]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automated Detection of Diabetic Retinopathy for Early Diagnosis using Feature Extraction and Support Vector Machine]]></article-title>
<source><![CDATA[International Journal of Emerging Technology and Advanced Engineering]]></source>
<year>2013</year>
<volume>2</volume>
<page-range>103-108</page-range></nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Radha]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Lakshman]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Retinal Image Analysis Using Morphological Process and Clustering Technique]]></article-title>
<source><![CDATA[Signal and Image Processing: An International Journal (SIPIJ)]]></source>
<year>2013</year>
<volume>4</volume>
<page-range>55-69</page-range></nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Stellingwerf]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Hardus]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Hooymans]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Assessing Diabetic Retinopathy using two-field digital photography and the influence of JPEG-compression]]></article-title>
<source><![CDATA[Documenta Ophthalmologica]]></source>
<year>2004</year>
<volume>108</volume>
<page-range>203-209</page-range></nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="">
<collab>MESSIDOR, TECHNO-VISION</collab>
<source><![CDATA[MESSIDOR: methods to evaluate segmentation and indexing techniques in the field of retinal ophthalmology]]></source>
<year>2014</year>
</nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gonzalez]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Woods]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Eddins]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Digital Image Processing using MATLAB]]></source>
<year>2009</year>
<edition>1st</edition>
<page-range>334-358</page-range><publisher-loc><![CDATA[Knoxville ]]></publisher-loc>
<publisher-name><![CDATA[Ed. Gatesmark Publishing]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Qiu]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<source><![CDATA[Image Processing and Jump Regression Analysis]]></source>
<year>2005</year>
<edition>1st</edition>
<page-range>2-205</page-range><publisher-loc><![CDATA[New Jersey ]]></publisher-loc>
<publisher-name><![CDATA[Ed. John Wiley & Sons, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Nayak]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Bhat]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Acharya]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Lim]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Kagathi]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automated Identification of Diabetic Retinopathy Stages Using Digital Fundus Images]]></article-title>
<source><![CDATA[Journal of Medical Systems]]></source>
<year>2008</year>
<volume>32</volume>
<page-range>107-115</page-range></nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Otsu]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A Threshold Selection Method from Gray-Level Histograms]]></article-title>
<source><![CDATA[IEEE Transactions on Systems, Man and Cybernetics]]></source>
<year>1979</year>
<volume>9</volume>
<page-range>62-66</page-range></nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cui]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Zou]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Song]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
</person-group>
<source><![CDATA[Edge Feature Extraction Based on Digital Image Processing Techniques]]></source>
<year>2008</year>
<publisher-loc><![CDATA[Qingdao ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Canny]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A Computational Approach to Edge Detection]]></article-title>
<source><![CDATA[IEEE Transactions on Pattern Analysis and Machine Intelligence]]></source>
<year>1986</year>
<volume>8</volume>
<page-range>679-698</page-range></nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Boroczky]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Cremonesi]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Scarabottolo]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<source><![CDATA[Texture Analysis for Image Processing on General-Purpose Parallel Machines]]></source>
<year>1994</year>
<publisher-loc><![CDATA[Budapest ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="">
<collab>Centre for Innovation in Mathematics Teaching (CIMT)</collab>
<source><![CDATA[Analysis of variance (Anova)]]></source>
<year></year>
</nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Akobeng]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Understanding diagnostic tests 1: sensitivity, specificity and predictive values]]></article-title>
<source><![CDATA[Acta Pædiatrica]]></source>
<year>2006</year>
<volume>96</volume>
<page-range>338-341</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
