396,396 research outputs found

    Recognition of 3-D Objects from Multiple 2-D Views by a Self-Organizing Neural Architecture

    Full text link
    The recognition of 3-D objects from sequences of their 2-D views is modeled by a neural architecture, called VIEWNET, that uses View Information Encoded With NETworks. VIEWNET illustrates how several types of noise and variability in image data can be progressively removed while incomplete image features are restored and invariant features are discovered using an appropriately designed cascade of processing stages. VIEWNET first processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. Boundary regularization and completion are achieved by the same mechanisms that suppress image noise. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the fuzzy ARTMAP algorithm. Recognition categories of 2-D views are learned before evidence from sequences of 2-D view categories is accumulated to improve object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of jet aircraft with and without additive noise. A recognition rate of 90% is achieved with one 2-D view category and 98.5% with three 2-D view categories. National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-91-J-1309, N00014-91-J-4100, N00014-92-J-0499); Air Force Office of Scientific Research (F9620-92-J-0499, 90-0083)
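
    To illustrate the scale- and rotation-invariance step, the minimal NumPy sketch below resamples a binary figure onto a log-polar grid anchored at the figure's centroid; after this resampling, a change of object scale becomes a translation along the rho axis and an in-plane rotation a cyclic shift along theta. This is not the CORT-X 2/VIEWNET implementation; the grid sizes and nearest-neighbour sampling are assumptions for illustration.

```python
import numpy as np

def log_polar(image, n_rho=64, n_theta=64):
    """Resample a binary figure onto a log-polar grid centered on its
    intensity centroid. Scaling maps to a shift along rho; in-plane
    rotation maps to a cyclic shift along theta."""
    h, w = image.shape
    ys, xs = np.nonzero(image)
    cy, cx = ys.mean(), xs.mean()                    # figure centroid
    r_max = np.hypot(max(cy, h - 1 - cy), max(cx, w - 1 - cx))
    rhos = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_rho, n_theta), dtype=image.dtype)
    for i, r in enumerate(rhos):
        for j, t in enumerate(thetas):
            # Nearest-neighbour lookup of the source pixel.
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out
```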

    Use of laser range finders and range image analysis in automated assembly tasks

    Get PDF
    This work studies the effect of filtering processes on range images and evaluates the performance of two different laser range mappers. Median filtering is used to remove noise from the range images. First- and second-order derivatives are then used to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients which describe 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm is developed to compare the performance of the two laser range mappers based upon the range depth information of surfaces generated by each of the mappers. Furthermore, an approach based on 2-D analytic geometry is proposed which serves as a basis for the recognition of regular 3-D geometric objects.
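
    A minimal sketch of the filtering-and-comparison step described above, assuming SciPy's standard median filter; the window size and the use of a Laplacian as the second-order derivative are illustrative choices, not the thesis's exact pipeline.

```python
import numpy as np
from scipy.ndimage import median_filter, laplace

def denoise_and_compare(range_image, size=3):
    """Median-filter a range image and derive the quantities used to
    compare processed and original images: first-order gradient
    magnitude, a second-order (Laplacian) response, and the absolute
    change introduced by the filtering."""
    filtered = median_filter(range_image, size=size)
    gy, gx = np.gradient(filtered)        # first-order derivatives
    second = laplace(filtered)            # second-order derivative
    change = np.abs(filtered - range_image)
    return filtered, np.hypot(gx, gy), second, change
```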

    A system that learns to recognize 3-D objects

    Get PDF
    A system that learns to recognize 3-D objects from single and multiple views is presented. It consists of three parts: a simulator of 3-D figures, a learner, and a recognizer. The 3-D figure simulator generates and plots line drawings of certain 3-D objects. A series of transformations leads to a number of 2-D images of a 3-D object, which are considered as different views and are the basic input to the next two parts. The learner works in three stages using the method of learning from examples. In the first stage an elementary-concept learner learns the basic entities that make up a line drawing. In the second stage a multiple-view learner learns the definitions of 3-D objects that are to be recognized from multiple views. In the third stage a single-view learner learns how to recognize the same objects from single views. The recognizer is presented with line drawings representing 3-D scenes. A single-view recognizer segments the input into faces of possible 3-D objects, and attempts to match the segmented scene with a set of single-view definitions of 3-D objects. The result of the recognition may include several alternative answers, corresponding to different 3-D objects. A unique answer can be obtained by making assumptions about hidden elements (e.g. faces) of an object and using a multiple-view recognizer. Both single-view and multiple-view recognition are based on the structural relations of the elements that make up a 3-D object. Some analytical elements (e.g. angles) of the objects are also calculated, in order to determine point containment and convexity. The system performs well on polyhedra with triangular and quadrilateral faces. A discussion of the system's performance and suggestions for further development are given at the end. The simulator and the part of the recognizer that makes the analytical calculations are written in C. The learner and the rest of the recognizer are written in PROLOG.
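
    The thesis's learner and recognizer are written in PROLOG; the purely hypothetical Python toy below only illustrates the idea of matching a segmented single view against stored single-view definitions, with made-up face inventories, including the possibility of several alternative answers.

```python
# Hypothetical toy, not the thesis's code: a single-view definition is
# here reduced to a multiset of visible face types, and a segmented
# scene view is matched against every stored definition.
from collections import Counter

DEFINITIONS = {
    "tetrahedron": Counter({"triangle": 3}),       # assumed visible faces
    "cube":        Counter({"quadrilateral": 3}),
}

def match_view(segmented_faces):
    """Return all object labels whose single-view definition is
    consistent with the faces segmented from the line drawing."""
    observed = Counter(segmented_faces)
    return [name for name, faces in DEFINITIONS.items()
            if observed == faces]

print(match_view(["triangle", "triangle", "triangle"]))  # ['tetrahedron']
```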

    Pose-invariant, model-based object recognition, using linear combination of views and Bayesian statistics

    Get PDF
    This thesis presents an in-depth study on the problem of object recognition, and in particular the detection of 3-D objects in 2-D intensity images which may be viewed from a variety of angles. A solution to this problem remains elusive to this day, since it involves dealing with variations in geometry, photometry and viewing angle, noise, occlusions and incomplete data. This work restricts its scope to a particular kind of extrinsic variation: variation of the image due to changes in the viewpoint from which the object is seen. A technique is proposed and developed to address this problem, which falls into the category of view-based approaches, that is, a method in which an object is represented as a collection of a small number of 2-D views, as opposed to the generation of a full 3-D model. This technique is based on the theoretical observation that the geometry of the set of possible images of an object undergoing 3-D rigid transformations and scaling may, under most imaging conditions, be represented by a linear combination of a small number of 2-D views of that object. It is therefore possible to synthesise a novel image of an object given at least two existing and dissimilar views of the object, and a set of linear coefficients that determine how these views are to be combined in order to synthesise the new image. The method works in conjunction with a powerful optimization algorithm to search for and recover the optimal linear combination coefficients that will synthesize a novel image which is as similar as possible to the target scene view. If the similarity between the synthesized and the target images is above some threshold, then an object is determined to be present in the scene and its location and pose are defined, in part, by the coefficients. The key benefits of using this technique are that, because it works directly with pixel values, it avoids the need for problematic, low-level feature extraction and solution of the correspondence problem. As a result, a linear combination of views (LCV) model is easy to construct and use, since it only requires a small number of stored 2-D views of the object in question and the selection of a few landmark points on the object, a process easily carried out during the offline, model-building stage. In addition, this method is general enough to be applied across a variety of recognition problems and different types of objects. The development and application of this method is initially explored looking at two-dimensional problems, and then the same principles are extended to 3-D. Additionally, the method is evaluated across synthetic and real-image datasets containing variations in the objects' identity and pose. Possible extensions to incorporate a foreground/background model and lighting variations of the pixels are examined as future work.
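
    A minimal sketch of the core LCV computation, assuming the classic orthographic formulation (Ullman & Basri) in which each coordinate of a target-view landmark is a linear combination of corresponding basis-view coordinates plus a constant. The plain least-squares recovery below is illustrative only; the thesis couples the coefficient search with a more powerful optimizer and a pixel-based similarity measure.

```python
import numpy as np

def lcv_fit(v1, v2, target):
    """Recover LCV coefficients by least squares from corresponding
    landmarks (N x 2 arrays) in two basis views and the target view:
        x3 = a1*x1 + a2*y1 + a3*x2 + a4
        y3 = b1*x1 + b2*y1 + b3*y2 + b4
    """
    ones = np.ones(len(v1))
    Ax = np.column_stack([v1[:, 0], v1[:, 1], v2[:, 0], ones])
    Ay = np.column_stack([v1[:, 0], v1[:, 1], v2[:, 1], ones])
    a, *_ = np.linalg.lstsq(Ax, target[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(Ay, target[:, 1], rcond=None)
    return a, b

def lcv_synthesize(v1, v2, a, b):
    """Predict target-view landmark positions from the coefficients."""
    ones = np.ones(len(v1))
    x = np.column_stack([v1[:, 0], v1[:, 1], v2[:, 0], ones]) @ a
    y = np.column_stack([v1[:, 0], v1[:, 1], v2[:, 1], ones]) @ b
    return np.column_stack([x, y])
```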

    Perception of rotating objects by pigeons (Columba livia)

    Get PDF
    Both perceiving the world as consisting of stable, unified, three-dimensional objects and recognising them despite changes in vantage point, size, and lighting conditions are fundamental abilities for all mobile animals. Whether an animal is able to retrieve 3-D information also from flat displays (e.g., 2-D projections of 3-D objects presented on a computer screen) has been a matter of interest in the last decades of research. For instance, pigeons (Columba livia) may perceive two-dimensional pictures of three-dimensional objects simply as random collections of flat, two-dimensional features instead of experiencing them as generalised 3-D representations. If, however, pigeons are indeed able to form object-like representations of two-dimensional displays, "dynamic presentation" (i.e., presentation of views onto the object in rapid succession) should facilitate recognition across various stimulus modifications, since continuous dynamic change of perspective may help integrate individual views of an object into three-dimensional images. This hypothesis was tested in the current thesis. Pigeons were first trained in a go/no-go procedure to discriminate between 2-D projections of a cube and a pyramid, presented as static images or as rotating around the y-axis. When they had acquired the discrimination, the birds were subjected to a series of transfer tests with new, modified projections. These involved various featural and rotational transformations, such as novel size, altered surface colouration, novel viewpoint, and randomised rotation sequences. The results showed that most types of transformations clearly impaired recognition. In contrast to a study by COOK & KATZ (1999), who used a similar experimental design, I could neither find object constancy across various stimulus transformations nor any indication of a "dynamic superiority effect", i.e., discrimination performance was not improved by dynamic as compared to static presentation, and the order of images within a dynamic sequence was not crucial for object recognition. Furthermore, the ability to recognise an object was found to be strongly viewpoint-dependent and also influenced to some degree by modifications in size and colouration. Together, the results strongly suggest that object discrimination was based on stored 2-D featural information rather than on object-like 3-D representations. They are in line with the view that pigeons' object recognition is controlled by view-based rather than object-based mechanisms.

    Human factors in X-ray image inspection of passenger baggage – Basic and applied perspectives

    Get PDF
    The X-ray image inspection of passenger baggage contributes substantially to aviation security and is best understood as a search and decision task: Trained security officers – so-called screeners – search the images for threats among many harmless everyday objects, but the recognition of objects in X-ray images, and therefore the decision between threats and harmless objects, can be difficult. Because performance in this task depends on often difficult recognition, it is not clear to what extent basic research on visual search can be generalized to X-ray image inspection. Manuscript 1 of this thesis investigated whether X-ray image inspection and a traditional visual search task depend on the same visual-cognitive abilities. The results indicate that traditional visual search tasks and X-ray image inspection depend on different aspects of common visual-cognitive abilities. Another gap between basic research on visual search and applied research on X-ray image inspection is that the former is typically conducted with students and the latter with professional screeners. Therefore, these two populations were compared, revealing that professionals performed better in X-ray image inspection, but not in the visual search task. However, there was no difference between students and professionals regarding the importance of the visual-cognitive abilities for either task. Because there is some freedom in the decision whether a suspicious object should be declared as a threat or as harmless, the results of X-ray image inspection in terms of hit and false alarm rate depend on the screeners' response tendency. Manuscript 2 evaluated whether three commonly used detection measures – d', A', and d_a – are a valid representation of detection performance that is independent of response tendency. The results were consistently in favor of d_a with a slope parameter of around 0.6. Manuscript 3 further showed that screeners can change their response tendency to increase the detection of novel threats, and that screeners with a high ability to recognize everyday objects detected more novel threats when their response tendency was manipulated. The thesis further addressed changes that screeners face due to technological developments. Manuscript 4 showed that screeners can inspect X-ray images for one hour straight without a decrease in performance under conditions of remote cabin baggage screening, in which X-ray image inspection is performed in a quiet room remote from the checkpoint. These screeners did not show lower performance, but reported more distress, compared to screeners who took a 10 min break after every 20 min of screening. Manuscript 5 evaluated explosive detection systems for cabin baggage screening (EDSCB). EDSCB only increased the detection of improvised explosive devices (IEDs) for inexperienced screeners if alarms by the EDSCB were indicated on the image and the screeners had to decide whether a threat was present or not. The detection of mere explosives, which lack the triggering device of IEDs, was only increased if the screeners could not decide against an alarm by the EDSCB. Manuscript 6 used discrete event simulation to evaluate how EDSCB impacts the throughput of passenger baggage screening. Throughput decreased with increasing false alarm rate of the EDSCB. However, fast alarm resolution processes and screeners with a low false alarm rate increased throughput.
    Taken together, the present findings contribute to understanding X-ray image inspection as a task with a search and a decision component. The findings provide insights into basic aspects like the required visual-cognitive abilities and valid measures of detection performance, but also into applied research questions like how long X-ray image inspection can be performed and how automation can assist with the detection of explosives.
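
    For reference, the three detection measures compared in Manuscript 2 can be computed from a hit rate and a false alarm rate as in the sketch below. The formulas are the standard signal detection theory ones (e.g., Macmillan & Creelman); the zROC slope of about 0.6 is the value reported above, and the function is an illustration, not the thesis's analysis code.

```python
import numpy as np
from scipy.stats import norm

def detection_measures(hit_rate, fa_rate, slope=0.6):
    """d', A', and d_a from a hit rate and a false alarm rate, both
    strictly between 0 and 1. `slope` is the zROC slope s."""
    zH, zF = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = zH - zF                     # equal-variance sensitivity
    # d_a generalizes d' to unequal-variance ROCs with zROC slope s.
    d_a = np.sqrt(2.0 / (1.0 + slope**2)) * (zH - slope * zF)
    # A' (Pollack & Norman), for the common case hit_rate >= fa_rate.
    H, F = hit_rate, fa_rate
    a_prime = 0.5 + ((H - F) * (1.0 + H - F)) / (4.0 * H * (1.0 - F))
    return d_prime, a_prime, d_a

print(detection_measures(0.85, 0.10))
```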

    Two new parallel processors for real time classification of 3-D moving objects and quad tree generation

    Get PDF
    Two related image processing problems are addressed in this thesis. First, the problem of identification of 3-D objects in real time is explored. An algorithm to solve this problem and a hardware system for parallel implementation of this algorithm are proposed. The classification scheme is based on the Invariant Numerical Shape Modeling (INSM) algorithm originally developed for 2-D pattern recognition such as alphanumeric characters. This algorithm is then extended to 3-D and is used for general 3-D object identification. The hardware system is an SIMD parallel processor, designed in bit-slice fashion for expandability. It consists of a library of images coded according to the 3-D INSM algorithm and the SIMD classifier, which compares the code of the unknown image to the library codes in a single clock pulse to establish its identity. The output of this system consists of three signals: U, for unique identification; M, for multiple identification; and N, for non-identification of the object. Second, the problem of real-time image compaction is addressed. The quad tree data structure is described. Based on this structure, a parallel processor with a tree architecture is developed which is independent of the data entry process, i.e., data may be entered pixel by pixel or all at once. The hardware consists of a tree processor containing a tree generator and three separate memory arrays, a data transfer processor, and a main memory unit. The tree generator generates the quad tree of the input image in tabular form, using the memory arrays in the tree processor for storage of the table. This table can hold one picture frame at a given time. Hence, for processing multiple picture frames the data transfer processor is used to transfer their respective quad trees from the tree processor memory to the main memory. An algorithm is developed to facilitate the determination of the connections in the circuit.
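
    A sequential sketch of the quad tree generation that the proposed tree processor performs in hardware: a uniform block becomes a leaf, otherwise the block splits into four quadrants that are processed recursively. The nested-list encoding is an assumption for illustration; the thesis stores the tree in tabular form in the tree processor's memory arrays.

```python
import numpy as np

def quadtree(image, y=0, x=0, size=None):
    """Build the quad tree of a square binary image (side a power of
    two). A leaf is 0 or 1 for a uniform block; otherwise the node is
    a list of its four child quadrants in NW, NE, SW, SE order."""
    if size is None:
        size = image.shape[0]
    block = image[y:y + size, x:x + size]
    if block.min() == block.max():        # uniform block -> leaf node
        return int(block[0, 0])
    h = size // 2
    return [quadtree(image, y,     x,     h),   # NW
            quadtree(image, y,     x + h, h),   # NE
            quadtree(image, y + h, x,     h),   # SW
            quadtree(image, y + h, x + h, h)]   # SE

img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 0]])
print(quadtree(img))   # [1, 0, 0, [0, 0, 1, 0]]
```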

    Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images

    Full text link
    The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue into healthy and pathological in early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are locally computed to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissues. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869 and by GVA through project PROMETEO/2019/109.
    Colomer, A.; Igual GarcĂ­a, J.; Naranjo Ornedo, V. (2020). Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors, 20(4), 1-20. https://doi.org/10.3390/s20041005
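
    As an illustration of the texture descriptor named above, the sketch below computes basic 8-neighbour local binary pattern codes and their normalized histogram; the paper's actual system uses multiresolution and rotation-invariant LBP variants (Ojala et al.) computed over local regions and combined with granulometric profiles, so this is only a minimal, assumed baseline.

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code,
    one bit per neighbour that is >= the center; the normalized code
    histogram serves as a texture feature for a region."""
    c = image[1:-1, 1:-1]                       # center pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = image[1 + dy:image.shape[0] - 1 + dy,
                        1 + dx:image.shape[1] - 1 + dx]
        codes |= (shifted >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

    Computing this histogram over a grid of local windows, rather than once over the whole image, is one simple way to realize the "locally computed" descriptors the abstract describes.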