
    Breast Cancer Classification using Deep Learned Features Boosted with Handcrafted Features

    Breast cancer is one of the leading causes of death among women across the globe. It is difficult to treat if detected at advanced stages; however, early detection can significantly increase the chances of survival and improve the lives of millions of women. Given the widespread prevalence of breast cancer, it is of utmost importance for the research community to develop frameworks for early detection, classification, and diagnosis. The artificial intelligence research community, in coordination with medical practitioners, is developing such frameworks to automate the task of detection. With the surge in research activity, coupled with the availability of large datasets and enhanced computational power, it is expected that AI frameworks will help even more clinicians make correct predictions. In this article, a novel framework for the classification of breast cancer using mammograms is proposed. The proposed framework combines robust features extracted from a novel Convolutional Neural Network (CNN) with handcrafted features including HOG (Histogram of Oriented Gradients) and LBP (Local Binary Pattern). The results obtained on the CBIS-DDSM dataset exceed the state of the art.
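
A minimal sketch of the feature-fusion idea described above: deep features (here a hypothetical placeholder for the CNN embedding) are concatenated with HOG and LBP descriptors before a conventional classifier. The random patches, dummy labels, and the linear SVM are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def extract_cnn_features(image):
    """Hypothetical stand-in for deep features from a CNN's penultimate layer."""
    rng = np.random.default_rng(0)
    return rng.standard_normal(512)        # pretend 512-D embedding

def extract_handcrafted(image):
    """HOG descriptor plus a uniform-LBP histogram for a grayscale patch."""
    hog_vec = hog(image, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    img_u8 = (image * 255).astype(np.uint8)
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

def fused_features(image):
    """Deep features 'boosted' with handcrafted ones by simple concatenation."""
    return np.concatenate([extract_cnn_features(image), extract_handcrafted(image)])

# Toy usage: random "mammogram" patches with dummy benign/malignant labels.
images = [np.random.rand(128, 128) for _ in range(8)]
labels = [0, 1] * 4
X = np.stack([fused_features(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:2]))
```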

    Three-dimensional model-based human detection in crowded scenes

    In this paper, the problem of human detection in crowded scenes is formulated as a maximum a posteriori problem in which, given a set of candidates, predefined 3-D human shape models are matched with image evidence, provided by foreground extraction and probability of boundary, to estimate the human configuration. The optimal solution is obtained by decomposing the mutually related candidates into unoccluded and occluded ones in each iteration, according to a graph description of the candidate relations, and then matching models only for the unoccluded candidates. A candidate validation and rejection process based on minimum description length and local occlusion reasoning is carried out after each iteration of model matching. The advantage of the proposed optimization procedure is that its computational cost is much smaller than that of global optimization methods, while its performance is comparable to them. The proposed method achieves a detection rate about 2% higher than the best result reported by previous works on a subset of images of the CAVIAR data set. We also demonstrate the performance of the proposed method on another challenging data set. © 2011 IEEE.
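
The iterative unoccluded-first decomposition can be sketched roughly as follows. The overlap-graph construction, the toy `match_score`, and the fixed acceptance threshold are simplified stand-ins for the paper's 3-D shape model matching and MDL-based validation, and the "larger y2 means closer" heuristic is an assumption of this sketch.

```python
from itertools import combinations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter) if inter else 0.0

def match_score(box, accepted):
    # Hypothetical stand-in for model matching against image evidence:
    # the fraction of the box not already explained by accepted detections.
    return 1.0 - max((iou(box, a) for a in accepted), default=0.0)

def detect(candidates, overlap_thr=0.3, accept_thr=0.5):
    remaining, accepted = list(candidates), []
    while remaining:
        # A candidate is occluded if a closer, still-unresolved candidate overlaps it
        # (a lower bottom edge, i.e. larger y2, is taken to mean closer to the camera).
        occluded = set()
        for a, b in combinations(remaining, 2):
            if iou(a, b) > overlap_thr:
                occluded.add(b if a[3] > b[3] else a)
        unoccluded = [c for c in remaining if c not in occluded]
        if not unoccluded:                        # safeguard against mutual occlusion
            unoccluded = [max(remaining, key=lambda c: c[3])]
        for c in unoccluded:                      # match unoccluded candidates first,
            if match_score(c, accepted) >= accept_thr:  # then validate or reject them
                accepted.append(c)
            remaining.remove(c)
    return accepted

print(detect([(10, 10, 50, 120), (30, 20, 70, 140), (200, 15, 240, 130)]))
```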

    Geological Object Recognition in Extraterrestrial Environments

    On July 4, 1997, the landing of NASA's Pathfinder probe and its rover Sojourner marked the beginning of a new era in space exploration; robots with the ability to move have made up the vanguard of human extraterrestrial exploration ever since. With Sojourner's landing, for the first time, a ground-traversing robot was at a distance too far from Earth to make direct human control practical. This has given rise to the development of autonomous systems to improve the efficiency of these robots, in both their ability to move and their ability to make decisions regarding their environment. Computer vision comprises a large part of these autonomous systems, and in the course of performing these tasks a large number of images are taken for the purpose of navigation. The limited capacity of the current Deep Space Network means that a majority of these images are never seen by human eyes. This work explores the possibility of using these images to target certain features, using a combination of three AdaBoost algorithms and established image feature approaches to help prioritize interesting subjects from an ever-growing set of imaging data.
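
A rough sketch of how boosting over standard image features could be used to prioritize images for review, assuming HOG descriptors and a single scikit-learn AdaBoostClassifier in place of the three AdaBoost variants the thesis combines; the patches and labels below are synthetic placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(42)
patches = rng.random((40, 64, 64))        # toy stand-ins for rover camera patches
labels = rng.integers(0, 2, size=40)      # 1 = "geologically interesting" (dummy)

def describe(patch):
    """HOG descriptor of a single grayscale patch."""
    return hog(patch, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

X = np.array([describe(p) for p in patches])
clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)

# Rank unseen patches so the most promising targets are reviewed first,
# easing the downlink bottleneck described in the abstract.
new = rng.random((5, 64, 64))
scores = clf.decision_function(np.array([describe(p) for p in new]))
print("review order:", np.argsort(scores)[::-1])
```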

    Combining Machine Learning with Computer Vision for Precision Agriculture Applications

    University of Minnesota Ph.D. dissertation. April 2018. Major: Computer Science. Advisor: Nikolaos Papanikolopoulos. 1 computer file (PDF); x, 93 pages.

    Financial and social elements of modern societies are closely connected to the cultivation of corn. Due to its massive production, deficiencies during the cultivation process directly translate to major financial losses. Existing field monitoring solutions utilize aerial and ground means to identify sectors of the farmland with under-performing crops. Nevertheless, an inference element is still absent, namely the automated diagnosis of the cause and severity of the deficiency. The early detection and treatment of crop deficiencies and the frequent evaluation of crop growth status are thus tasks of great significance. Towards an automated health condition assessment, this thesis introduces schemes for the computation of plant health indices.

    First, we propose a methodology to detect nitrogen (N) deficiencies in corn fields and assess their severity at an early stage using low-cost RGB sensors. The introduced methodology is twofold: a low-complexity recommendation scheme identifies candidate plants exhibiting nitrogen deficiency, and a detection elimination step completes the inference loop by deciding which of the candidate plants actually exhibit that condition. Experimental results on a diverse real-world dataset achieve 90.6% accuracy for the detection of N-deficient regions and support the extension of this methodology to other crops and deficiencies with similar visual characteristics.

    Second, based on the 3D reconstruction of small batches of corn plants at growth stages between "V3" and "V6", an automated alternative to existing manual and cumbersome phenotype estimation methodologies is presented. The use of 3D models provides elevated information content compared to planar methods, mainly due to the alleviation of leaf occlusions. High-resolution images of corn stalks are collected and used to obtain 3D models of the plants of interest. From the extracted 3D point clouds, a plethora of phenotypic characteristics is calculated for each reconstruction, such as the number of plants depicted (88.1% accuracy), Leaf Area Index (LAI) (92.48% accuracy), plant height (89.2% accuracy), leaf length (74.8% accuracy), and the location and angles of leaves with respect to the stem. The last two variables are connected by showing that the leaf angles tend to change with respect to the leaf position on the stem as the crops grow. An experimental validation using both artificial corn plants emulating real-world scenarios and real corn plants at different growth stages supports the efficacy of the proposed methodology.

    Although the proposed methodologies are agnostic to the platform performing the data collection, the presented experiments used a MikroKopter Okto XL equipped with a Nikon D7200 RGB sensor and a DJI Matrice 100 with Zenmuse X3 and Zenmuse Z3 high-resolution RGB cameras. The flight altitude ranged between 6 and 15 m, and the image resolution varied between 0.2 and 0.47 cm/pixel. Thorough data collection and interpretation lead to a better understanding of the needs not only of the farm as a whole but of each individual plant, providing much higher granularity to potential treatment strategies. Through the thoughtful utilization of modern computer vision techniques, it is possible to achieve positive financial and environmental results for these tasks. The conclusions of this work suggest a fully automated scheme for information gathering in modern farms capable of replacing current labor-intensive procedures, thus greatly improving the timely detection of crop deficiencies.
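
As an illustration of the point-cloud-based phenotyping step, the sketch below derives two of the traits mentioned above (plant height and a leaf angle with respect to the stem) from a synthetic 3-D point cloud. The percentile-based height, the PCA-based angle, and the assumption of a vertical stem axis are simplifications for illustration, not the thesis' reconstruction pipeline.

```python
import numpy as np

def plant_height(points):
    """Robust height: spread between the 2nd and 98th z-percentiles (metres)."""
    z = points[:, 2]
    return float(np.percentile(z, 98) - np.percentile(z, 2))

def leaf_angle_deg(leaf_points, stem_axis=(0.0, 0.0, 1.0)):
    """Angle between a leaf's principal direction and an assumed-vertical stem."""
    stem_axis = np.asarray(stem_axis, dtype=float)
    centered = leaf_points - leaf_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # vt[0] = dominant direction
    cos = abs(vt[0] @ stem_axis) / np.linalg.norm(stem_axis)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Synthetic cloud: a vertical "stalk" plus one tilted "leaf" (coordinates in metres).
rng = np.random.default_rng(1)
stalk = np.c_[rng.normal(0, 0.01, 500), rng.normal(0, 0.01, 500), rng.uniform(0, 1.5, 500)]
t = rng.uniform(0, 0.4, 200)
leaf = np.c_[t, rng.normal(0, 0.005, 200), 1.0 + 0.5 * t]

print(f"plant height: {plant_height(np.vstack([stalk, leaf])):.2f} m")
print(f"leaf angle vs. stem: {leaf_angle_deg(leaf):.1f} deg")
```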