
    Extraction of Key-Frames from an Unstable Video Feed

    The APOLI project deals with Automated Power Line Inspection using highly automated Unmanned Aerial Systems. Besides the real-time damage assessment by on-board high-resolution image data exploitation, post-processing of the video data is necessary. This Master's thesis deals with the implementation of an Isolator Detector Framework and a workflow in the Automotive Data and Time-triggered Framework (ADTF) that loads a video directly from a camera or from storage and extracts the key frames which contain objects of interest. This is done by implementing an object detection system in C++ and creating ADTF filters that detect the objects of interest and extract the key frames using a supervised learning platform. The use case is the extraction of frames from video samples that contain images of isolators from power transmission lines.
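
    The pipeline described above reduces to a simple loop: decode frames, run a detector, keep the frames where the detector fires. A minimal Python/OpenCV sketch of that idea follows; the thesis's actual implementation consists of C++ ADTF filters, and the detector stub, function names and the stride parameter here are illustrative assumptions.

```python
import cv2

def detect_objects_of_interest(frame):
    # Stand-in for the thesis's supervised isolator detector (realised as an
    # ADTF filter in the original C++ implementation); any per-frame object
    # detector returning bounding boxes fits here.
    return []  # list of bounding boxes for detected isolators

def extract_key_frames(source, stride=5):
    # 'source' may be a camera index or a video file path, mirroring the
    # framework's option to read directly from a camera or from storage.
    cap = cv2.VideoCapture(source)
    key_frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Check every 'stride'-th frame; keep it if objects of interest appear.
        if idx % stride == 0 and detect_objects_of_interest(frame):
            key_frames.append((idx, frame))
        idx += 1
    cap.release()
    return key_frames
```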

    Pixel N-grams for Mammographic Image Classification

    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms are required to be manually assessed by radiologists. Moreover, manual screening is subject to perception and interpretation errors. Computer-aided detection/diagnosis (CAD) systems can help radiologists, as computer algorithms are good at performing image analysis consistently and repetitively. However, image features that enhance CAD classification accuracies are necessary for CAD systems to be deployed. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of challenges inherent in identifying effective features to be initially extracted from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. On the other hand, the structural approach has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual “words”. Moreover, statistical features and features based on BoVW models are computationally very expensive. Similarly, structural feature computation methods other than BoVW are also computationally expensive and depend strongly on algorithms that can segment an image to localize a region of interest likely to contain the tumour. Thus, classification algorithms using structural features require high-resource computers. For a radiologist to classify lesions on low-resource computers such as iPads, tablets and mobile phones in a remote location, it is necessary to develop computationally inexpensive classification algorithms. Therefore, the overarching aim of this research is to discover a feature extraction/image representation model which can be used to classify mammographic lesions with high accuracy, sensitivity and specificity along with low computational cost. For this purpose a novel feature extraction technique called ‘Pixel N-grams’ is proposed. The Pixel N-grams approach is inspired by the character N-gram concept in text categorization. Here, N consecutive pixel intensities are considered in a particular direction. The image is then represented by a histogram of occurrences of the Pixel N-grams in the image. Shape and texture of mammographic lesions play an important role in determining the malignancy of the lesion. It was hypothesized that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments carried out on benchmark texture databases and a binary basic-shapes database demonstrated that the hypothesis was correct. Moreover, Pixel N-grams were able to distinguish between various shapes irrespective of the size and location of the shape in an image. The efficacy of the Pixel N-gram technique was tested on a mammographic database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided de-identified high-resolution mammogram images with annotated regions of interest (which were used as ground truth), along with valuable radiological diagnostic knowledge.
Two types of classification were performed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising. Classification performance (F-score, sensitivity and specificity) using the Pixel N-gram technique was observed to be significantly better than existing techniques such as the intensity histogram and co-occurrence matrix based features, and comparable with BoVW features. Further, Pixel N-gram features were found to be computationally less complex than co-occurrence matrix based features as well as BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification applications such as diabetic retinopathy, histopathological image classification, lung tumour detection using CT images, brain tumour detection using MRI images, wound image classification and tooth decay classification using dentistry X-ray images. Further, texture and shape classification is also useful for classifying real-world images outside the medical domain. Therefore, the Pixel N-gram technique could be extended to applications such as classification of satellite imagery and other object detection tasks.
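
    To make the Pixel N-gram representation concrete, the sketch below computes a horizontal Pixel N-gram histogram for a greyscale image: intensities are quantised, every run of N consecutive pixels is encoded as a single code, and the normalised histogram of codes is the feature vector. The quantisation level, direction and parameter names are illustrative assumptions, not the thesis's exact settings.

```python
import numpy as np

def pixel_ngram_histogram(image, n=3, levels=8):
    # Quantise 8-bit intensities to 'levels' values so the n-gram vocabulary
    # (levels ** n codes) stays small; both values are illustrative choices.
    img = (image.astype(np.float64) / 256.0 * levels).astype(np.int64)
    hist = np.zeros(levels ** n, dtype=np.int64)
    for row in img:                        # horizontal direction only
        for i in range(len(row) - n + 1):
            code = 0
            for v in row[i:i + n]:         # encode n consecutive intensities
                code = code * levels + int(v)
            hist[code] += 1
    return hist / hist.sum()               # normalised histogram feature vector
```

    The resulting fixed-length vector can be fed to any mainstream classifier; computing it needs only one pass over the image, which is the source of the low computational cost claimed above.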

    Automatic coarse co-registration of point clouds from diverse scan geometries: a test of detectors and descriptors

    Point clouds are nowadays collected from a plethora of sensors, some having higher accuracy and higher cost, some having lower accuracy but also lower cost. Not only is there a large choice of different sensors, but these can also be carried by different platforms, which provide different scan geometries. In this work we test four keypoint detectors and three feature descriptors. We benchmark their performance in terms of calculation time and assess their accuracy in the coarse automatic co-registration of two point clouds collected with different sensors, platforms and scan geometries. One cloud, which we define as having the higher accuracy and thus use as reference, was surveyed via a UAV flight with a Riegl MiniVUX-3; the other was collected from a bicycle with a Livox Horizon over a walking path with uneven ground. The novelty of this work consists in comparing several strategies for fast alignment of point clouds from very different surveying geometries, as the drone has a bird's-eye view and the bicycle a ground-based view. An added challenge is the lower cost of the bicycle sensor ensemble, which, together with the rough terrain, reasonably results in lower survey accuracy. The main idea is to use range images to capture a simplified version of the geometry of the surveyed area and then find the best features to match keypoints. Results show that NARF features detected more keypoints and resulted in a faster co-registration procedure in this scenario, whereas the accuracy of the co-registration was similar for all combinations of keypoint detectors and descriptors.
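
    The paper's own pipeline extracts keypoints and descriptors (e.g. NARF) from range images; as a rough point of comparison, the sketch below shows a standard coarse co-registration recipe in Open3D that matches FPFH descriptors on downsampled clouds with RANSAC. The voxel size and RANSAC settings are assumptions tied to scene scale, not values from the paper.

```python
import open3d as o3d

def coarse_register(source, target, voxel=0.5):
    # Downsample, estimate normals, and compute FPFH descriptors; the voxel
    # size is an assumed scene-scale parameter, not taken from the paper.
    def prep(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    s_down, s_fpfh = prep(source)
    t_down, t_fpfh = prep(target)
    # RANSAC over feature correspondences yields the coarse alignment that a
    # fine method (e.g. ICP) would then refine.
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        s_down, t_down, s_fpfh, t_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation
```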

    Video-based Smoke Detection Algorithms: A Chronological Survey

    Over the past decade, several vision-based algorithms proposed in the literature have resulted in the development of a large number of techniques for detecting smoke and fire in video images. Video-based smoke detection approaches are becoming practical alternatives to conventional fire detection methods due to their numerous advantages, such as early fire detection, fast response, non-contact operation, absence of spatial limits, ability to provide live video that conveys fire progress information, and capability to provide forensic evidence for fire investigations. This paper provides a chronological survey of different video-based smoke detection methods available in the literature from 1998 to 2014. Though the paper does not aim at a comparative analysis of the surveyed methods, perceived strengths and weaknesses of the different methods are identified, as this will be useful for future research in video-based smoke or fire detection. Keywords: early fire detection, video-based smoke detection, algorithms, computer vision, image processing
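
    Many of the surveyed methods combine a motion cue with a colour cue before any higher-level analysis. The sketch below illustrates that generic two-cue front end with OpenCV background subtraction and a loose grey-colour gate; the thresholds are illustrative assumptions and do not reproduce any specific surveyed algorithm.

```python
import cv2

def smoke_candidates(video_path):
    # Generic motion + colour front end: moving pixels from background
    # subtraction, intersected with desaturated mid-bright (greyish) pixels.
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        motion = bg.apply(frame)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Smoke tends to be low-saturation and mid-bright; crude colour gate
        # with assumed HSV bounds.
        greyish = cv2.inRange(hsv, (0, 0, 80), (180, 60, 220))
        yield frame, cv2.bitwise_and(motion, greyish)
    cap.release()
```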

    Local Histograms for Per-Pixel Classification

    We introduce a rigorous mathematical theory for the analysis of local histograms, and study how they interact with textures that can be modeled as occlusions of simpler components. We first show how local histograms can be computed as a system of convolutions and discuss some basic local histogram properties. We then introduce a probabilistic, occlusion-based model for textures and formally demonstrate that local histogram transforms are natural tools for analyzing the textures produced by our model. Next, we characterize all nonlinear transforms which satisfy the three key properties of local histograms and consider the appropriateness of local histogram features in the automated classification of textures commonly encountered in histological images. We discuss how local histogram transforms can be used to produce numerical features that, when fed into mainstream classification schemes, mimic the baser aspects of a pathologist's thought process
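
    The observation that local histograms form a system of convolutions translates directly into code: each intensity bin yields an indicator image, and smoothing that indicator with a box window gives the bin's local relative frequency at every pixel. A minimal sketch follows; the bin count and window size are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_histograms(image, bins=8, window=9):
    # Bin edges over the image's intensity range (epsilon keeps the maximum
    # value inside the top bin).
    edges = np.linspace(image.min(), image.max() + 1e-9, bins + 1)
    out = np.empty(image.shape + (bins,))
    for b in range(bins):
        indicator = ((image >= edges[b]) & (image < edges[b + 1])).astype(float)
        # Averaging the indicator over the window equals convolution with a
        # normalised box kernel: the bin's local relative frequency.
        out[..., b] = uniform_filter(indicator, size=window)
    return out  # out[i, j] is the local histogram at pixel (i, j)
```

    The per-pixel vectors produced this way are exactly the numerical features the abstract describes feeding into mainstream classification schemes.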

    Automatic Alignment of 3D Multi-Sensor Point Clouds

    Automatic 3D point cloud alignment is a major research topic in photogrammetry, computer vision and computer graphics. In this research, two keypoint feature matching approaches have been developed and proposed for the automatic alignment of 3D point clouds, which have been acquired from different sensor platforms and are in different 3D conformal coordinate systems. The first proposed approach is based on 3D keypoint feature matching. First, surface curvature information is utilized for scale-invariant 3D keypoint extraction. Adaptive non-maxima suppression (ANMS) is then applied to retain the most distinct and well-distributed set of keypoints. Afterwards, every keypoint is characterized by a scale, rotation and translation invariant 3D surface descriptor, called the radial geodesic distance-slope histogram. Similar keypoint descriptors on the source and target datasets are then matched using bipartite graph matching, followed by a modified RANSAC for outlier removal. The second proposed method is based on 2D keypoint matching performed on height map images of the 3D point clouds. Height map images are generated by projecting the 3D point clouds onto a planimetric plane. Afterwards, a multi-scale wavelet 2D keypoint detector with ANMS is proposed to extract keypoints on the height maps. Then, a scale, rotation and translation invariant 2D descriptor referred to as the Gabor, Log-Polar-Rapid Transform descriptor is computed for all keypoints. Finally, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour matching, together with the modified RANSAC for outlier removal. Each method is assessed on multi-sensor, urban and non-urban 3D point cloud datasets. Results show that, unlike the 3D-based method, the height map-based approach is able to align source and target datasets with differences in point density, point distribution and missing point data. Findings also show that the 3D-based method obtained lower transformation errors and a greater number of correspondences when the source and target have similar point characteristics. The 3D-based approach attained absolute mean alignment differences in the range of 0.23 m to 2.81 m, whereas the height map approach had a range from 0.17 m to 1.21 m. These differences meet the proximity requirements of the data characteristics and allow the further application of fine co-registration approaches.
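
    The height map branch of the second method can be illustrated compactly: rasterise each cloud onto a planimetric grid of maximum heights, then detect, describe and match 2D keypoints with RANSAC outlier removal. In the sketch below, off-the-shelf ORB features stand in for the thesis's multi-scale wavelet detector and Gabor, Log-Polar-Rapid Transform descriptor, and the cell size is an assumption.

```python
import numpy as np
import cv2

def height_map(points, cell=0.25):
    # Project an (N, 3) cloud onto the XY plane; each raster cell stores the
    # maximum height falling in it. Cell size is an assumed parameter.
    xy = points[:, :2]
    ij = ((xy - xy.min(axis=0)) / cell).astype(int)
    img = np.zeros((ij[:, 1].max() + 1, ij[:, 0].max() + 1), dtype=np.float32)
    np.maximum.at(img, (ij[:, 1], ij[:, 0]), points[:, 2])
    img -= img.min()
    return (255 * img / max(img.max(), 1e-9)).astype(np.uint8)

def match_height_maps(src_pts, tgt_pts):
    # ORB stands in for the thesis's bespoke 2D descriptor; RANSAC on the
    # matches plays the role of the modified RANSAC outlier removal.
    orb = cv2.ORB_create(2000)
    a, b = height_map(src_pts), height_map(tgt_pts)
    ka, da = orb.detectAndCompute(a, None)
    kb, db = orb.detectAndCompute(b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches])
    tgt = np.float32([kb[m.trainIdx].pt for m in matches])
    M, inliers = cv2.estimateAffinePartial2D(src, tgt, method=cv2.RANSAC)
    return M  # 2D similarity transform aligning the two height maps
```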