328 research outputs found

    Network-based detection of malicious activities - a corporate network perspective

    Dense, sonar-based reconstruction of underwater scenes

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mechanical Engineering at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 2019.
    Three-dimensional maps of underwater scenes are critical to, or the desired end product of, many applications spanning a spectrum of spatial scales. Examples range from inspection of subsea infrastructure to hydrographic surveys of coastlines. Depending on the end use, maps will have different accuracy requirements. The accuracy of a mapping platform depends mainly on the individual accuracies of (i) its pose estimate in some global frame, (ii) the estimates of offsets between mapping sensors and platform, and (iii) the mapping sensor measurements themselves. Typically, surface-based surveying platforms employ highly accurate positioning sensors, e.g. a differential global navigation satellite system (GNSS) receiver combined with an accurate attitude and heading reference system, to instrument the pose of a mapping sensor such as a multibeam sonar. For underwater platforms, the rapid attenuation of electromagnetic signals in water precludes the use of GNSS receivers at any meaningful depth. Acoustic positioning systems, the underwater analogues of GNSS, are limited to small survey areas that are free of obstacles which may cause undesirable acoustic effects such as multi-path propagation and reverberation. Save for a few exceptions, the accuracy and update rate of these systems are significantly lower than those of differential GNSS. This performance reduction shifts the accuracy burden to inertial navigation systems (INS), often aided by Doppler velocity logs. Still, the pose estimates of an aided INS incur unbounded drift over time, often necessitating techniques such as simultaneous localization and mapping (SLAM) that leverage local features to bound the uncertainty in the position estimate. The contributions presented in this dissertation aim at improving the accuracy of maps of underwater scenes produced from multibeam sonar data. First, we propose robust methods to process and segment sonar data to obtain accurate range measurements in the presence of noise, sensor artifacts, and outliers. Second, we propose a volumetric, submap-based SLAM technique that can successfully leverage map information to correct for drift in the mapping platform's pose estimate. Third, and informed by the previous two contributions, we propose a dense approach to the sonar-based reconstruction problem, in which the pose estimation, sonar segmentation, and model optimization problems are tackled simultaneously under the unified framework of factor graphs. This stands in contrast with the traditional approach, where sensor processing and segmentation, pose estimation, and model reconstruction are solved independently. Finally, we provide experimental results obtained over several deployments of a commercial inspection platform that validate the proposed techniques.
    This work was generously supported by the Office of Naval Research, the MIT-Portugal Program, and the Schlumberger Technology Corporation.
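
    As a rough illustration of why the SLAM machinery mentioned above matters, the sketch below (plain Python/NumPy, not the thesis's volumetric, submap-based pipeline) builds a toy 1D factor graph with a prior, a few drifting odometry factors, and a single loop-closure factor, and solves it as a weighted linear least-squares problem. All quantities and names are invented for the example.

        import numpy as np

        # Toy 1D pose graph: five poses linked by noisy odometry, plus one
        # loop-closure factor tying the last pose back to the first.
        # Solving the weighted least-squares problem redistributes the
        # accumulated odometry drift over the whole trajectory.
        n = 5
        odometry = np.array([1.05, 0.97, 1.10, 1.02])   # drifting step estimates
        loop_closure = 4.00                             # x4 - x0 measured directly
        sigma_odom, sigma_loop, sigma_prior = 0.1, 0.01, 1e-3

        rows, meas, weights = [], [], []

        # Prior factor anchoring x0 at the origin.
        r = np.zeros(n); r[0] = 1.0
        rows.append(r); meas.append(0.0); weights.append(1.0 / sigma_prior)

        # Odometry factors: x_{i+1} - x_i = odometry[i]
        for i, d in enumerate(odometry):
            r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
            rows.append(r); meas.append(d); weights.append(1.0 / sigma_odom)

        # Loop-closure factor: x4 - x0 = loop_closure
        r = np.zeros(n); r[4], r[0] = 1.0, -1.0
        rows.append(r); meas.append(loop_closure); weights.append(1.0 / sigma_loop)

        A = np.array(rows) * np.array(weights)[:, None]
        b = np.array(meas) * np.array(weights)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("dead-reckoned:", np.concatenate(([0.0], np.cumsum(odometry))))
        print("optimized:    ", x.round(3))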

    A Scale Independent Selection Process for 3D Object Recognition in Cluttered Scenes

    In recent years, a wide range of algorithms and devices has become available for easily acquiring range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging and even gaming. For this reason, these problems have been addressed extensively in the literature. Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: we adopt an evolutionary selection algorithm that seeks global agreement among surface points while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario where model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments.
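
    The sketch below (Python/NumPy) illustrates the general idea of evolutionary correspondence selection: replicator dynamics over a pairwise-compatibility payoff matrix concentrate weight on a mutually consistent subset of matches. For brevity the compatibility used here is a plain fixed-scale distance-preservation check rather than the paper's scale-independent payoff, and all names and parameter values are illustrative.

        import numpy as np

        def select_consistent_matches(model_pts, scene_pts, pairs, sigma=0.05, iters=200):
            """Evolutionary (replicator-dynamics) selection of correspondences.

            `pairs` holds candidate (model_index, scene_index) matches, e.g. from
            local descriptors.  The payoff between two matches is high when they
            preserve the pairwise point distance, so the dynamics concentrate
            weight on a mutually consistent subset.  This fixed-scale payoff is a
            simplification; the paper's scale-independent payoff is more involved.
            """
            m = len(pairs)
            payoff = np.zeros((m, m))
            for a in range(m):
                for b in range(a + 1, m):
                    dm = np.linalg.norm(model_pts[pairs[a][0]] - model_pts[pairs[b][0]])
                    ds = np.linalg.norm(scene_pts[pairs[a][1]] - scene_pts[pairs[b][1]])
                    payoff[a, b] = payoff[b, a] = np.exp(-(dm - ds) ** 2 / sigma ** 2)

            x = np.full(m, 1.0 / m)              # start from the uniform mixed strategy
            for _ in range(iters):
                fitness = payoff @ x
                x = x * fitness / max(x @ fitness, 1e-12)
            return [pairs[i] for i in np.flatnonzero(x > 1.0 / m)]   # above-average weight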

    A strategy for the visual recognition of objects in an industrial environment.

    This thesis is concerned with the problem of recognizing industrial objects rapidly and flexibly. The system design is based on a general strategy that consists of a generalized local feature detector, an extended learning algorithm and the use of unique structure of the objects. Thus, the system is not designed to be limited to the industrial environment. The generalized local feature detector uses the gradient image of the scene to provide a feature description that is insensitive to a range of imaging conditions such as object position and overall light intensity. The feature detector is based on a representative point algorithm which is able to reduce the data content of the image without restricting the allowed object geometry. Thus, a major advantage of the local feature detector is its ability to describe and represent complex object structure. The reliance on local features also allows the system to recognize partially visible objects. The task of the learning algorithm is to observe the feature description generated by the feature detector in order to select features that are reliable over the range of imaging conditions of interest. Once a set of reliable features is found for each object, the system finds unique relational structure which is later used to recognize the objects. Unique structure is a set of descriptions of unique subparts of the objects of interest. The present implementation is limited to the use of unique local structure. The recognition routine uses these unique descriptions to recognize objects in new images. An important feature of this strategy is the transfer of a large amount of the processing required for graph matching from the recognition stage to the learning stage, which allows the recognition routine to execute rapidly. The test results show that the system is able to function with a significant level of insensitivity to operating conditions. The system shows insensitivity to its three main assumptions (constant scale, constant lighting, and 2D images), displaying a degree of graceful degradation when the operating conditions degrade. For example, for one set of test objects, the recognition threshold was reached when the absolute light level was reduced by 70%-80%, or the object scale was reduced by 30%-40%, or the object was tilted away from the learned 2D plane by 30°-40°. This demonstrates a very important feature of the learning strategy: it shows that the generalizations made by the system are not only valid within the domain of the sampled set of images, but extend outside this domain. The test results also show that the recognition routine is able to execute rapidly, requiring 10 ms-500 ms (on a PDP11/24 minicomputer) in the special case when ideal operating conditions are guaranteed. (Note: this does not include pre-processing time.) This thesis describes the strategy, the architecture and the implementation of the vision system in detail, and gives detailed test results. A proposal for extending the system to scale-independent 3D object recognition is also given.
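
    As a hedged illustration of a gradient-based representative-point detector (a generic stand-in using SciPy, not the thesis's specific algorithm), the sketch below keeps sparse local maxima of the gradient magnitude together with their orientations, thinning the edge map to a compact point description.

        import numpy as np
        from scipy import ndimage

        def representative_points(image, threshold=0.2, neighborhood=5):
            """Sparse high-gradient points summarizing object structure.

            A generic stand-in for a gradient-based representative-point detector:
            keep pixels whose gradient magnitude exceeds `threshold` times the
            maximum and is a local maximum in a neighborhood-sized window, which
            thins the edge map down to a manageable set of points with orientations.
            """
            img = image.astype(float)
            gx = ndimage.sobel(img, axis=1)
            gy = ndimage.sobel(img, axis=0)
            mag = np.hypot(gx, gy)
            local_max = ndimage.maximum_filter(mag, size=neighborhood)
            mask = (mag == local_max) & (mag > threshold * mag.max())
            ys, xs = np.nonzero(mask)
            return np.column_stack([xs, ys]), np.arctan2(gy, gx)[ys, xs]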

    Statistical Methods for Image Registration and Denoising

    This dissertation describes research into image processing techniques that enhance military operational and support activities. The research extends existing work on image registration by introducing a novel method that exploits local correlations to improve the performance of projection-based image registration algorithms. The dissertation also extends the bounds on image registration performance for both projection-based and full-frame image registration algorithms, and extends the Barankin bound from the one-dimensional case to the problem of two-dimensional image registration. It is demonstrated that in some instances the Cramer-Rao lower bound is an overly optimistic predictor of image registration performance and that, under some conditions, the Barankin bound is a better predictor of shift estimator performance. The research also addresses the related problem of single-frame image denoising using block-based methods, introducing three algorithms that operate by identifying regions of interest within a noise-corrupted image and then generating noise-free estimates of those regions as averages of similar regions in the image.
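
    A minimal NumPy sketch of the projection-based idea referred to above: each image is collapsed onto its rows and columns and the two 1D profiles are cross-correlated to estimate an integer shift. This is a generic illustration, not the dissertation's algorithm or its local-correlation extension; the image and shift values are invented.

        import numpy as np

        def projection_shift(ref, moved):
            """Estimate an integer (dy, dx) translation between two images.

            Projection-based registration collapses each image onto its rows and
            columns and cross-correlates the resulting 1D profiles, which is far
            cheaper than full-frame 2D correlation; the dissertation's bounds
            quantify what that economy costs in estimator accuracy.
            """
            def shift_1d(a, b):
                # Returns d such that b[i] is approximately a[i - d].
                a = a - a.mean()
                b = b - b.mean()
                corr = np.correlate(b, a, mode="full")
                return int(np.argmax(corr) - (len(a) - 1))

            dy = shift_1d(ref.sum(axis=1), moved.sum(axis=1))
            dx = shift_1d(ref.sum(axis=0), moved.sum(axis=0))
            return dy, dx

        # Synthetic check: a bright block shifted by (3, -2) pixels.
        ref = np.zeros((64, 64))
        ref[20:30, 15:25] = 1.0
        moved = np.roll(ref, (3, -2), axis=(0, 1))
        print(projection_shift(ref, moved))   # (3, -2)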

    Pose Invariant 3D Face Authentication based on Gaussian Fields Approach

    This thesis presents a novel illuminant-invariant approach to recognizing the identity of an individual from his 3D facial scan in any pose by matching it with a set of frontal models stored in the gallery. In view of today's security concerns, 3D face reconstruction and recognition has gained a significant position in computer vision research. The non-intrusive nature of facial data acquisition makes face recognition one of the most popular approaches for biometrics-based identity recognition. Depth information of a 3D face can be used to solve the problems of illumination and pose variation associated with face recognition. The proposed method makes use of 3D geometric (point set) face representations for recognizing faces. The use of 3D point sets to represent human faces in lieu of 2D texture makes this method robust to changes in illumination and pose. The method first automatically registers facial point sets of the probe with the gallery models through a criterion based on Gaussian force fields. The registration method defines a simple energy function which is always differentiable and convex in a large neighborhood of the alignment parameters, allowing for the use of powerful standard optimization techniques. The new method overcomes the necessity of close initialization and converges in far fewer iterations than the Iterative Closest Point algorithm. The use of an accelerated summation method, the Fast Gauss Transform, allows a considerable reduction in the computational complexity of the registration algorithm. Recognition is then performed using the robust similarity score generated by registering 3D point sets of faces. Our approach has been tested on a large database of 85 individuals with 521 scans at different poses, where the gallery and the probe images were acquired at significantly different times. The results show the potential of our approach toward a fully pose- and illumination-invariant system. Our method can be used as a potential biometric system in various applications such as mug shot matching, user verification and access control, and enhanced human-computer interaction.
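
    The Gaussian-fields criterion itself is simple to state. The sketch below (Python with NumPy/SciPy, 2D rigid case for brevity, direct double sum instead of the Fast Gauss Transform) evaluates a sum-of-Gaussians energy over all point pairs and hands its negative to a standard optimizer; the point sets, sigma, and transform values are invented for the example and the thesis's 3D formulation is more general.

        import numpy as np
        from scipy.optimize import minimize

        def gaussian_field_energy(params, probe, gallery, sigma=2.0):
            """Sum-of-Gaussians alignment criterion between two 2D point sets.

            params = (theta, tx, ty) is a rigid transform applied to the probe.
            The energy adds a Gaussian of every probe-gallery point distance, so it
            is smooth everywhere and peaks when the sets are aligned; the direct
            double sum below costs O(N*M), which is what the Fast Gauss Transform
            avoids in the thesis.
            """
            theta, tx, ty = params
            c, s = np.cos(theta), np.sin(theta)
            moved = probe @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])
            d2 = ((moved[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-d2 / sigma ** 2).sum()

        # Toy example: the gallery is a rotated and translated copy of the probe.
        rng = np.random.default_rng(1)
        probe = rng.random((150, 2)) * 10
        theta_true, t_true = 0.1, np.array([0.5, -0.3])
        R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                      [np.sin(theta_true),  np.cos(theta_true)]])
        gallery = probe @ R.T + t_true

        result = minimize(lambda p: -gaussian_field_energy(p, probe, gallery),
                          x0=np.zeros(3), method="BFGS")
        print(result.x)   # should land close to (0.1, 0.5, -0.3)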

    Multi-scale metrology for automated non-destructive testing systems

    This thesis was previously held under moratorium from 5/05/2020 to 5/05/2022.
    The use of lightweight composite structures in the aerospace industry is now commonplace. Unlike conventional materials, these parts can be moulded into complex aerodynamic shapes, which are difficult to inspect rapidly using conventional Non-Destructive Testing (NDT) techniques. Industrial robots provide a means of automating the inspection process due to their high dexterity and improved path planning methods. This thesis concerns using industrial robots as a method for assessing the quality of components with complex geometries. The focus of the investigations in this thesis is on improving the overall system performance through the use of concepts from the field of metrology, specifically calibration and traceability. The use of computer vision is investigated as a way to increase automation levels by identifying a component's type and approximate position through comparison with CAD models. The challenges identified through this research include developing novel calibration techniques for optimising sensor integration, verifying system performance using laser trackers, and improving automation levels through optical sensing. The developed calibration techniques are evaluated experimentally using standard reference samples. A 70% increase in absolute accuracy was achieved in comparison to manual calibration techniques. Inspections were improved, as verified by a 30% improvement in ultrasonic signal response. A new approach to automatically identify and estimate the pose of a component was developed specifically for automated NDT applications. The method uses 2D and 3D camera measurements along with CAD models to extract and match shape information. It was found that optical large-volume measurements could provide sufficiently high-accuracy measurements to allow ultrasonic alignment methods to work, establishing a multi-scale metrology approach to increasing automation levels. A classification framework based on shape outlines extracted from images was shown to provide over 88% accuracy on a limited number of samples.
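
    As a hedged sketch of one common outline-classification approach (Fourier descriptors with a nearest-neighbour rule; the thesis's actual framework is not detailed here), the Python code below turns a closed boundary into a translation-, scale- and rotation-invariant signature and matches it against reference signatures, e.g. ones derived from outlines rendered from CAD models. Function names and parameters are illustrative.

        import numpy as np

        def fourier_descriptor(contour, k=16):
            """Translation-, scale- and rotation-invariant outline signature.

            `contour` is an (N, 2) array of ordered boundary points with N well
            above k.  The points are treated as complex numbers and Fourier
            transformed; dropping the DC term removes position, normalizing by the
            first harmonic removes scale, and keeping only magnitudes removes
            rotation and the choice of starting point.
            """
            z = contour[:, 0] + 1j * contour[:, 1]
            mags = np.abs(np.fft.fft(z))
            mags[0] = 0.0                               # discard position
            if mags[1] > 1e-12:
                mags = mags / mags[1]                   # normalize out scale
            return mags[1:k + 1]                        # low-order harmonics only

        def classify(contour, references):
            """Label of the reference outline whose signature is closest.

            `references` maps a label to a signature precomputed with the same k,
            e.g. from outlines rendered out of CAD models.
            """
            sig = fourier_descriptor(contour)
            return min(references, key=lambda label: np.linalg.norm(sig - references[label]))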

    Statistical Assessment of the Significance of Fracture Fits in Trace Evidence

    Get PDF
    Fracture fits are often regarded as the highest degree of association of trace materials due to the common belief that inherently random fracturing events produce individualizing patterns. Often referred to as physical matches, fracture matches, or physical fits, these assessments consist of the realignment of two or more items with distinctive features and edge morphologies to demonstrate that they were once part of the same object. Separated materials may provide a valuable link between items, individuals, or locations in forensic casework in a variety of criminal situations. Physical fit examinations rely on the examiner's judgment, which can rarely be supported by a quantifiable uncertainty or widely reported error rates. Therefore, there is a need to develop, validate, and standardize fracture fit examination methodology and the respective interpretation protocols. This research aimed to develop systematic methods of examination and quantitative measures to assess the significance of trace evidence physical fits. This was facilitated through four main objectives: 1) an in-depth review manuscript covering 112 case reports, fractography studies, and quantitative-based studies to provide an organized summary establishing the current physical fit research base, 2) a pilot inter-laboratory study of a systematic, score-based technique previously developed by our research group for the evaluation of duct tape physical fit pairs, referred to as the Edge Similarity Score (ESS), 3) the initial expansion of the ESS methodology to textile materials, and 4) an expanded optimization and evaluation study of X-ray Fluorescence (XRF) Spectroscopy for electrical tape backing analysis, for use with an amorphous material for which physical fits may not be feasible due to a lack of distinctive features. Objective 1 was completed through a large-scale literature review and manuscript compilation of 112 fracture fit reports and research studies. The literature was evaluated in three overall categories: case reports, fractography or qualitative-based studies, and quantitative-based studies. In addition, 12 standard operating protocols (SOPs) provided by various state and federal-level forensic laboratories were reviewed to provide an assessment of current physical fit practice. A review manuscript was submitted to Forensic Science International and has been accepted for publication. This manuscript provides, for the first time, a literature review of physical fits of trace materials and served as the basis for this project. The pilot inter-laboratory study (Objective 2) consisted of three study kits, each consisting of 7 duct tape comparison pairs with a ground truth of 4 matching pairs (3 in the expected M+ qualifier range, 1 in the more difficult M- range) and 3 non-matching pairs (NM). The kits were distributed as a round-robin study, resulting in 16 participants overall and 112 physical fit comparisons. Prior to kit distribution, a consensus on each sample's ESS was reached between 4 examiners with an agreement criterion of better than ±10% ESS. Along with the physical comparison pairs, the study included a brief post-study survey allowing the distributors to receive feedback on the participants' opinions of the method's ease of use and practicality. No misclassifications were observed across all study kits. The majority (86.6%) of reported ESS scores were within ±20 ESS of consensus values determined before the administration of the test.
    Accuracy ranged from 88% to 100%, depending on the criteria used for evaluation of the error rates. In addition, on average, 77% of the reported ESS showed no significant differences from the respective pre-distribution consensus mean scores when subjected to ANOVA-Dunnett's analysis using the level of difficulty as a blocking variable. These differences were more often observed on sets of higher difficulty (M-, 5 out of 16 participants, or 31%) than on lower-difficulty sets (M+ or M-, 3 out of 16 participants, or 19%). Three main observations were derived from the participant results: 1) overall good agreement between the ESS reported by examiners was observed, 2) the ESS represented a good indicator of the quality of the match and yielded low error rates on conclusions, and 3) examiners who did not participate in formal method training tended to report ESS falling outside the expected pre-distribution ranges. This inter-laboratory study serves as an important precedent, as it represents the largest inter-laboratory study reported to date using a quantitative assessment of physical fits of duct tapes. In addition, the study provides valuable insights to move forward with the standardization of protocols of examination and interpretation. Objective 3 consisted of a preliminary study on the assessment of 274 total comparisons of stabbed (N=100) and hand-torn (N=174) textile pairs, as completed by two examiners. The first 74 comparisons resulted in a high incidence of false exclusions (63%) on textiles prone to distortion, revealing the need to assess suitability prior to the physical fit examination of fabrics. For the remaining dataset, five clothing items of various textile compositions and constructions were subjected to fracture. The overall set consisted of 100 comparison pairs, 20 per textile item, 10 each for the stabbed and hand-torn separation methods, each examined by two analysts. Examiners determined the ESS through the analysis of 10 bins of equal divisions of the total fracture edge length. A weighted ESS was also determined with the addition of three optional weighting factors per bin due to the continuation of a pattern, separation characteristics (i.e. damage or protrusions/gaps), or partial pattern fluorescence across the fractured edges. With the addition of a weighted ESS, a rarity ratio was determined as the ratio between the weighted ESS and the non-weighted ESS. In addition, the frequency of occurrence of all noted distinctive characteristics leading to the addition of a weighting factor by the examiner was determined. Overall, 93% accuracy was observed for the hand-torn set, while 95% accuracy was observed for the stabbed set. Higher misclassification in the hand-torn set was observed in textile items of either 100% polyester composition or jersey knit construction, as higher elasticity led to greater fracture edge distortion. In addition, higher misclassification was observed in the stabbed set for those textiles with no pattern, as the stabbed edges led to straight, featureless bins that are often associated only through pattern continuation. The results of this study are anticipated to provide valuable knowledge for the future development of protocols for the evaluation of relevant features of textile fractures and assessments of suitability for fracture fit comparisons.
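
    For concreteness, the toy Python function below mimics the ESS bookkeeping described above, assuming the ESS is the percentage of edge bins judged to correspond and that the weighted ESS simply adds the optional per-bin weighting factors; the exact scoring rules of the published method may differ, and all values are invented.

        import numpy as np

        def edge_similarity_scores(bin_matches, bin_weights=None):
            """Toy ESS bookkeeping for a single comparison pair.

            Assumes the ESS is the percentage of edge bins judged to correspond
            (a 0-100 scale, consistent with the +/-10% criterion above); the
            weighted ESS adds the optional per-bin weighting factors noted by the
            examiner, and the rarity ratio is weighted ESS / unweighted ESS.  The
            published scoring rules may differ in detail.

            bin_matches : 0/1 flags, one per bin (e.g. 10 bins per edge).
            bin_weights : optional count of weighting factors (0-3) per bin.
            """
            matches = np.asarray(bin_matches, dtype=float)
            ess = 100.0 * matches.sum() / matches.size
            if bin_weights is None:
                return float(ess), float(ess), 1.0
            weights = np.asarray(bin_weights, dtype=float)
            weighted_ess = 100.0 * (matches + matches * weights).sum() / matches.size
            rarity_ratio = weighted_ess / ess if ess > 0 else float("nan")
            return float(ess), float(weighted_ess), float(rarity_ratio)

        # Example: 10 bins, 8 judged to correspond, two of those with one
        # distinctive characteristic each (e.g. pattern continuation, protrusion).
        print(edge_similarity_scores([1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
                                     [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]))   # (80.0, 100.0, 1.25)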
    Finally, the XRF methodology optimization and evaluation study (Objective 4) expanded upon our group's previous discrimination studies by broadening the total sample set of characterized tapes and evaluating the use of spectral overlay, the spectral contrast angle, and Quadratic Discriminant Analysis (QDA) for the comparison of XRF spectra. The expanded sample set consisted of 114 samples, 94 from different sources and 20 from the same roll. The twenty sections from the same roll were used to assess intra-roll variability, and for each sample, replicate measurements at different locations on the tape were analyzed (n=3) to assess intra-sample variability. Inter-source variability was evaluated through 94 rolls of tape spanning a variety of labeled brands, manufacturers, and product names. Parameter optimization included a comparison of atmospheric conditions, collection times, and instrumental filters. A study of the effects of adhesive and backing thickness on spectrum collection revealed key implications for the method that required modification of the sample support material. Figures of merit assessed included accuracy and discrimination over time, precision, sensitivity, and selectivity. One of the most important contributions of this study is the proposal of alternative objective methods of spectral comparison, and the performance of the different methods for comparing and contrasting spectra was evaluated. The optimization of this method was part of an assessment to incorporate XRF into a forensic laboratory protocol for rapid, highly informative elemental analysis of electrical tape backings and to expand examiners' casework capabilities in circumstances where a physical fit conclusion is limited by the amorphous nature of electrical tape backings. Overall, this work strengthens the fracture fit research base by further developing quantitative methodologies for duct tape and textile materials and by initiating widespread distribution of the technique through an inter-laboratory study, as a first step towards laboratory implementation. Additional projects established the current state of forensic physical fit practice to provide the foundation from which future quantitative work such as the studies presented here must grow, and provided highly sensitive analytical techniques for materials that present limited fracture fit capabilities.
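
    The spectral contrast angle mentioned above has a standard definition (the angle between two spectra treated as vectors); a minimal NumPy version is sketched below. Any decision threshold for declaring two backings indistinguishable would have to come from the variability data described in the study, not from this sketch.

        import numpy as np

        def spectral_contrast_angle(spectrum_a, spectrum_b):
            """Angle in degrees between two spectra treated as vectors.

            cos(theta) = a.b / (|a||b|): identical elemental profiles give 0 degrees
            and increasingly different profiles give larger angles.  Thresholds for
            interpretation must come from measured intra-roll and inter-source
            variability, as in the study described above.
            """
            a = np.asarray(spectrum_a, dtype=float)
            b = np.asarray(spectrum_b, dtype=float)
            cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))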