496 research outputs found

    Studies on neutron diffraction and X-ray radiography for material inspection

    Among the various probes used to study the structure of biological and structural materials, X-rays and neutrons are widely used because of their distinctive usefulness in investigating different structures. X-ray radiography and neutron diffraction are two well-known non-destructive techniques for material inspection. Here we demonstrate the design of a neutron diffractometer with a low-power source and analyze digital images produced by X-ray radiography rather than neutron diffraction data, owing to data availability. Neutron diffraction is a powerful tool for understanding the crystal structure and phase behavior of materials. While neutron diffraction capabilities continue to open new frontiers in materials science, such capabilities currently exist in only a limited number of places, because they require high neutron flux. This study seeks to design a low-resolution neutron diffraction system that can be installed on low-power reactors (e.g., 250 kW thermal power). The performance of the diffractometer is estimated using Monte-Carlo ray-tracing simulations with McStas, with an application in materials science. Both monochromatic and polychromatic configurations are considered in order to maximize the net diffracted neutron flux at the detectors while maintaining reasonable resolution. Turning to X-ray radiography as a structure-inspection technique, an analysis of dental X-ray panoramas is performed for the detection of oral lesions. A novel automatic computer-aided method to identify dental lesions from dental X-rays is presented. Morphological operations, intensity-profile analysis, automated seed-point selection, region growing, feature extraction, and a neural network are combined to perform the task. Results show that the performance of the proposed method surpasses existing automated methods that use dental X-rays --Abstract, page iii
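    The abstract does not spell out the implementation, but the automated seed-point selection and region-growing steps could, in a minimal form, look like the sketch below (NumPy only; the 4-connectivity, intensity tolerance, and darkest-pixel seed heuristic are illustrative assumptions, not the authors' parameters).

```python
import numpy as np
from collections import deque

def grow_region(image, seed, tol=12.0):
    """Simple intensity-based region growing from a seed pixel.

    image : 2D float array (grayscale radiograph)
    seed  : (row, col) starting point, e.g. a local intensity minimum
    tol   : maximum allowed deviation from the running region mean (assumed)
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_count = float(image[seed]), 1

    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                mean = region_sum / region_count
                if abs(float(image[rr, cc]) - mean) <= tol:
                    mask[rr, cc] = True
                    region_sum += float(image[rr, cc])
                    region_count += 1
                    queue.append((rr, cc))
    return mask

# Hypothetical usage: lesions often appear as dark (radiolucent) regions,
# so a crude automated seed could be the darkest pixel of a smoothed crop.
if __name__ == "__main__":
    img = np.random.rand(64, 64) * 255.0          # stand-in for a panorama crop
    seed = np.unravel_index(np.argmin(img), img.shape)
    lesion_mask = grow_region(img, seed, tol=20.0)
    print("region size:", lesion_mask.sum())
```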

    ClearPhoto - augmented photography

    The widespread use of mobile devices has made known to the general public new areas that were hitherto confined to specialized devices. In general, the smartphone gave all users the ability to execute multiple tasks and, among them, to take photographs using the integrated cameras. Although these devices continuously receive improved cameras, their manufacturers do not take advantage of their full potential, since the operating systems normally offer only simple APIs and applications for shooting. Therefore, taking advantage of this mobile-device environment, we find ourselves in the best scenario to develop applications that help the user obtain a good result when shooting. In an attempt to provide a set of techniques and tools better suited to the task, this dissertation presents, as a contribution, a set of tools for mobile devices that provides real-time information on the composition of the scene before an image is captured. Thus, the proposed solution supports the user while capturing a scene with a mobile device. The user receives multiple suggestions on the composition of the scene, based on rules of photography or on other tools useful to photographers. The tools include horizon detection and graphical visualization of the color palette present in the scene being photographed. These tools were evaluated with respect to their implementation on mobile devices and to how users assess their usefulness.
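    The dissertation's implementation is not reproduced in the abstract; the following is a minimal sketch, assuming OpenCV, of what the two mentioned tools might reduce to: horizon detection via Canny edges plus a Hough-line search, and a colour palette via k-means over downsampled pixels. The Canny thresholds, the near-horizontal tolerance, and the palette size k are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_horizon(bgr_frame):
    """Estimate the horizon as a strong near-horizontal Hough line.

    Returns (rho, theta) in the cv2.HoughLines convention, or None.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # thresholds are assumptions
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        if abs(theta - np.pi / 2) < np.deg2rad(10):   # within ~10 deg of horizontal
            return float(rho), float(theta)
    return None

def dominant_colors(bgr_frame, k=5):
    """Rough colour palette via k-means on downsampled pixels (k is arbitrary)."""
    pixels = cv2.resize(bgr_frame, (64, 64)).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                               cv2.KMEANS_RANDOM_CENTERS)
    return centers.astype(np.uint8)                   # k BGR palette colours
```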

    Coupling ground-level panoramas and aerial imagery for change detection

    Geographic landscapes all over the world may be subject to rapid changes induced, for instance, by urban, forest, and agricultural evolution. Monitoring such changes is usually achieved through remote sensing. However, obtaining regular and up-to-date aerial or satellite images is a costly process, which prevents regular updating of land cover maps. Alternatively, in this paper, we propose a low-cost solution based on the use of ground-level, geo-located panoramic landscape photos, which provide high-spatial-resolution information about the scene. Such photos can be acquired from various sources: digital cameras, smartphones, or even web repositories. Furthermore, since the acquisition is performed at ground level, the users' immediate surroundings, as sensed by a camera device, can provide information at a very high level of precision, making it possible to update the land cover type of the geographic area. In the method described herein, we propose to use inverse perspective mapping (inverse warping) to transform the geo-tagged ground-level 360° photo into a top-down view, as if it had been acquired from a nadir aerial viewpoint. Once re-projected, the warped photo is compared to a previously acquired remotely sensed image using standard techniques such as correlation. Wide differences in orientation, resolution, and geographical extent between the top-down view and the aerial image are addressed through specific processing steps (e.g., registration). Experiments on publicly available datasets made of both ground-level photos and aerial images show promising results for updating land cover maps with mobile technologies. Finally, the proposed approach contributes to crowdsourcing efforts in geo-information processing and mapping, providing hints on the evolution of a landscape.
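    A minimal sketch of the inverse perspective mapping and correlation steps, assuming OpenCV and that four ground-plane correspondences in the photo are already known (the full registration chain described in the paper is more involved):

```python
import cv2
import numpy as np

def inverse_perspective_map(ground_img, src_quad, out_size=(512, 512)):
    """Warp a ground-level photo onto a synthetic top-down (nadir) view.

    src_quad: four pixel coordinates in the ground photo that correspond to
    the corners of a rectangle on the ground; their choice depends on camera
    height, tilt and field of view and is assumed to be known here.
    """
    w, h = out_size
    dst_quad = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(ground_img, H, out_size)

def correlate_with_aerial(warped, aerial_patch):
    """Compare the warped photo with a co-located aerial patch (NCC score)."""
    g1 = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g2 = cv2.cvtColor(aerial_patch, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g2 = cv2.resize(g2, (g1.shape[1], g1.shape[0]))   # equal sizes -> 1x1 result
    score = cv2.matchTemplate(g1, g2, cv2.TM_CCOEFF_NORMED)
    return float(score[0, 0])
```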

    Intelligent road lane mark extraction using a Mobile Mapping System

    During the last years, road landmark inventory has raised increasing interest in different areas: the maintenance of transport infrastructures, road 3D modelling, GIS applications, etc. Lane mark detection is posed as a two-class classification problem over a highly class-imbalanced dataset. To cope with this imbalance, we have applied Active Learning approaches. This Thesis has been divided into two main computational parts. In the first part, we evaluated different Machine Learning approaches on panoramic images obtained from an image sensor, such as Random Forest (RF) and ensembles of Extreme Learning Machines (V-ELM), obtaining satisfactory results in the detection of continuous road lane marks. In the second part of the Thesis, we applied a Random Forest algorithm to a LiDAR point cloud, obtaining a georeferenced classification of horizontal road signs. We have not only identified continuous lines but have also been able to identify every horizontal lane mark detected by the LiDAR sensor.
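    The abstract does not give the exact Active Learning strategy; the sketch below shows one common variant, pool-based uncertainty sampling with a class-weighted Random Forest (scikit-learn), purely as an illustration of how the class imbalance and the labelling effort might be handled:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling_round(X_labeled, y_labeled, X_pool, n_query=100):
    """One round of pool-based active learning with a Random Forest.

    Trains on the current labeled set and returns the indices of the pool
    samples the forest is least certain about (smallest class-probability
    margin). Class weighting is one common way to handle the lane / non-lane
    imbalance; the strategy actually used in the thesis may differ.
    """
    clf = RandomForestClassifier(
        n_estimators=200, class_weight="balanced", n_jobs=-1, random_state=0
    )
    clf.fit(X_labeled, y_labeled)

    proba = clf.predict_proba(X_pool)                 # shape (n_pool, 2)
    margin = np.abs(proba[:, 1] - proba[:, 0])        # small margin = uncertain
    query_idx = np.argsort(margin)[:n_query]          # send these to an oracle
    return clf, query_idx
```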

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not to the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data and the time spent collecting data are thus reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single-point-positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model-tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure to improve runtime performance.
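    As an illustration of the pose by-product mentioned above, the following is a minimal sketch, assuming OpenCV and already-matched 3D-2D correspondences, of recovering a camera pose from a known model (a generic PnP step, not the dissertation's tracker):

```python
import cv2
import numpy as np

def estimate_camera_pose(model_points, image_points, K, dist_coeffs=None):
    """Recover camera pose from known 3D model points and their 2D projections.

    This is the generic PnP step underlying model-based tracking: given
    correspondences between features of a 3D building model and the image,
    solve for the camera rotation and translation. The matching of linear
    features used in the dissertation is not reproduced here; the
    correspondences are assumed to be given.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points, dtype=np.float64),   # N x 3, N >= 6
        np.asarray(image_points, dtype=np.float64),   # N x 2
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)           # rotation matrix (world -> camera)
    cam_center = (-R.T @ tvec).ravel()   # camera position in model coordinates
    return R, tvec, cam_center
```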
Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose-estimation accuracy relative to ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame; initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical-line matching to register reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Furthermore, the number of incorrect matches was reduced by 80%.
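The geometric-hashing registration itself is not detailed in the abstract; the sketch below illustrates the general idea on a deliberately simplified 1-D case, hashing the horizontal positions of vertical lines in a coordinate that is invariant to horizontal translation and scale. The bin size and the 1-D reduction are assumptions for illustration only.

```python
from collections import defaultdict
from itertools import combinations

def hash_key(value, bin_size=0.05):
    """Quantize an invariant coordinate into a hash-table bin (bin size assumed)."""
    return round(value / bin_size)

def build_hash_table(model_views):
    """Geometric hashing over vertical-line abscissae.

    model_views: {view_id: list of vertical-line x positions in that view}.
    For every basis pair (a, b) the remaining lines are expressed as
    (x - a) / (b - a), which is invariant to horizontal translation and scale.
    """
    table = defaultdict(list)
    for view_id, xs in model_views.items():
        for a, b in combinations(xs, 2):
            for x in xs:
                if x in (a, b):
                    continue
                table[hash_key((x - a) / (b - a))].append((view_id, (a, b)))
    return table

def vote(table, image_xs):
    """Vote for the model view (and basis pair) that best explains the image lines."""
    votes = defaultdict(int)
    for a, b in combinations(image_xs, 2):
        for x in image_xs:
            if x in (a, b):
                continue
            for entry in table.get(hash_key((x - a) / (b - a)), []):
                votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```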