
    Review of the mathematical foundations of data fusion techniques in surface metrology

    The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve benefits that a single sensor cannot provide, chiefly a wider spatial frequency bandwidth. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised and potential fusion algorithms are discussed.
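As a toy illustration of the kind of fusion rule such a review covers (not taken from the paper itself), the sketch below fuses two co-registered synthetic height maps by inverse-variance weighting, a standard rule for combining independent measurements; all data and noise levels are invented:

```python
import numpy as np

# Two co-registered height maps of the same synthetic surface patch,
# e.g. from a coarse long-range sensor and a fine short-range sensor.
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 64))[None, :] * np.ones((64, 1))
z_a = truth + rng.normal(0.0, 0.20, truth.shape)   # noisier sensor
z_b = truth + rng.normal(0.0, 0.05, truth.shape)   # finer sensor

# Inverse-variance (maximum-likelihood) fusion of independent measurements:
# z = (z_a/var_a + z_b/var_b) / (1/var_a + 1/var_b)
var_a, var_b = 0.20**2, 0.05**2
w_a, w_b = 1.0 / var_a, 1.0 / var_b
z_fused = (w_a * z_a + w_b * z_b) / (w_a + w_b)

# The fused map is expected to be at least as accurate as the best input.
rmse = lambda z: float(np.sqrt(np.mean((z - truth) ** 2)))
print(rmse(z_a), rmse(z_b), rmse(z_fused))
```

Real surface-metrology fusion must also handle registration, differing lateral resolutions, and spatially varying uncertainty, which this sketch deliberately omits.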

    LiDAR based multi-sensor fusion for localization, mapping, and tracking

    The development of fully autonomous driving vehicles has become a key focus for both industry and academia over the past decade, fostering significant progress in situational awareness and sensor technology. Among the various types of sensors, the LiDAR sensor has emerged as a pivotal component in many perception systems due to its long-range detection capabilities, precise 3D range information, and reliable performance in diverse environments. With advancements in LiDAR technology, more reliable and cost-effective sensors have shown great potential for improving situational awareness in widely used consumer products. By leveraging these novel LiDAR sensors, researchers now have a diverse set of powerful tools to tackle the persistent challenges in localization, mapping, and tracking within existing perception systems. This thesis explores LiDAR-based sensor fusion algorithms to address perception challenges in autonomous systems, with a primary focus on dense mapping and global localization using diverse LiDAR sensors. The research integrates novel LiDAR, IMU, and camera sensors to create a comprehensive dataset essential for developing advanced sensor fusion and general-purpose localization and mapping algorithms.
    Innovative methodologies for global localization across varied environments are introduced. These methodologies include modular multi-LiDAR odometry and mapping, a robust multi-modal LiDAR-inertial odometry, and a dense mapping framework, which enhance mapping precision and situational awareness. The study also integrates solid-state LiDARs with camera-based deep-learning techniques for object tracking, refining mapping accuracy in dynamic environments. These advancements significantly enhance the reliability and efficiency of autonomous systems in real-world scenarios. The thesis commences with an introduction to the innovative sensors and the data collection platform. It proceeds by presenting an open-source dataset designed for the evaluation of advanced SLAM algorithms, utilizing a unique ground-truth generation method. Subsequently, the study tackles two localization challenges, in forest and urban environments, and highlights the MM-LOAM dense mapping framework. Finally, the research explores object-tracking tasks, employing both camera and LiDAR technologies for human and micro-UAV tracking.
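One basic building block of such LiDAR fusion pipelines, registering scans from different poses into a common world frame, can be sketched in a few lines. The sketch below is not the thesis's actual algorithm: poses, frames, and the scene are invented, and the pose here stands in for what an IMU/odometry front end would estimate:

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about z by theta radians (full 3D attitude omitted for brevity)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def register_scan(points_sensor, R_wb, t_wb):
    """Transform an (N,3) LiDAR scan from the sensor/body frame to the world frame."""
    return points_sensor @ R_wb.T + t_wb

# A wall of points in the world, observed from two different poses.
wall = np.array([[5.0, y, 1.0] for y in np.linspace(-1, 1, 5)])

# Pose 1: at the origin, no rotation, so the scan equals the wall itself.
scan1 = wall.copy()
# Pose 2: platform moved 1 m forward and yawed 90 deg; simulate the sensor view
# by applying the inverse (world -> sensor) transform.
R2, t2 = yaw_rotation(np.pi / 2), np.array([1.0, 0.0, 0.0])
scan2 = (wall - t2) @ R2

# Registering both scans with their poses should place all points on the wall.
map1 = register_scan(scan1, np.eye(3), np.zeros(3))
map2 = register_scan(scan2, R2, t2)
print(np.abs(map1 - wall).max(), np.abs(map2 - wall).max())
```

A real multi-LiDAR odometry system additionally estimates these poses itself, deskews points within each sweep, and aligns scans with ICP-style registration; the sketch only shows the frame bookkeeping.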

    A Novel Framework for Highlight Reflectance Transformation Imaging

    We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs) acquired in different application contexts, to obtain shape and appearance information of the captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. In addition to supporting easy processing and encoding of pixel data, the tools implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications. Funding: European Union Horizon 2020 (action H2020-EU.3.6.3 - Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; DSURF project (PRIN 2015), funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa.
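The per-pixel fitting step of an MLIC pipeline can be illustrated with a toy Lambertian photometric-stereo fit, a simpler relative of the reflectance-model fitting such tools implement (this is not the H-RTI fitting itself; the light directions and pixel values below are synthetic):

```python
import numpy as np

# Synthetic MLIC measurements for a single pixel: one intensity per light.
lights = np.array([            # unit light directions, one per image (M x 3)
    [0.0, 0.0, 1.0],
    [0.6, 0.0, 0.8],
    [0.0, 0.6, 0.8],
    [-0.6, 0.0, 0.8],
])
true_n = np.array([0.0, 0.0, 1.0])   # surface normal at this pixel
true_albedo = 0.7
intensities = true_albedo * np.clip(lights @ true_n, 0.0, None)

# Least-squares fit of g = albedo * n from the linear model I = L g
# (valid here because every sample is lit; shadowed samples would be masked).
g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
albedo = float(np.linalg.norm(g))
normal = g / albedo
print(albedo, normal)
```

In a full pipeline this fit runs independently at every pixel, with the per-pixel interpolated light directions and normalized intensities the abstract describes taking the place of the fixed `lights` array.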

    A novel segmentation approach for crop modeling using a plenoptic light-field camera: going from 2D to 3D

    Crop phenotyping is a desirable task in crop characterization, since it allows the farmer to make early decisions and therefore be more productive. This research is motivated by the generation of tools for rice crop phenotyping within the OMICAS research ecosystem framework. It proposes implementing image processing technologies and artificial intelligence techniques through a multisensory approach with multispectral information. Three main stages are covered: (i) a segmentation approach that identifies the biological material associated with plants, whose main contribution is the GFKuts segmentation approach; (ii) a strategy for sensory fusion between three different cameras (a 3D camera, an infrared multispectral camera, and a thermal multispectral camera), developed through a complex object detection approach; and (iii) the characterization of a 4D model that generates topological relationships from the point cloud information, whose main contribution is the improvement of the point cloud captured by the 3D sensor; in this sense, this stage improves the acquisition of any 3D sensor. This research presents a development that receives information from multiple sensors, especially infrared 2D sensors, and generates a single 4D model in geometric space [X, Y, Z]. This model integrates the color information of 5 channels with topological information relating the points in space. Overall, the research allows the integration of 3D information from any sensor/technology and the multispectral channels from any multispectral camera, to generate direct non-invasive measurements on the plant.
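The core of attaching multispectral channels to a 3D point cloud can be sketched under a simple pinhole-camera assumption. This is an illustration only, not the OMICAS pipeline: the intrinsics, image, and points below are all invented:

```python
import numpy as np

def project_points(points, fx, fy, cx, cy):
    """Pinhole projection of (N,3) camera-frame points to pixel coordinates."""
    z = points[:, 2]
    u = fx * points[:, 0] / z + cx
    v = fy * points[:, 1] / z + cy
    return np.stack([u, v], axis=1)

def colorize_cloud(points, image, fx, fy, cx, cy):
    """Attach a multispectral pixel vector to every 3D point (nearest pixel)."""
    uv = np.round(project_points(points, fx, fy, cx, cy)).astype(int)
    h, w, _ = image.shape
    uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)
    uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)
    channels = image[uv[:, 1], uv[:, 0]]      # (N, C) spectral vector per point
    return np.hstack([points, channels])       # XYZ plus 5 spectral channels

# Toy scene: 3 points in front of a 4x4 image with 5 spectral channels.
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.0, 0.5, 2.0]])
img = np.arange(4 * 4 * 5, dtype=float).reshape(4, 4, 5)
fused = colorize_cloud(pts, img, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(fused.shape)
```

A working multi-camera fusion additionally needs the extrinsic calibration between the 3D sensor and each camera, and occlusion handling, which the sketch ignores.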

    Development of a GIS-based method for sensor network deployment and coverage optimization

    In recent years, sensor networks have been increasingly used for different applications, ranging from environmental monitoring and tracking of moving objects to the development of smart cities and smart transportation systems. A sensor network usually consists of numerous wireless devices deployed in a region of interest. A fundamental issue in a sensor network is the optimization of its spatial coverage. The complexity of the sensing environment, with the presence of diverse obstacles, results in several uncovered areas. Consequently, sensor placement affects how well a region is covered by sensors as well as the cost of constructing the network.
    For efficient deployment of a sensor network, several optimization algorithms have been developed and applied in recent years. Most of these algorithms rely on oversimplified sensor and network models. In addition, they do not consider spatial environmental information, such as terrain models, human-built infrastructures, and the presence of diverse obstacles, in the optimization process. The global objective of this thesis is to improve sensor deployment processes by integrating geospatial information and knowledge into optimization algorithms. To achieve this objective, three specific objectives are defined. First, a conceptual framework is developed for the integration of contextual information in sensor network deployment processes. Then, a local context-aware optimization algorithm is developed based on the proposed framework. The extended approach is a generic local algorithm for sensor deployment which accepts spatial, temporal, and thematic contextual information in different situations. Next, an accuracy assessment and error propagation analysis is conducted to determine the impact of the accuracy of contextual information on the proposed sensor network optimization method. In this thesis, contextual information has been integrated into local optimization methods for sensor network deployment. The extended algorithm is based on the point Voronoi diagram, used to represent the geometrical structure of sensor networks. In the proposed approach, sensors change their location based on local contextual information (the physical environment, network information, and sensor characteristics), aiming to enhance network coverage. The proposed method is implemented in MATLAB and tested with several data sets obtained from the Quebec City spatial database. The results obtained from different case studies show the effectiveness of our approach.
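Voronoi-based local coverage optimization can be sketched as a plain Lloyd relaxation over a discretized region. This is a minimal sketch in the spirit of such methods, written in Python rather than the thesis's MATLAB, with no obstacles or contextual information; the region size, sensor count, and sensing radius are all invented:

```python
import numpy as np

def lloyd_step(sensors, sample_points):
    """One Lloyd relaxation step: each sensor moves to the centroid of the
    sample points closer to it than to any other sensor (its Voronoi cell)."""
    d = np.linalg.norm(sample_points[:, None, :] - sensors[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    new = sensors.copy()
    for i in range(len(sensors)):
        mine = sample_points[owner == i]
        if len(mine):
            new[i] = mine.mean(axis=0)
    return new

def coverage(sensors, sample_points, radius):
    """Fraction of sample points within sensing range of at least one sensor."""
    d = np.linalg.norm(sample_points[:, None, :] - sensors[None, :, :], axis=2)
    return float((d.min(axis=1) <= radius).mean())

rng = np.random.default_rng(1)
area = rng.uniform(0, 10, (2000, 2))     # discretized 10 x 10 region of interest
sensors = rng.uniform(0, 2, (5, 2))      # poor initial placement in one corner
before = coverage(sensors, area, radius=3.0)
for _ in range(20):                      # iterate until the layout settles
    sensors = lloyd_step(sensors, area)
after = coverage(sensors, area, radius=3.0)
print(before, after)
```

The context-aware extension the thesis describes would replace the uniform sample points and isotropic radius with terrain, obstacle, and sensor-characteristic information when computing cells and movements.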

    PDS-MAR: a fine-grained Projection-Domain Segmentation-based Metal Artifact Reduction method for intraoperative CBCT images with guidewires

    Since the invention of modern CT systems, metal artifacts have been a persistent problem. Due to increased scattering, amplified noise, and insufficient data collection, it is more difficult to suppress metal artifacts in cone-beam CT, limiting its use in human- and robot-assisted spine surgeries where metallic guidewires and screws are commonly used. In this paper, we demonstrate that conventional image-domain segmentation-based MAR methods are unable to eliminate metal artifacts for intraoperative CBCT images with guidewires. To solve this problem, we present a fine-grained projection-domain segmentation-based MAR method termed PDS-MAR, in which metal traces are augmented and segmented in the projection domain before being inpainted using triangular interpolation. In addition, a metal reconstruction phase is proposed to restore metal areas in the image domain. A digital phantom study and a real CBCT data study demonstrate that the proposed algorithm achieves significantly better artifact suppression than competing methods and has the potential to advance the use of intraoperative CBCT imaging in clinical spine surgeries.
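The projection-domain inpainting idea can be illustrated with the classic linear-interpolation MAR step. Note the hedges: the paper itself uses triangular interpolation and a fine-grained metal-trace segmentation, while this sketch uses 1-D linear interpolation on a toy sinogram:

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Replace metal-trace samples in each projection row by linear
    interpolation from neighbouring unaffected detector bins (classic LI-MAR;
    PDS-MAR itself uses triangular interpolation in the projection domain)."""
    out = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for r in range(sinogram.shape[0]):
        bad = metal_mask[r]
        if bad.any() and (~bad).any():
            out[r, bad] = np.interp(cols[bad], cols[~bad], out[r, ~bad])
    return out

# Toy sinogram: smooth background with a bright metal trace in the middle bins.
sino = np.tile(np.linspace(0.0, 1.0, 16), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 7:10] = True
corrupted = sino.copy()
corrupted[mask] += 50.0                  # metal dramatically raises attenuation
repaired = inpaint_metal_trace(corrupted, mask)
print(np.abs(repaired - sino).max())
```

On this linear background the interpolation recovers the original values exactly; real sinograms are not linear, which is why finer interpolation schemes and accurate trace segmentation, the paper's focus, matter.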

    Event-based Motion Segmentation with Spatio-Temporal Graph Cuts

    Identifying independently moving objects is an essential task for dynamic scene understanding. However, traditional cameras used in dynamic scenes may suffer from motion blur or exposure artifacts due to their sampling principle. By contrast, event-based cameras are novel bio-inspired sensors that offer advantages to overcome such limitations. They report pixelwise intensity changes asynchronously, which enables them to acquire visual information at exactly the same rate as the scene dynamics. We develop a method to identify independently moving objects acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem. We cast the problem as an energy minimization problem involving the fitting of multiple motion models. We jointly solve two subproblems, namely event cluster assignment (labeling) and motion model fitting, in an iterative manner by exploiting the structure of the input event data in the form of a spatio-temporal graph. Experiments on available datasets demonstrate the versatility of the method in scenes with different motion patterns and numbers of moving objects. The evaluation shows state-of-the-art results without having to predetermine the number of expected moving objects. We release the software and dataset under an open-source licence to foster research in the emerging topic of event-based motion segmentation.
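The alternation between event labeling and motion-model fitting can be sketched on toy 1-D data. The actual method fits richer motion models over a spatio-temporal graph with a graph-cut energy; everything below, including the constant-velocity models and well-separated objects, is a synthetic simplification:

```python
import numpy as np

def fit_models(t, x, labels, k):
    """Refit one linear motion model x = x0 + v*t per event cluster."""
    models = []
    for i in range(k):
        sel = labels == i
        if sel.sum() >= 2:
            v, x0 = np.polyfit(t[sel], x[sel], 1)   # slope, intercept
        else:
            v, x0 = 0.0, 0.0                        # degenerate cluster
        models.append((v, x0))
    return models

def assign(t, x, models):
    """Relabel each event to the motion model with the smallest residual."""
    res = np.stack([np.abs(x - (x0 + v * t)) for v, x0 in models], axis=1)
    return res.argmin(axis=1)

# Synthetic 1-D "event" stream: two objects with different constant velocities.
rng = np.random.default_rng(2)
t = rng.uniform(0.0, 1.0, 200)
true_label = np.arange(200) % 2
x = np.where(true_label == 0, 1.0 * t, 5.0 - 2.0 * t)
x = x + rng.normal(0.0, 0.01, 200)

labels = (x > x.mean()).astype(int)   # crude initialization
for _ in range(5):                    # alternate model fitting and relabeling
    models = fit_models(t, x, labels, 2)
    labels = assign(t, x, models)

agree = float((labels == true_label).mean())
accuracy = max(agree, 1.0 - agree)    # invariant to label permutation
print(accuracy)
```

The paper's contribution lies precisely where this sketch is weakest: regularizing the labeling over a spatio-temporal graph so that the alternation is robust without knowing the number of objects in advance.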

    Yield prediction by machine learning from UAS‑based multi‑sensor data fusion in soybean

    Nowadays, automated phenotyping of plants is essential for precise and cost-effective improvement of the efficiency of crop genetics. In recent years, machine learning (ML) techniques have shown great success in the classification and modelling of crop parameters. In this research, we consider the capability of ML to perform grain yield prediction in soybean by combining data from different optical sensors via RF (Random Forest) and XGBoost (eXtreme Gradient Boosting). During the 2018 growing season, a panel of 382 soybean recombinant inbred lines was evaluated in a yield trial at the Agronomy Center for Research and Education (ACRE) in West Lafayette (Indiana, USA). Images were acquired by the Parrot Sequoia multispectral sensor and the S.O.D.A. compact digital camera on board a senseFly eBee UAS (Unmanned Aircraft System) at the R4 and early R5 growth stages. Next, a standard photogrammetric pipeline was carried out via SfM (Structure from Motion). The multispectral imagery serves to analyse the spectral response of the soybean end-member in 2D. In addition, the RGB images were used to reconstruct the study area in 3D, evaluating the physiological growth dynamics per plot via height variations and crop volume estimations. As ground truth, destructive grain yield measurements were taken at the end of the growing season. Funding: "Development of Analytical Tools for Drone-based Canopy Phenotyping in Crop Breeding" (American Institute of Food and Agriculture).
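A hedged sketch of the modelling step, Random Forest regression from per-plot UAS features to yield, using scikit-learn. The feature names and values below are entirely synthetic stand-ins; in the study the features come from the multispectral imagery and SfM-derived height and volume estimates:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic per-plot features: an NDVI-like index, canopy height, and canopy
# volume (hypothetical stand-ins for the study's fused sensor features).
rng = np.random.default_rng(3)
n = 382                                   # one row per recombinant inbred line
ndvi = rng.uniform(0.3, 0.9, n)
height = rng.uniform(0.4, 1.2, n)
volume = height * rng.uniform(0.8, 1.2, n)
X = np.column_stack([ndvi, height, volume])
yield_t = 2.0 * ndvi + 1.5 * volume + rng.normal(0, 0.05, n)  # toy ground truth

# Hold out a test split, train the forest, and score on unseen plots.
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(round(r2, 3))
```

Swapping in XGBoost's `XGBRegressor` with the same `fit`/`score` interface reproduces the paper's second modelling variant; the real evaluation would also use cross-validation across field replicates rather than a single split.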