11 research outputs found

    3D virtualization of an underground semi-submerged cave system

    Underwater caves represent the most challenging scenario for exploration, mapping and 3D modelling. In such complex environments, unsuitable for humans, highly specialized skills and expensive equipment are normally required. Technological progress and scientific innovation nowadays aim to develop safer and more automated approaches for the virtualization of these complex and not easily accessible environments, which constitute a unique natural, biological and cultural heritage. This paper presents a pilot study realised for the virtualization of 'Grotta Giusti' (Fig. 1), an underground semi-submerged cave system in central Italy. After an introduction to the virtualization process in the cultural heritage domain and a review of techniques and experiences for the virtualization of underground and submerged environments, the paper focuses on the employed virtualization techniques. In particular, the approach developed to simultaneously survey the semi-submerged areas of the cave, relying on a stereo camera system, and the creation of the virtual cave are discussed.
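A calibrated stereo rig like the one described recovers depth by triangulation. A minimal sketch of that relation follows; the focal length and baseline values are illustrative assumptions, not the parameters of the actual survey rig:

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal length
# (pixels), B the camera baseline (meters) and d the disparity (pixels).

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth (m) of a point observed with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point with 40 px disparity, seen by a rig with f = 800 px and B = 0.12 m:
z = depth_from_disparity(800.0, 0.12, 40.0)  # 2.4 m
```

Smaller disparities map to larger depths, which is why baseline choice matters for the working range of such a rig.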

    Outdoor navigation of mobile robots

    AGVs in the manufacturing industry currently constitute the largest application area for mobile robots. Other applications have been gradually emerging, including various transporting tasks in demanding environments such as mines or harbours. Most of the new potential applications require a free-ranging navigation system, which means that the path of a robot is no longer bound to follow a buried inductive cable. Moreover, changing the route of a robot or taking a new working area into use must be as effective as possible. These requirements set new challenges for the navigation systems of mobile robots. One of the basic methods of building a free-ranging navigation system is to combine dead reckoning navigation with the detection of beacons at known locations. This approach is the backbone of the navigation systems in this study. The study describes research and development work in the area of mobile robotics, including applications in forestry, agriculture, mining, and transportation in a factory yard. The focus is on describing navigation sensors and methods for position and heading estimation by fusing dead reckoning and beacon detection information. A Kalman filter is typically used here for sensor fusion. Both artificial and natural beacons are covered. Artificial beacons used in the research and development projects include specially designed flat objects detected with a camera, the GPS satellite positioning system, and passive transponders buried in the ground along the route of a robot. The walls of a mine tunnel have been used as natural beacons; in this case, special attention has been paid to map building and to using the map for positioning. The main contribution of the study is in describing the structure of a working navigation system, including positioning and position control. The navigation system for the mining application, in particular, contains some unique features that provide an easy-to-use procedure for taking new production areas into use and make it possible to drive a heavy mining machine autonomously at speeds comparable to those of an experienced human driver.
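The dead-reckoning-plus-beacon fusion described above can be sketched as a minimal one-dimensional Kalman filter: odometry drives the prediction step, and a range measurement to a beacon at a known location corrects the drift. The beacon position and noise variances below are illustrative assumptions, not values from the study:

```python
# Minimal 1-D Kalman filter fusing dead reckoning (odometry) with range
# measurements to a beacon at a known position.

BEACON_POS = 10.0      # known beacon location (m) -- assumed for illustration
Q = 0.05               # odometry (process) noise variance, assumed
R = 0.5                # beacon range measurement noise variance, assumed

def predict(x, p, odo_step):
    """Dead-reckoning step: integrate odometry, grow the uncertainty."""
    return x + odo_step, p + Q

def update(x, p, measured_range):
    """Beacon step: correct the position estimate using the measured range."""
    predicted_range = BEACON_POS - x   # measurement model (robot left of beacon)
    innovation = measured_range - predicted_range
    h = -1.0                           # d(range)/dx
    s = h * p * h + R                  # innovation variance
    k = p * h / s                      # Kalman gain
    return x + k * innovation, (1 - k * h) * p

# One motion step followed by one beacon fix; the variance p shrinks after the fix.
x, p = predict(0.0, 1.0, odo_step=1.0)
x, p = update(x, p, measured_range=8.9)
```

A full system would carry a 2-D pose with heading and use the same predict/update cycle, as the study does with its various beacon types.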

    Multi-task near-field perception for autonomous driving using surround-view fisheye cameras

    The formation of eyes led to the big bang of evolution. The dynamics changed from a primitive organism waiting for food to come into contact with it, to an organism that actively seeks food using visual sensors. The human eye is one of the most sophisticated developments of evolution, but it still has defects. Over millions of years, humans have evolved a biological perception algorithm capable of driving cars, operating machinery, piloting aircraft, and navigating ships. Automating these capabilities for computers is critical for various applications, including self-driving cars, augmented reality, and architectural surveying. Near-field visual perception in the context of self-driving cars covers the environment in a range of 0-10 meters with 360° coverage around the vehicle. It is a critical decision-making component in the development of safer automated driving. Recent advances in computer vision and deep learning, in conjunction with high-quality sensors such as cameras and LiDARs, have fueled mature visual perception solutions. Until now, far-field perception has been the primary focus. Another significant issue is the limited processing power available for developing real-time applications. Because of this bottleneck, there is frequently a trade-off between performance and run-time efficiency. We concentrate on the following issues in order to address them: 1) developing near-field perception algorithms with high performance and low computational complexity for various visual perception tasks, such as geometric and semantic tasks, using convolutional neural networks; 2) using multi-task learning to overcome computational bottlenecks by sharing initial convolutional layers between tasks and developing optimization strategies that balance the tasks.
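The hard parameter sharing described in point 2 can be sketched with plain NumPy: one shared feature extractor is computed once and feeds two task heads, and a weighted sum balances the task losses. Layer sizes, head names and loss weights are illustrative assumptions, not the thesis architecture:

```python
import numpy as np

# Sketch of hard parameter sharing for multi-task learning: a shared
# "initial layers" block feeds two task heads (e.g. a geometric and a
# semantic task), so the expensive features are computed only once.

rng = np.random.default_rng(0)

W_shared = rng.normal(size=(64, 128))   # shared encoder weights
W_geom = rng.normal(size=(128, 1))      # head 1: e.g. a depth-like regression
W_sem = rng.normal(size=(128, 10))      # head 2: e.g. 10-class segmentation logits

def forward(x):
    feat = np.maximum(0.0, x @ W_shared)      # shared features (ReLU), computed once
    return feat @ W_geom, feat @ W_sem        # both heads reuse the same features

def joint_loss(geom_loss, sem_loss, w_geom=1.0, w_sem=0.5):
    """Static loss balancing; adaptive schemes re-weight these terms during training."""
    return w_geom * geom_loss + w_sem * sem_loss

x = rng.normal(size=(4, 64))                  # a batch of 4 feature vectors
geom_out, sem_out = forward(x)
assert geom_out.shape == (4, 1) and sem_out.shape == (4, 10)
```

The computational saving comes from the single pass through `W_shared`; the balancing strategy decides how gradients from the two heads shape those shared weights.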

    Le nuage de point intelligent [The smart point cloud]

    Discrete spatial datasets known as point clouds often lay the groundwork for decision-making applications. For example, we can use such data as a reference for autonomous cars and robot navigation, as a layer for floor-plan creation and building construction, or as a digital asset for environment modelling and incident prediction. Applications are numerous, and potentially increasing if we consider point clouds as digital reality assets. Yet this expansion faces technical limitations, mainly from the lack of semantic information within point ensembles. Connecting knowledge sources is still a very manual and time-consuming process suffering from error-prone human interpretation. This highlights a strong need for domain-related data analysis to create coherent and structured information. The thesis tackles automation problems in point cloud processing to create intelligent environments, i.e. virtual copies that can be used and integrated in fully autonomous reasoning services. We address point cloud questions associated with knowledge extraction (particularly segmentation and classification), structuration, visualisation and interaction with cognitive decision systems. We propose to connect both point cloud properties and formalized knowledge to rapidly extract pertinent information using domain-centered graphs. The dissertation delivers the concept of a Smart Point Cloud (SPC) Infrastructure, which serves as an interoperable and modular architecture for unified processing. It permits easy integration into existing workflows and multi-domain specialization through device knowledge, analytic knowledge or domain knowledge. Concepts, algorithms, code and materials are given to replicate findings and extend current applications.
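As a toy illustration of the segmentation step mentioned above, the sketch below groups points by Euclidean proximity, a basic region-growing building block; a real SPC pipeline would then attach semantics to each segment. The radius and the sample points are invented for illustration:

```python
import numpy as np

# Toy Euclidean clustering: points closer than `radius` to any member of a
# cluster join that cluster, yielding connected segments of the cloud.

def euclidean_clusters(points: np.ndarray, radius: float) -> list[set[int]]:
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            i = frontier.pop()
            dists = np.linalg.norm(points - points[i], axis=1)
            neighbours = {j for j in unvisited if dists[j] < radius}
            unvisited -= neighbours
            cluster |= neighbours
            frontier.extend(neighbours)
        clusters.append(cluster)
    return clusters

pts = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5.1, 5, 5]])
assert len(euclidean_clusters(pts, 0.5)) == 2   # two well-separated segments
```

Production systems replace the brute-force distance scan with a spatial index (k-d tree or octree), but the grow-from-seed logic is the same.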

    Localisation par vision multi-spectrale (Application aux systèmes embarqués) [Multi-spectral vision localisation for embedded systems]

    The SLAM (Simultaneous Localization and Mapping) problem has been widely studied at LAAS for several years. The target application is a taxiing assistance system for airliners on airports; this system has to work under any visibility and weather conditions (the SART project, funded by the DGE in partnership mainly with FLIR Systems, Latécoère and Thales). Under difficult visibility conditions (low light, fog, rain), a single traditional camera is not enough to perform the localization task. Firstly, this thesis studies what a thermal infrared camera can bring to the SLAM problem, compared to a visible camera, particularly in poor visibility. Secondly, we focus on integrating an Inertial Measurement Unit (IMU) and GPS into the SLAM algorithm, the IMU helping with movement prediction and the GPS with correcting possible divergences. Finally, we add to this SLAM algorithm pseudo-observations produced by matching segments extracted from the images against the same segments contained in a map stored in a database. The goal of the whole set of observations and pseudo-observations is to localize the vehicle to within one meter. Since the algorithms must run on an FPGA with a low-power processor compared to standard PCs (400 MHz), a co-design has to be carried out between the FPGA logic, which processes images on the fly, and the processor, which embeds the Extended Kalman Filter (EKF) for SLAM, in order to guarantee a real-time application at 30 Hz. These algorithms, developed specifically for co-design and avionic embedded systems, will be tested on the LAAS robotic platform, then ported to different development boards (Virtex 5, Raspberry Pi, PandaBoard...) for performance evaluation.
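The IMU-prediction / GPS-correction cycle described above can be sketched as a minimal one-dimensional Kalman filter running at the targeted 30 Hz; all noise values are illustrative assumptions, not the SART project's parameters:

```python
import numpy as np

# Minimal 1-D Kalman filter in the spirit described: the IMU acceleration
# drives the prediction step, GPS position fixes correct the drift.

dt = 1.0 / 30.0                                 # 30 Hz loop, as targeted
F = np.array([[1.0, dt], [0.0, 1.0]])           # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])                 # control input: IMU acceleration
Q = np.eye(2) * 1e-3                            # process noise (assumed)
H = np.array([[1.0, 0.0]])                      # GPS observes position only
R = np.array([[4.0]])                           # GPS noise, ~2 m std (assumed)

def predict(x, P, accel):
    """IMU step: propagate state and covariance with the motion model."""
    return F @ x + B * accel, F @ P @ F.T + Q

def gps_update(x, P, gps_pos):
    """GPS step: correct the predicted state with an absolute position fix."""
    y = gps_pos - H @ x                          # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P
```

The EKF of the thesis extends this linear skeleton with nonlinear measurement models for the image-segment pseudo-observations, but the predict/correct structure is the same.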

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improving sensor performance and creating new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Cognitive Foundations for Visual Analytics

    In this report, we provide an overview of the scientific and technical literature on information visualization and visual analytics (VA). Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We review methodologies for evaluating and measuring the impact of VA technologies, as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions.

    The benefits of additional practice in a descriptive geometry course: a non-obligatory workshop at the Faculty of Civil Engineering in Belgrade

    At the Faculty of Civil Engineering in Belgrade, non-obligatory workshops named “facultative task” have been held in the Descriptive Geometry (DG) course for three generations of freshman students, with the aim of giving students the opportunity to achieve a higher final grade on the exam. The content of this workshop was a creative task, performed by groups of three students, with a free choice of topic, i.e. a geometric structure associated with some real or imaginary architectural or artistic object. After the workshops, a questionnaire (composed by the professors of the course) was given to the students in order to get their response on the teaching and learning materials for the DG course and the workshop. During the workshop, students also performed one of the common tests of spatial ability, named “paper folding”. Based on the results of the questionnaire, we investigate the linkages between students’ final achievements and spatial abilities, students’ expectations of their performance on the exam, and how students’ capacity to correctly estimate their grades was associated with expected and final grades. The goal was to provide evidence that creative work performed by a small group of students, together with self-assessment of their performance, is a good way of helping students maintain motivation and reach their goals. The final conclusion addresses the benefits of employing additional workshops in the course, which is confirmed by higher final scores and grades, the achievement of creative results (facultative tasks) and the consolidation of DG knowledge.
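The association between expected and final grades described above is the kind of relationship a Pearson correlation quantifies; the sketch below uses invented grade data purely for illustration, not the study's actual results:

```python
import numpy as np

# Pearson correlation between students' expected grades and final grades.
# A coefficient close to +1 would indicate that students estimate their
# own performance well; the data here is fabricated for the example.

expected = np.array([7, 8, 6, 9, 10, 7, 8])
final = np.array([6, 8, 7, 9, 9, 6, 8])

r = np.corrcoef(expected, final)[0, 1]
assert -1.0 <= r <= 1.0
```

With real questionnaire data, the same one-liner applied to (spatial-ability score, final grade) pairs would quantify the other linkage the study investigates.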

    Contemporary visualization and modelling technologies and techniques for the design of green roofs

    Contemporary design solutions are merging the boundaries between the real and the virtual world. Landscape architecture, like other interdisciplinary fields, has stepped into an area of contemporary technologies where, besides good execution of the works, design solutions have to be more realistic and “touchable”. The opportunities provided by Virtual Reality (VR) are certainly not negligible; designs around the world are already presented in this way, so VR is increasingly used. Following examples of the application of VR in landscape architecture, this paper proposes uses of VR so that designers, clients and users can get a virtual sense of a space, e.g. a rooftop garden, urban areas, parks, roads, etc. A programming language creates a series of images forming a whole, so certain parts can be controlled or even modified in VR. Virtual reality today requires a specific device, such as the Oculus, HTC Vive, Samsung Gear VR and similar headsets. The aim of this paper is to acquire new theoretical and practical knowledge in the interdisciplinary field of virtual reality and its display methods, and to present, through a brief overview, the plant species used in the design and construction of an intensive roof garden in a Mediterranean climate, the basic characteristics of roof gardens and the benefits they bring. Virtual and augmented reality are very powerful tools for landscape architects when modelling roof gardens, parks and urban areas. One of the most popular technologies used by landscape architects is Google Tilt Brush, which enables fast modelling: the Tilt Brush VR app allows modelling in three-dimensional virtual space using a palette and a three-dimensional brush. The terms for the two “programmed” realities, virtual reality and augmented reality, are often confused. One thing they have in common, though, is VRML (the Virtual Reality Modeling Language). This paper shows ways in which this confusion can be resolved, introduces the term Virtual Reality and the opportunities it offers, and presents the Mediterranean climate conditions, the conceptual solution and the plant species to be used in the execution of an intensive green roof on the motel “Marković”.