
    Information fusion for context awareness in intelligent environments

    The development of intelligent environments requires handling data perceived from users, received from environments, and gathered from objects. Such data are often used in machine learning tasks to predict actions, to anticipate needs and desires, and to provide additional context to applications. It is therefore often necessary to perform operations on the collected data, such as pre-processing, information fusion of sensor data, and management of the resulting machine learning models. These models can affect the performance of the platforms and systems used to build intelligent environments. This paper addresses the development of middleware for intelligent systems, using techniques from information fusion and machine learning that provide context awareness and reduce the impact of information acquisition on both storage and energy efficiency. The discussion is presented in the context of PHESS, a project on energy sustainability based on intelligent agents and multi-agent systems, where these techniques are applied.
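As an illustration of the sensor-fusion idea this abstract alludes to, the following is a minimal sketch of inverse-variance weighted fusion of redundant readings; the function name, weights, and sample values are assumptions for illustration, not taken from the PHESS project.

import numpy as np

def fuse_readings(values, variances):
    """Inverse-variance weighted fusion of redundant sensor readings.

    Combining n noisy measurements of the same quantity yields one
    estimate whose variance is lower than any single sensor's, so only
    the fused value needs to be stored (reducing storage impact).
    """
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * values) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Three temperature sensors observing the same room.
fused, var = fuse_readings([21.2, 20.8, 21.5], [0.5, 0.3, 0.8])
print(f"fused temperature: {fused:.2f} C (variance {var:.3f})")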

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, where it is near impossible to program decision-making logic for every eventuality manually, data-driven solutions need to become commonplace in intelligent mobility. While recent developments in data-driven solutions such as deep learning allow machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce.
    Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road and highway environments, discounting pedestrianized areas and indoor environments. These unstructured environments tend to be more cluttered and to change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions are robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility: multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. The work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.
    To facilitate the autonomous decisions needed to derive safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset, gathered using an autonomous platform designed and developed in house as part of this research. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera) and uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that this online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings.
    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence; within intelligent mobility, it is imperative that an autonomous vehicle be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding three-dimensional data are converted to a Fisher Vector representation before being classified by a deep convolutional neural network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3% and outperforms an alternative point-cloud classifier, PointNet [1], [2], on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by accelerating the development of intelligent driverless vehicles.
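A minimal sketch of the uncertainty-driven fusion and active-learning trigger described above: two modalities each report a free-space probability and a confidence, the fused estimate leans on the more certain modality, and a label is requested when the fused estimate sits near the decision boundary. The function names, weighting scheme, and margin are assumptions for illustration, not the thesis's actual implementation.

import numpy as np

def fuse_free_space(p_camera, p_ultrasound, conf_camera, conf_ultrasound):
    """Confidence-weighted fusion of two free-space probability estimates.

    Each modality reports a probability that a cell is free plus a
    confidence in [0, 1]; the fused estimate is weighted toward
    whichever modality is currently more certain.
    """
    w = np.array([conf_camera, conf_ultrasound], dtype=float)
    w /= w.sum()
    return w[0] * p_camera + w[1] * p_ultrasound

def should_query_label(p_fused, margin=0.15):
    """Active-learning trigger: ask for a label (and update the model
    online) when the fused probability is close to the 0.5 boundary."""
    return abs(p_fused - 0.5) < margin

# A camera fairly sure the cell is free; a weakly disagreeing ultrasound echo.
p = fuse_free_space(p_camera=0.8, p_ultrasound=0.4,
                    conf_camera=0.9, conf_ultrasound=0.3)
print(p, should_query_label(p))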

    Towards Odor-Sensitive Mobile Robots

    J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244-263, 2018, doi:10.4018/978-1-5225-3862-2.ch012. Preprint version, with the publisher's permission.
    Of all the components of a mobile robot, its sensorial system is undoubtedly among the most critical when operating in real environments. Until now, these sensorial systems have mostly relied on range sensors (laser scanner, sonar, active triangulation) and cameras. While electronic noses have barely been employed, they can provide complementary sensory information that is vital for some applications, as it is for humans. This chapter analyzes the motivation for providing a robot with gas-sensing capabilities and reviews some of the hurdles that are preventing smell from achieving the importance of other sensing modalities in robotics. The achievements made so far are reviewed to illustrate the current status of the three main fields within robotic olfaction: the classification of volatile substances, the spatial estimation of gas dispersion from sparse measurements, and the localization of the gas source within a known environment.
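To make the first of those three fields concrete, the following is a minimal sketch of classifying volatile substances from an electronic-nose sensor array with a standard classifier; the sensor responses here are synthetic and the array size, substances, and classifier choice are assumptions, not taken from the chapter.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic steady-state responses of a 4-sensor MOX array to two
# volatiles; a real system would use measured e-nose data.
ethanol = rng.normal([0.8, 0.3, 0.6, 0.2], 0.05, size=(50, 4))
acetone = rng.normal([0.4, 0.7, 0.2, 0.5], 0.05, size=(50, 4))
X = np.vstack([ethanol, acetone])
y = np.array([0] * 50 + [1] * 50)

# Scale features, then fit an RBF support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.78, 0.31, 0.58, 0.22]]))  # -> [0] (ethanol-like)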

    Multi-Sensor System for Land and Forest Fire Detection Application in Peatland Area

    Forest fires have a dangerous impact on the environment and on humans because of the haze and carbon they emit. A common technology for detecting fire hotspots is to use satellite images and process them to determine the number of hotspots and their locations. However, satellite systems cannot see through cloud cover or bad weather. This research proposes a ground-based sensor system that uses several sensors tied to indicators of fire, particularly fires in peatland areas, which have unique characteristics. Common fire parameters such as temperature, smoke, haze, and carbon dioxide are each measured with a dedicated sensor. The readings from every sensor are analyzed by software that applies an algorithm to determine fire hotspots and their locations. Hotspot locations and intensities determined by integrating multiple sensors are more accurate than those determined by a single sensor. Data collected from every sensor are kept in a database, and graphs are generated for reporting and record keeping. When sensor readings exceed the configured thresholds, the detected fire potential and hotspots can be forwarded to the responsible department for appropriate action.
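A minimal sketch of the multi-sensor integration idea: each sensor votes when its reading exceeds a threshold, and a weighted sum of votes decides whether a hotspot is reported, so agreement across modalities raises confidence and a single noisy sensor cannot trigger an alarm alone. The thresholds, weights, and alarm level are placeholders for illustration, not the paper's calibrated values.

from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float   # ambient temperature
    smoke_ppm: float       # smoke density
    co2_ppm: float         # carbon dioxide concentration

# Placeholder thresholds and fusion weights; a deployed system would
# calibrate these for peatland fire signatures.
THRESHOLDS = Reading(temperature_c=45.0, smoke_ppm=150.0, co2_ppm=1500.0)
WEIGHTS = (0.4, 0.35, 0.25)
ALARM_LEVEL = 0.6

def fire_score(r: Reading) -> float:
    """Weighted vote over sensors: each contributes its weight when it
    exceeds its threshold."""
    votes = (r.temperature_c > THRESHOLDS.temperature_c,
             r.smoke_ppm > THRESHOLDS.smoke_ppm,
             r.co2_ppm > THRESHOLDS.co2_ppm)
    return sum(w for w, v in zip(WEIGHTS, votes) if v)

r = Reading(temperature_c=52.0, smoke_ppm=210.0, co2_ppm=900.0)
print(fire_score(r), fire_score(r) >= ALARM_LEVEL)  # 0.75 True -> report hotspot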

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? Is SLAM solved?
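The de-facto standard formulation the abstract refers to is maximum a posteriori estimation over a factor graph; in commonly used notation (assumed here, not quoted verbatim from the paper), with trajectory-and-map variables X, measurements Z = {z_k}, measurement models h_k, and Gaussian noise with information matrices Omega_k, it can be written as

\[
\mathcal{X}^{\star}
  = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
  = \operatorname*{arg\,min}_{\mathcal{X}} \;
    \sum_{k=1}^{m} \bigl\lVert h_k(\mathcal{X}_k) - z_k \bigr\rVert^{2}_{\Omega_k},
\]

a nonlinear least-squares problem typically solved with iterative methods such as Gauss-Newton or Levenberg-Marquardt.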