2,467 research outputs found

    Characterization of Road Condition with Data Mining Based on Measured Kinematic Vehicle Parameters

    This work aims at classifying road condition with data-mining methods, using simple acceleration sensors and gyroscopes installed in vehicles. Two classifiers are developed with a support vector machine (SVM) to distinguish between different types of road surface, such as asphalt and concrete, and obstacles, such as potholes or railway crossings. Frequency-based features are extracted from the sensor signals and evaluated automatically with MANOVA. The selected features and their relevance for predicting the classes are discussed, and the best features are used to design the classifiers. Finally, the methods developed and applied in this work are implemented in a Matlab toolbox with a graphical user interface. The toolbox visualizes the classification results on maps, enabling manual verification of the results. Cross-validation yields an average accuracy of 81.0% for classifying obstacles and 96.1% for classifying road material. The results are discussed on a comprehensive exemplary data set.
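The frequency-based feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the band edges, sample rate, and synthetic signals are invented, and the paper ranks features with MANOVA and classifies with an SVM rather than just inspecting band energies.

```python
import numpy as np

def band_energies(signal, fs, bands):
    """Energy of the power spectrum in each (lo, hi) frequency band."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

# Synthetic vertical-acceleration traces: smooth asphalt (low-frequency
# content only) vs. a pothole strike (short broadband transient).
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
asphalt = 0.1 * np.sin(2 * np.pi * 2.0 * t)
pothole = asphalt.copy()
pothole[100:105] += 2.0  # impulsive event from hitting the pothole

bands = [(0, 5), (5, 20), (20, 50)]  # hypothetical band edges, Hz
f_a = band_energies(asphalt, fs, bands)
f_p = band_energies(pothole, fs, bands)

# The impulse spreads energy into the higher bands; this kind of
# separation is what an SVM (or MANOVA-based ranking) can exploit.
print(f_a, f_p)
```

In practice each window of sensor data would be reduced to such a feature vector before training the classifier.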

    A Radar-Enabled Collaborative Sensor Networking Integrating COTS Technology for Surveillance and Tracking

    The feasibility of using Commercial Off-The-Shelf (COTS) sensor nodes in a distributed network is studied, aiming at dynamic surveillance and tracking of ground targets. Data acquisition by a low-cost (<$50 US) miniature low-power radar through a wireless mote is described. We demonstrate the detection, ranging, velocity estimation, classification, and tracking capabilities of the mini-radar, and compare the results to simulations and manual measurements. Furthermore, we supplement the radar output with other sensor modalities, such as acoustic and vibration sensors. This method provides innovative solutions for detecting, identifying, and tracking vehicles and dismounts over a wide area in noisy conditions. This study presents a step towards distributed intelligent decision support and demonstrates the effectiveness of small, cheap sensors, which can complement advanced technologies in certain real-life scenarios.
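The velocity-estimation capability of a low-cost Doppler radar can be sketched as below. The carrier frequency, sample rate, and noise level are assumptions chosen for illustration (a common 24 GHz COTS module); the paper's mini-radar may differ.

```python
import numpy as np

c = 3e8
f_carrier = 24.125e9             # Hz; hypothetical 24 GHz COTS module
wavelength = c / f_carrier       # ~12.4 mm

fs = 2000.0                      # baseband sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
v_true = 3.0                     # target speed, m/s
f_doppler = 2 * v_true / wavelength  # ~483 Hz Doppler shift

rng = np.random.default_rng(0)
baseband = np.sin(2 * np.pi * f_doppler * t) + 0.2 * rng.normal(size=len(t))

# Estimate the Doppler shift as the peak of the magnitude spectrum
# (skipping the DC bin), then invert v = f_d * wavelength / 2.
spec = np.abs(np.fft.rfft(baseband))
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1]
v_est = f_est * wavelength / 2
print(round(v_est, 2))
```

A 1 s window gives 1 Hz spectral resolution, i.e. roughly 6 mm/s velocity resolution at this carrier, which is ample for tracking vehicles and dismounts.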

    High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework

    High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification. We present a progressive multiple-target detection approach that estimates the number of targets sequentially, and implement it using a mobile-agent framework. To further improve performance, we present a cluster-based distributed approach in which the estimated results from different clusters are fused. Experimental results show that the distributed scheme with the Bayesian fusion method has the best performance, in the sense that it has the highest detection probability and the most stable results. In addition, the progressive intra-cluster estimation can reduce data transmission by 83.22% and conserve energy by 81.64% compared to the centralized scheme. For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement of classification accuracy across the levels of the hierarchy. Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information, and of modifying the knowledge base dynamically. We present a collaborative method to solve this problem among multiple sensors.
    When applied to the military-vehicle data set collected in a field demo, about 80% of the unknown target samples can be recognized correctly, while the known-target classification accuracy stays above 95%.
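The Bayesian fusion of per-cluster estimates can be sketched as below. This assumes conditional independence across clusters and a uniform prior; the class names and posterior values are invented for illustration, and the dissertation's actual fusion rule may be more elaborate.

```python
import numpy as np

def bayesian_fusion(posteriors):
    """Fuse per-cluster class posteriors assuming conditional independence:
    p(c | all clusters) is proportional to the product of p(c | cluster_k),
    under a uniform prior."""
    fused = np.prod(np.asarray(posteriors, dtype=float), axis=0)
    return fused / fused.sum()

# Three clusters report posteriors over hypothetical classes
# {wheeled, tracked, unknown}; each row is one cluster's estimate.
cluster_posts = [
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
]
fused = bayesian_fusion(cluster_posts)
print(fused.argmax(), fused.round(3))
```

Note how the multiplicative rule sharpens agreement: three moderately confident votes for the first class yield a fused posterior well above any individual cluster's.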

    An Ontological Approach to Inform HMI Designs for Minimizing Driver Distractions with ADAS

    ADAS (Advanced Driver Assistance Systems) are in-vehicle systems designed to enhance driving safety and efficiency, as well as driver comfort. Recent studies have noticed that when the Human Machine Interface (HMI) is not designed properly, an ADAS can cause distraction, which affects its usage and can even lead to safety issues. Current understanding of these issues is limited by the context-dependent nature of such systems. This paper reports the development of a holistic conceptualisation of how drivers interact with ADAS and how such interaction could lead to potential distraction. This is done by taking an ontological approach to contextualise the potential distraction, driving tasks, and user interactions centred on the use of ADAS. Example scenarios are also given to demonstrate how the developed ontology can be used to deduce rules for identifying distraction from ADAS and to inform future designs.
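A rule deduced from such an ontology might look like the sketch below. The class names, modalities, and the rule itself are hypothetical, invented here to show the shape of ontology-derived distraction checks, not taken from the paper's ontology.

```python
# Hypothetical rule: an ADAS interaction that demands visual attention
# while the driver is performing a high-load manoeuvre is flagged as a
# potential distraction. All names here are illustrative.
HIGH_LOAD_TASKS = {"overtaking", "merging", "turning"}

def potential_distraction(interaction):
    visual = interaction["modality"] == "visual"
    high_load = interaction["driving_task"] in HIGH_LOAD_TASKS
    return visual and high_load

alerts = [
    {"modality": "visual",   "driving_task": "overtaking"},
    {"modality": "auditory", "driving_task": "overtaking"},
    {"modality": "visual",   "driving_task": "cruising"},
]
flags = [potential_distraction(a) for a in alerts]
print(flags)  # only the visual alert during overtaking is flagged
```

In an actual ontology-based system such rules would be expressed over ontology classes and properties (e.g. in SWRL or SPARQL) rather than hard-coded predicates.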

    A New Terrain Classification Framework Using Proprioceptive Sensors for Mobile Robots

    Mobile robots that operate in real-world environments interact with the surroundings to generate complex acoustic and vibration signals, which carry rich information about the terrain. This paper presents a new terrain classification framework that utilizes both the acoustic and vibration signals resulting from the robot-terrain interaction. As an alternative to handcrafted domain-specific feature extraction, a two-stage feature selection method combining the ReliefF and mRMR algorithms was developed to select optimal feature subsets that carry more discriminative information. As different data sources can provide complementary information, a multiclassifier combination method was proposed that considers a priori knowledge and fuses predictions from five data sources: one acoustic data source and four vibration data sources. In this study, four conceptually different classifiers were employed to perform the classification, each with a different number of optimal features. Signals were collected using a tracked robot moving at three different speeds on six different terrains. The new framework successfully improved the classification performance of the different classifiers using the newly developed optimal feature subsets. The greatest improvement was observed when the robot traversed at lower speeds.
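The second stage of the feature selection, mRMR (maximum relevance, minimum redundancy), can be sketched as a greedy search over discrete features. This toy version uses a plug-in mutual-information estimate and invented data; the paper additionally applies ReliefF as a first stage and works on continuous signal features.

```python
from math import log

def mutual_info(x, y):
    """Plug-in MI between two discrete sequences (natural log)."""
    n = len(x)
    mi = 0.0
    for xv in set(x):
        for yv in set(y):
            pxy = sum(1 for a, b in zip(x, y) if a == xv and b == yv) / n
            if pxy == 0:
                continue
            px = sum(1 for a in x if a == xv) / n
            py = sum(1 for b in y if b == yv) / n
            mi += pxy * log(pxy / (px * py))
    return mi

def mrmr(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximizing relevance
    I(f; y) minus its mean redundancy with already-selected features."""
    selected = []
    while len(selected) < k:
        best, best_score = None, -float("inf")
        for i, f in enumerate(features):
            if i in selected:
                continue
            rel = mutual_info(f, labels)
            red = (sum(mutual_info(f, features[j]) for j in selected)
                   / len(selected)) if selected else 0.0
            if rel - red > best_score:
                best, best_score = i, rel - red
        selected.append(best)
    return selected

labels = [0, 0, 0, 1, 1, 1]
f0 = [0, 0, 1, 1, 1, 1]  # noisy indicator of the label
f1 = [0, 0, 1, 1, 1, 1]  # exact copy of f0: fully redundant
f2 = [0, 0, 0, 1, 1, 0]  # equally relevant, but makes a different error
order = mrmr([f0, f1, f2], labels, 2)
print(order)
```

The redundant copy f1 is skipped in favour of f2, which carries complementary information, illustrating why the combined selection outperforms relevance ranking alone.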

    Biorefarmeries: Milking ethanol from algae for the mobility of tomorrow

    The idea of this project is to exploit microalgae to their full potential, possibly proposing a sort of fourth-generation fuel based on continuous milking of macro- and microorganisms (like cows on a dairy farm), which produce fuel by photosynthetic reactions. This project proposes a new transportation concept supported by a new socio-economic approach, in which biofuel production is based on biorefarmeries delivering fourth-generation fuels with decarbonization capabilities and potentially negative CO2 emissions, plus positive impacts on mobility, the automotive industry, health, the environment, and the economy.

    Integrating Haptic Feedback into Mobile Location Based Services

    Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled device such as a mobile phone. Historically, human-computer interaction has been visual: text and images on the screen. Haptic feedback can be an important additional method, especially in Mobile Location Based Services such as knowledge discovery, pedestrian navigation, and notification systems. A knowledge discovery system called the Haptic GeoWand is a low-interaction system that allows users to query geo-tagged data around them by using a point-and-scan technique with their mobile device. Haptic Pedestrian is a navigation system for walkers; four prototypes have been developed, classified according to the user's guidance requirements, the user type (based on spatial skills), and overall system complexity. Haptic Transit is a notification system that provides spatial information to the users of public transport. In all these systems, haptic feedback is used to convey information about location, orientation, density, and distance through the vibration alarm, with varying frequencies and patterns, to help users understand the physical environment. Trials elicited positive responses from the users, who see benefit in being provided with a "heads up" approach to mobile navigation. Results from a memory recall test show that users of haptic feedback for navigation had better memory recall of the region traversed than users of landmark images. Haptics integrated into a multi-modal navigation system provides more usable, less distracting, yet more effective interaction than conventional systems. Enhancements to the current work could include integration of contextual information, detailed large-scale user trials, and the exploration of using haptics within confined indoor spaces.
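Encoding distance in a vibration pattern, as these systems do, can be sketched as below. The ranges, durations, and pulse counts are invented for illustration; the actual prototypes use their own frequencies and patterns.

```python
# Hypothetical encoding in the spirit of Haptic Pedestrian: nearer
# targets produce faster pulse trains. An Android-style pattern is a
# list of alternating off/on durations in milliseconds.
def vibration_pattern(distance_m, pulse_ms=100):
    if distance_m > 100:
        return []                       # out of range: no feedback
    gap_ms = int(50 + 10 * distance_m)  # 50 ms gap at 0 m, 1050 ms at 100 m
    return [gap_ms, pulse_ms] * 3       # three pulses

near = vibration_pattern(5)   # short gaps: urgent, target is close
far = vibration_pattern(80)   # long gaps: relaxed, target is distant
print(near, far)
```

Varying the gap rather than the pulse length keeps each buzz recognizable while the rhythm alone conveys distance, which is easier to perceive eyes-free.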

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving and driver assistance, by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications, and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms, and, therefore, monitoring is not limited to confined areas, and extends to wherever the subject may travel including indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera and accelerometer based fall-detection on Samsung Galaxy S™ 4. We fuse these two sensor modalities to have a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative-entropy within the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera. 
This algorithm provides a more accurate step count than using only accelerometer data in smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings by using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured from vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
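The Chain Code Histogram descriptor used in the traffic sign detector can be sketched as follows. The contour and the sign-based direction approximation are simplified for illustration; the thesis computes these on real sign contours alongside Aggregate Channel Features.

```python
import numpy as np

# 8-direction Freeman chain code: a step (dx, dy) between successive
# contour points maps to a code 0..7. The normalized histogram of codes
# is a compact shape descriptor.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code_histogram(contour):
    codes = []
    # Close the contour by pairing the last point with the first.
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        # For non-unit steps this sign-based rule only approximates
        # the direction; real contours are traced pixel by pixel.
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        if step != (0, 0):
            codes.append(DIRS[step])
    hist = np.bincount(codes, minlength=8).astype(float)
    return hist / hist.sum()

# Unit square traced counter-clockwise: equal counts of the four
# axis-aligned directions (codes 0, 2, 4, 6).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
h = chain_code_histogram(square)
print(h)
```

Because the histogram is cheap to compute and fixed-length, it pairs well with fast boosted detectors, which is the efficiency argument made against heavier deep-network pipelines.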

    FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space

    This paper proposes a novel contrastive learning framework, called FOCAL, for extracting comprehensive features from multimodal time-series sensing signals through self-supervised training. Existing multimodal contrastive frameworks mostly rely on the shared information between sensory modalities, but do not explicitly consider the exclusive modality information that could be critical to understanding the underlying sensing physics. Moreover, contrastive frameworks for time series have not handled temporal information locality appropriately. FOCAL solves these challenges by making the following contributions: First, given a multimodal time series, it encodes each modality into a factorized latent space consisting of shared features and private features that are orthogonal to each other. The shared space emphasizes feature patterns consistent across sensory modalities through a modal-matching objective. In contrast, the private space extracts modality-exclusive information through a transformation-invariant objective. Second, we propose a temporal structural constraint for modality features, such that the average distance between temporally neighboring samples is no larger than that of temporally distant samples. Extensive evaluations are performed on four multimodal sensing datasets with two backbone encoders and two classifiers to demonstrate the superiority of FOCAL. It consistently outperforms the state-of-the-art baselines in downstream tasks by a clear margin, under different ratios of available labels. The code and self-collected dataset are available at https://github.com/tomoyoshki/focal.
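The orthogonality between shared and private features can be illustrated with a simplified penalty. This is a sketch of the idea, not FOCAL's actual objective: the loss form, dimensions, and the analytically orthogonal construction are invented here for demonstration.

```python
import numpy as np

def orthogonality_penalty(shared, private):
    """Mean squared cosine similarity between paired shared/private
    embeddings; zero when each pair is perpendicular."""
    s = shared / np.linalg.norm(shared, axis=1, keepdims=True)
    p = private / np.linalg.norm(private, axis=1, keepdims=True)
    return float(np.mean(np.sum(s * p, axis=1) ** 2))

rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 4))

# Private embeddings that nearly duplicate the shared ones: penalized.
aligned = shared + 0.1 * rng.normal(size=(8, 4))

# Private embeddings built to be exactly orthogonal row-wise:
# (a, b, c, d) -> (b, -a, d, -c) has zero dot product with the original.
orth = np.stack([shared[:, 1], -shared[:, 0],
                 shared[:, 3], -shared[:, 2]], axis=1)

loss_aligned = orthogonality_penalty(shared, aligned)
loss_orth = orthogonality_penalty(shared, orth)
print(loss_aligned, loss_orth)
```

Minimizing such a penalty pushes the private space to encode only what the shared, cross-modal space does not, which is the factorization the paper argues is needed to capture modality-exclusive sensing physics.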