
    Pedestrian Mobility Mining with Movement Patterns

    In street-based mobility mining, pedestrian volume estimation receives increasing attention, as it enables important applications such as billboard evaluation, attraction ranking and emergency support systems. In practice, empirical measurements are sparse due to budget limitations and constrained mounting options, so pedestrian quantities must be estimated in order to perform pedestrian mobility analysis at unobserved locations. Accurate pedestrian mobility analysis is difficult to achieve because individual pedestrians do not select their paths at random (their movement behaviour is motivated), causing pedestrian volumes to be distributed non-uniformly over the traffic network. Existing approaches (pedestrian simulations and data mining methods) are hard to adjust to sensor measurements or require more expensive input data (e.g. high-fidelity floor plans or the total number of pedestrians on the site) and are thus infeasible.

    In order to achieve a mobility model that encodes pedestrian volumes accurately, we propose two methods under the regression framework which overcome the limitations of existing methods. Both methods incorporate not just topological information and episodic sensor readings, but also prior knowledge on movement preferences and movement patterns. The first is based on Least Squares Regression (LSR); its advantages are the easy inclusion of route choice heuristics and its robustness towards contradicting measurements. The second is Gaussian Process Regression (GPR); its advantages are the possibility to include expert knowledge on pedestrian movement and to estimate the uncertainty in predicting the unknown frequencies. Furthermore, the kernel matrix of the pedestrian frequencies returned by the method supports sensor placement decisions. Major benefits of the regression approach are (1) seamless integration of expert data, (2) simple reproduction of sensor measurements, (3) invariance of the results against traffic network homeomorphism and (4) computational complexity that depends not on the number of modeled pedestrians but on the complexity of the traffic network. We compare our novel approaches to a state-of-the-art pedestrian simulation (Generalized Centrifugal Force Model), an existing data mining method for traffic volume estimation (Spatial k-Nearest Neighbour) and commonly used graph kernels for Gaussian Process Regression (Squared Exponential, Regularized Laplacian and Diffusion Kernel) in terms of prediction performance, measured with the mean absolute error. Our methods show significantly lower error rates.

    Since pattern knowledge is not easy to obtain, we present algorithms for pattern acquisition and analysis from Episodic Movement Data. The proposed analysis of Episodic Movement Data involves spatio-temporal aggregation of visits and flows, cluster analyses and dependency models. For pedestrian mobility data collection we further developed and successfully applied the recently evolved Bluetooth tracking technology. The introduced methods are combined into a system for pedestrian mobility analysis which comprises three layers. The Sensor Layer (1) monitors geo-coded sensor recordings of people's presence and passes this episodic movement data as input to the next layer. By using standardized Open Geospatial Consortium (OGC) compliant interfaces for data collection, we support seamless integration of various sensor technologies depending on the application requirements. The Query Layer (2) interacts with the user, who can request analyses for a given region and a certain time interval; results are returned to the user in OGC-conformant Geography Markup Language (GML) format. The user query triggers the Analysis Layer (3), which utilizes the mobility model for pedestrian volume estimation. The proposed approach is promising for location performance evaluation and attractor identification, and it has been successfully applied in several industrial settings: Zurich central train station, the zoo of Duisburg (Germany) and a football stadium (Stade des Costières in Nîmes, France).
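
    As a rough illustration of the Gaussian Process Regression variant with a graph kernel named above, the sketch below runs GPR with a regularized Laplacian kernel on a tiny, made-up traffic network; the network, sensor locations, counts and noise level are all hypothetical stand-ins for the episodic sensor readings described in the abstract, not the authors' actual model or data.

```python
import numpy as np

# Hypothetical toy traffic network: 6 locations (nodes), edges = walkable links.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

# Regularized Laplacian kernel over the network: K = (I + beta * L)^(-1)
beta = 0.5
K = np.linalg.inv(np.eye(n) + beta * L)

# Episodic sensor readings at the observed nodes (made-up counts).
obs, y = np.array([0, 2, 5]), np.array([120.0, 90.0, 60.0])
unobs = np.array([1, 3, 4])               # locations without sensors
noise = 1.0                               # assumed sensor noise variance

# Standard GP posterior (constant prior mean = average observed volume).
K_oo = K[np.ix_(obs, obs)] + noise * np.eye(len(obs))
K_uo = K[np.ix_(unobs, obs)]
mean = y.mean() + K_uo @ np.linalg.solve(K_oo, y - y.mean())
cov = K[np.ix_(unobs, unobs)] - K_uo @ np.linalg.solve(K_oo, K_uo.T)

print("predicted volumes at unobserved nodes:", mean)
print("predictive standard deviation:", np.sqrt(np.diag(cov)))
```

    The predictive variances returned alongside the means are the kind of uncertainty estimate the abstract mentions: locations with the largest posterior variance are natural candidates when deciding where to place the next sensor.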

    Managing big data experiments on smartphones

    The explosive growth in the number of smartphones with ever-growing sensing and computing capabilities has brought a paradigm shift to many traditional domains of the computing field. Re-programming smartphones and instrumenting them for application testing and data gathering at scale is currently a tedious and time-consuming process that poses significant logistical challenges. Next-generation smartphone applications are expected to be much larger in scale and complexity, demanding that they undergo evaluation and testing under different real-world datasets, devices and conditions. In this paper, we present an architecture for managing such large-scale data management experiments on real smartphones. In particular, we present the building blocks of our architecture, which encompass smartphone sensor data collected by the crowd and organized in our big data repository. These datasets can then be replayed on our testbed, comprising real and simulated smartphones accessible to developers through a web-based interface. We demonstrate the applicability of our architecture through a case study that involves the evaluation of individual components of a complex indoor positioning system for smartphones, coined Anyplace, which we have developed over the years. The study shows how our architecture allows us to derive novel insights into the performance of our algorithms and applications by simplifying the management of large-scale data on smartphones.
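
    A minimal sketch of the dataset-replay idea behind such a testbed, assuming a simple trace format of timestamped sensor readings; the trace contents and the `deliver` callback below are hypothetical illustrations, not the actual Anyplace or testbed API.

```python
import time

# Hypothetical trace: (timestamp in seconds, sensor name, payload), as it might
# be exported from a crowd-sourced sensor data repository.
trace = [
    (0.00, "accelerometer", (0.01, 9.79, 0.12)),
    (0.02, "wifi", {"bssid": "aa:bb:cc:dd:ee:ff", "rssi": -61}),
    (0.05, "accelerometer", (0.03, 9.81, 0.10)),
]

def replay(trace, deliver, speedup=1.0):
    """Re-deliver recorded readings while preserving their relative timing."""
    start = time.monotonic()
    t0 = trace[0][0]
    for ts, sensor, payload in trace:
        delay = (ts - t0) / speedup - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        deliver(sensor, payload)   # hand the reading to the component under test

# Example consumer: print instead of feeding a positioning algorithm.
replay(trace, lambda sensor, payload: print(sensor, payload), speedup=10.0)
```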

    Spatial and Temporal Modeling for Human Activity Recognition from Multimodal Sequential Data

    Human Activity Recognition (HAR) has been an intense research area for more than a decade. Different sensors, ranging from 2D and 3D cameras to accelerometers, gyroscopes and magnetometers, have been employed to generate multimodal signals for detecting various human activities. With the advancement of sensing technology and the popularity of mobile devices, depth cameras and wearable devices, such as Microsoft Kinect and smart wristbands, open an unprecedented opportunity to solve the challenging HAR problem by learning expressive representations from multimodal signals that record huge amounts of daily activities covering a rich set of categories. Although competitive performance has been reported, existing methods focus on the statistical or spatial representation of the human activity sequence, while its internal temporal dynamics are not sufficiently exploited. As a result, they often struggle to recognize visually similar activities composed of dynamic patterns in different temporal order. In addition, many model-driven methods based on sophisticated features and carefully designed classifiers are computationally demanding and unable to scale to large datasets. In this dissertation, we address these challenges from three different perspectives: 3D spatial relationship modeling, dynamic temporal quantization, and temporal order encoding.

    We propose a novel octree-based algorithm for computing the 3D spatial relationships between objects in a 3D point cloud captured by a Kinect sensor. A set of 26 3D spatial directions is defined to describe the spatial relationship of an object with respect to a reference object. These directions are implemented as spatial operators, such as AboveSouthEast and BelowNorthWest, of an event query language used to query human activities in an indoor environment, for example "a person walks in the hallway from north to south". The performance is quantitatively evaluated on a public RGB-D object dataset and qualitatively investigated in a live video computing platform.

    To address the challenge of temporal modeling in human action recognition, we introduce dynamic temporal quantization, a clustering-like algorithm that quantizes human action sequences of varied lengths into fixed-size quantized vectors. A two-step optimization algorithm jointly optimizes the quantization of the original sequence. In the aggregation step, frames falling into the same segment are aggregated by max-pooling to produce the quantized representation of that segment. In the assignment step, the frame-segment assignment is updated according to dynamic time warping, while the temporal order of the entire sequence is preserved. The proposed technique is evaluated on three public 3D human action datasets and achieves state-of-the-art performance.

    Finally, we propose a novel temporal order encoding approach that models the temporal dynamics of sequential data for human activity recognition. The algorithm encodes the temporal order of the latent patterns extracted by subspace projection and generates a highly compact First-Take-All (FTA) feature vector representing the entire sequence. An optimization algorithm is further introduced to learn projections that increase the discriminative power of the FTA feature. The compactness of the FTA feature makes it extremely efficient for human activity recognition with nearest neighbor search based on Hamming distance. Experimental results on two public human activity datasets demonstrate the advantages of the FTA feature over state-of-the-art methods in both accuracy and efficiency.
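
    The following sketch illustrates one plausible reading of the First-Take-All idea, assuming that a sequence is projected onto a few latent directions and the code bits record which direction reaches its peak first; the random projections, activity labels and toy sequences are hypothetical stand-ins for the learned projections and real activity data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fta_code(seq, projections):
    """Encode a (T x D) sequence by the order in which projected latent
    patterns reach their maximum over time (a First-Take-All style code)."""
    resp = seq @ projections              # (T x P) responses over time
    peak_t = resp.argmax(axis=0)          # frame index where each projection peaks
    p = projections.shape[1]
    # One bit per projection pair: does projection i peak before projection j?
    bits = [int(peak_t[i] < peak_t[j]) for i in range(p) for j in range(i + 1, p)]
    return np.array(bits, dtype=np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Hypothetical variable-length 3-D sequences standing in for skeleton/inertial data.
D, P = 3, 8
W = rng.standard_normal((D, P))           # stand-in for learned projections
database = {
    "wave": np.cumsum(rng.standard_normal((40, D)), axis=0),
    "punch": np.cumsum(rng.standard_normal((55, D)), axis=0),
}
query = np.cumsum(rng.standard_normal((47, D)), axis=0)

codes = {label: fta_code(seq, W) for label, seq in database.items()}
q = fta_code(query, W)
nearest = min(codes, key=lambda label: hamming(codes[label], q))
print("nearest neighbour by Hamming distance:", nearest)
```

    Because the resulting code is binary and short, nearest neighbour search reduces to cheap Hamming-distance comparisons, which is the efficiency argument made above.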

    From data acquisition to data fusion: a comprehensive review and a roadmap for the identification of activities of daily living using mobile devices

    This paper reviews the state of the art in sensor data fusion techniques applied to the sensors embedded in mobile devices, as a means to help identify the daily activities of the device user. Sensor data fusion techniques consolidate the data collected from several sensors, increasing the reliability of the algorithms that identify the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art and to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices with the aim of identifying activities of daily living (ADLs).
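
    As a small, hedged illustration of the kind of feature-level fusion such a review covers, the sketch below concatenates simple statistics from accelerometer and gyroscope windows and classifies them with a tiny nearest-centroid model that keeps memory and compute low; the window size, features, classifier and synthetic data are illustrative assumptions, not techniques prescribed by the paper.

```python
import numpy as np

def window_features(acc, gyro):
    """Feature-level fusion: concatenate mean and standard deviation per axis
    from an accelerometer window and a gyroscope window (each window_len x 3)."""
    feats = []
    for signal in (acc, gyro):
        feats.extend(signal.mean(axis=0))
        feats.extend(signal.std(axis=0))
    return np.array(feats)

class NearestCentroid:
    """Tiny classifier with a footprint suited to constrained mobile devices."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.array([X[np.array(y) == c].mean(axis=0) for c in self.labels])
        return self
    def predict(self, x):
        return self.labels[int(np.argmin(np.linalg.norm(self.centroids - x, axis=1)))]

# Hypothetical training windows for two ADLs: "walking" (high motion energy)
# versus "sitting" (low motion energy).
rng = np.random.default_rng(1)
walk = [window_features(rng.normal(0, 2.0, (50, 3)), rng.normal(0, 1.0, (50, 3))) for _ in range(20)]
sit = [window_features(rng.normal(0, 0.1, (50, 3)), rng.normal(0, 0.05, (50, 3))) for _ in range(20)]
X = np.vstack(walk + sit)
y = ["walking"] * 20 + ["sitting"] * 20

clf = NearestCentroid().fit(X, y)
test = window_features(rng.normal(0, 2.0, (50, 3)), rng.normal(0, 1.0, (50, 3)))
print(clf.predict(test))   # expected: "walking"
```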

    RFID Localisation For Internet Of Things Smart Homes: A Survey

    The Internet of Things (IoT) enables numerous business opportunities in fields as diverse as e-health, smart cities and smart homes, among many others. The IoT incorporates multiple long-range, short-range and personal area wireless networks and technologies into the designs of IoT applications. Localisation in indoor positioning systems plays an important role in the IoT; location-based IoT applications range from tracking objects and people in real time, asset management, agriculture and assisted monitoring technologies for healthcare to smart homes, to name a few. Radio frequency based indoor positioning systems such as Radio Frequency Identification (RFID) are a key enabling technology for the IoT due to their cost-effectiveness, high read rates, automatic identification and, importantly, their energy efficiency. This paper reviews state-of-the-art RFID technologies for IoT smart home applications. It presents several comparative studies of RFID-based projects in smart homes and discusses the applications, techniques, algorithms and challenges of adopting RFID technologies in IoT smart home systems.
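
    One concrete localisation technique that RFID surveys of this kind typically cover is the LANDMARC-style nearest-reference-tag scheme sketched below, in which the unknown tag's position is estimated as a weighted average of the k reference tags whose RSSI signatures are most similar; the reader count, RSSI values and positions are made up purely for illustration.

```python
import numpy as np

# Hypothetical RSSI readings (dBm) of reference tags at known positions,
# as seen by the same four RFID readers that also see the tracked tag.
ref_pos = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0], [2.0, 2.0]])
ref_rssi = np.array([
    [-48, -62, -63, -70],
    [-61, -47, -71, -64],
    [-63, -72, -49, -60],
    [-70, -63, -61, -46],
    [-57, -58, -59, -57],
])
tag_rssi = np.array([-55, -57, -60, -58])   # tracked tag, same four readers

def landmarc_estimate(tag_rssi, ref_rssi, ref_pos, k=3):
    """Weight the k reference tags whose RSSI signatures are closest to the tag's."""
    d = np.linalg.norm(ref_rssi - tag_rssi, axis=1)   # signature distance per reference tag
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] ** 2 + 1e-9)                # LANDMARC-style 1/E^2 weighting
    w /= w.sum()
    return w @ ref_pos[nearest]

print("estimated position:", landmarc_estimate(tag_rssi, ref_rssi, ref_pos))
```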