Discovering user mobility and activity in smart lighting environments
"Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging "Internet-of-Things" technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using several sensing modalities and analytical techniques. The dissertation encompasses two studies: a first using body-worn inertial sensors, followed by a second using smart-lighting-inspired infrastructure sensors deployed with the lights.
The first approach employs wearable inertial sensors and body area networks that monitor human activities through a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to reduce the risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elderly subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently with high test-retest reliability (p-value < 0.001) on the 297-subject dataset.
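The forward-lean estimation in step (1) can be illustrated with a minimal accelerometer-tilt sketch. This is an assumed formulation, not the dissertation's algorithm: it supposes a trunk-worn 3-axis accelerometer whose z-axis aligns with gravity when upright, and the 30-degree alert threshold is purely illustrative.

```python
import math

LEAN_THRESHOLD_DEG = 30.0  # hypothetical alert threshold, not from the dissertation

def forward_lean_angle(ax, ay, az):
    """Estimate trunk forward-lean angle (degrees) from a trunk-worn
    accelerometer, assuming az points along gravity when standing upright."""
    return math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))

def excess_lean(ax, ay, az, threshold=LEAN_THRESHOLD_DEG):
    """Flag a sample as excess forward lean when the tilt exceeds the threshold."""
    return forward_lean_angle(ax, ay, az) > threshold
```

With gravity entirely on the z-axis (upright), the estimated lean is 0 degrees; with gravity entirely on the forward x-axis, it is 90 degrees.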
The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use-case-oriented design methodology guides the choice of sensor operating parameters with respect to localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework. Based on indoor location information, a label-free clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected while users perform unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that activity recognition performance, measured as correct classification rate (CCR), ranges from approximately 90% to 100% across a wide range of spatio-temporal resolutions on these location datasets, and is insensitive to reconfiguration of the environment layout and the presence of multiple users.
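As a rough illustration of label-free activity discovery from location traces, one can quantise position estimates onto a grid and rank dwell counts; dense cells suggest recurring activity areas. This naive sketch is not the dissertation's clustering method, and the 0.5 m cell size is a hypothetical parameter.

```python
from collections import Counter

def activity_zones(locations, cell=0.5):
    """Quantise (x, y) location estimates (metres) into grid cells and
    rank them by dwell count, most-visited first.

    cell: grid resolution in metres (hypothetical tuning value)."""
    counts = Counter((round(x / cell), round(y / cell)) for x, y in locations)
    return counts.most_common()
```

A real label-free method would cluster in space and time jointly; this grid pass only conveys the idea of discovering frequented regions without activity labels.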
Mobility increases localizability: A survey on wireless indoor localization using inertial sensors
Wireless indoor positioning has been extensively studied for the past two decades and continues to attract growing research effort in the mobile computing community. With the integration of multiple inertial sensors (e.g., accelerometer, gyroscope, and magnetometer) into today's smartphones, human-centric mobility sensing is emerging and coming into vogue. Mobility information, as a new dimension in addition to wireless signals, can benefit localization in a number of ways, since location and mobility are by nature related in the physical world. In this article, we survey this new trend of mobility-enhanced smartphone-based indoor localization. Specifically, we first study how to measure human mobility: what types of sensors we can use and what types of mobility information we can acquire. Next, we discuss how mobility assists localization with respect to enhancing location accuracy, decreasing deployment cost, and enriching location context. Moreover, considering the quality and cost of smartphone built-in sensors, handling measurement errors is essential and is accordingly investigated. Combining existing work with our own experience, we emphasize the underlying principles and conduct a comparative study of the mainstream technologies. Finally, we conclude this survey by addressing future research directions and opportunities in this new and largely open area.
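One of the simplest mobility measurements in this space is step counting from accelerometer data. A naive threshold-crossing sketch is shown below; the 1.2 g threshold is an illustrative tuning value, and production pedometers add filtering and timing constraints.

```python
import math

def count_steps(samples, threshold=1.2, gravity=9.81):
    """Naive step counter over (ax, ay, az) accelerometer samples in m/s^2:
    count upward crossings of the acceleration magnitude through
    threshold * g. threshold is a hypothetical tuning value."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax**2 + ay**2 + az**2)
        if mag > threshold * gravity and not above:
            steps += 1          # rising edge: one impact peak = one step
            above = True
        elif mag <= threshold * gravity:
            above = False       # fell back below threshold; re-arm
    return steps
```

Paired with a step-length model and heading from gyroscope/magnetometer, this yields the dead-reckoning displacement that such surveys describe fusing with wireless signals.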
Altering User Movement Behaviour in Virtual Environments.
In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first set focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects with those paired to tangible objects, for example barring an area with walls or obstacles. We designed a study in which participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performance to determine whether their trajectories deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the greatest impact on trajectories, with a mean deviation from the shortest route of 60 cm versus 37 cm for environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
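The deviation metric reported above can be sketched as the mean perpendicular distance of sampled path points from the straight start-goal segment (the shortest route). This is an assumed formulation for illustration, not necessarily the paper's exact measure.

```python
import math

def mean_deviation(path, start, goal):
    """Mean perpendicular distance (same units as input) of sampled path
    points from the straight segment start -> goal."""
    (x1, y1), (x2, y2) = start, goal
    seg = math.hypot(x2 - x1, y2 - y1)
    # Point-to-line distance via the 2D cross product, normalised by segment length.
    devs = [abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / seg
            for x, y in path]
    return sum(devs) / len(devs)
```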
Pedestrian detection and tracking using stereo vision techniques
Automated pedestrian detection, counting and tracking has received significant attention from the computer vision community of late. Many of the person detection techniques described so far in the literature work well in controlled environments, such as laboratory settings with a small number of people. This allows various assumptions to be made that simplify this complex problem. The performance of these techniques, however, tends to deteriorate when presented with unconstrained environments where pedestrian appearances, numbers, orientations, movements, occlusions and lighting conditions violate these convenient assumptions. Recently, 3D stereo information has been proposed as a means to overcome some of these issues and to guide pedestrian detection. This thesis presents such an approach, whereby, after obtaining robust 3D information via a novel disparity estimation technique, pedestrian detection is performed via a 3D point clustering process within a region-growing framework. This clustering process avoids hard thresholds by using biometrically inspired constraints and a number of plan-view statistics. This pedestrian detection technique requires no external training and is able to robustly handle challenging real-world unconstrained environments from various camera positions and orientations. In addition, this thesis presents a continuous detect-and-track approach, with additional kinematic constraints and explicit occlusion analysis, to obtain robust temporal tracking of pedestrians over time. These approaches are experimentally validated using challenging datasets consisting of both synthetic data and real-world sequences gathered from a number of environments. In each case, the techniques are evaluated using both 2D and 3D ground-truth methodologies.
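The 3D point clustering step can be illustrated with a basic region-growing pass over a point cloud: points within a radius of any cluster member join that cluster. This is a generic sketch, not the thesis's method; the 0.3 m radius is a hypothetical parameter, and a real system would use a spatial index rather than a linear neighbour scan.

```python
def region_grow(points, radius=0.3):
    """Group 3D points (x, y, z) into clusters by region growing:
    any unassigned point within `radius` of a cluster member joins it."""
    r2 = radius ** 2
    clusters, assigned = [], [False] * len(points)
    for seed in range(len(points)):
        if assigned[seed]:
            continue
        stack, cluster = [seed], []
        assigned[seed] = True
        while stack:
            i = stack.pop()
            cluster.append(points[i])
            xi, yi, zi = points[i]
            # Linear scan for neighbours; replace with a k-d tree in practice.
            for j, (xj, yj, zj) in enumerate(points):
                if not assigned[j] and (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 <= r2:
                    assigned[j] = True
                    stack.append(j)
        clusters.append(cluster)
    return clusters
```

The thesis's biometric constraints and plan-view statistics would then filter these raw clusters into pedestrian candidates.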
Recent advances in monocular model-based tracking: a systematic literature review
In this paper, we review the advances of monocular model-based tracking over the ten-year period up to 2014. In 2005, Lepetit et al. [19] reviewed the status of monocular model-based rigid-body tracking. Since then, direct 3D tracking has become quite a popular research area, but monocular model-based tracking should still not be forgotten. We mainly focus on tracking that could be applied to augmented reality, but some other applications are covered as well. Given the wide subject area, this paper tries to give a broad view of the research that has been conducted, giving the reader an introduction to the different disciplines that are tightly related to model-based tracking. The work has been conducted by searching well-known academic search databases in a systematic manner and by selecting certain publications for closer examination. We analyze the results by dividing the found papers into different categories according to their implementation approach. Issues which have not yet been solved are discussed. We also discuss emerging model-based methods, such as fusing different types of features and region-based pose estimation, which could show the way for future research on this subject.
Pedestrian localisation for indoor environments
Ubiquitous computing systems aim to assist us as we go about our daily lives, whilst at the same time fading into the background so that we do not notice their presence. To do this they need to be able to sense their surroundings and infer context about the state of the world. Location has proven to be an important source of contextual information for such systems. If a device can determine its own location then it can infer its surroundings and adapt accordingly.
Of particular interest for many ubiquitous computing systems is the ability to track people in indoor environments. This interest has led to the development of many indoor location systems based on a range of technologies including infra-red light, ultrasound and radio. Unfortunately existing systems that achieve the kind of sub-metre accuracies desired by many location-aware applications require large amounts of infrastructure to be installed into the environment.
This thesis investigates an alternative approach to indoor pedestrian tracking that uses on-body inertial sensors rather than relying on fixed infrastructure. It is demonstrated that general-purpose inertial navigation algorithms are unsuitable for pedestrian tracking due to the rapid accumulation of errors in the tracked position. In practice it is necessary to frequently correct such algorithms using additional measurements or constraints. An extended Kalman filter is developed for this purpose and applied to track pedestrians using foot-mounted inertial sensors. By detecting when the foot is stationary and applying zero-velocity corrections, a pedestrian's relative movements can be tracked far more accurately than is possible using uncorrected inertial navigation.
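The stance detection that triggers zero-velocity corrections can be sketched as a simple per-sample test on a foot-mounted IMU: low angular rate and an acceleration magnitude near gravity suggest the foot is planted. The thresholds below are illustrative, not the thesis's tuned values; samples flagged stationary would feed a zero-velocity (v = 0) measurement to the extended Kalman filter.

```python
def stance_phases(gyro_mags, accel_mags, g=9.81,
                  gyro_thresh=0.6, accel_tol=0.8):
    """Flag samples where a foot-mounted IMU is likely stationary.

    gyro_mags: angular-rate magnitudes (rad/s); accel_mags: acceleration
    magnitudes (m/s^2). A sample counts as stance when rotation is slow
    and acceleration is close to gravity alone. Thresholds are
    illustrative assumptions."""
    return [w < gyro_thresh and abs(a - g) < accel_tol
            for w, a in zip(gyro_mags, accel_mags)]
```

Practical detectors also require the condition to hold over a short window to avoid spurious zero-velocity updates at mid-swing.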
Having developed an effective means of calculating a pedestrian's relative movements, a localisation filter is developed that combines relative movement measurements with environmental constraints derived from a map of the environment. By enforcing constraints such as impassable walls and floors, the filter is able to narrow down the absolute position of a pedestrian as they move through an indoor environment. Once the user's position has been uniquely determined, the same filter is demonstrated to track the user's absolute position to sub-metre accuracy.
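Map-constrained localisation of this kind is commonly realised with a particle filter: each particle is a position hypothesis, and hypotheses whose step would cross an impassable wall are eliminated, so ambiguity shrinks as the user moves. A minimal sketch under that assumption (the thesis's filter may differ in detail; `walls_blocked` stands in for a wall-crossing test supplied by the building map):

```python
import random

def propagate(particles, step, walls_blocked):
    """One update of a map-constrained particle filter.

    particles: list of (x, y) position hypotheses.
    step: measured (dx, dy) displacement from the inertial tracker.
    walls_blocked(p, q): True if the segment p -> q crosses an
    impassable wall (assumed to come from the building map)."""
    survivors = []
    for x, y in particles:
        dx, dy = step
        nx = x + dx + random.gauss(0, 0.05)  # small noise models step-measurement drift
        ny = y + dy + random.gauss(0, 0.05)
        if not walls_blocked((x, y), (nx, ny)):
            survivors.append((nx, ny))       # hypothesis consistent with the map
    return survivors
```

A full filter would also resample to keep the particle count steady; here the pruning step alone shows how wall constraints narrow the position estimate.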
The localisation filter in its simplest form is computationally expensive. Furthermore, symmetry exhibited by the environment may delay or prevent the filter from determining the user's position. The final part of this thesis describes the concept of assisted localisation, in which additional measurements are used to solve both of these problems. The use of sparsely deployed WiFi access points is discussed in detail.
The thesis concludes that inertial sensors can be used to track pedestrians in indoor environments. Such an approach is suited to cases in which it is impossible or impractical to install large amounts of fixed infrastructure into the environment in advance.
- …