49 research outputs found

    Improving the Efficacy of Context-Aware Applications

    In this dissertation, we explore methods for enhancing the context-awareness capabilities of modern computers, including mobile devices, tablets, wearables, and traditional computers. Advancements include proposed methods for fusing information from multiple logical sensors, localizing nearby objects using depth sensors, and building models to better understand the content of 2D images. First, we propose a system called Unagi, designed to incorporate multiple logical sensors into a single framework that allows context-aware application developers to easily test new ideas and create novel experiences. Unagi is responsible for collecting data, extracting features, and building personalized models for each individual user. We demonstrate the utility of the system with two applications: adaptive notification filtering and a network content prefetcher. We also thoroughly evaluate the system with respect to predictive accuracy, temporal delay, and power consumption. Next, we discuss a set of techniques that can be used to accurately determine the location of objects near a user in 3D space using a mobile device equipped with both depth and inertial sensors. Using a novel chaining approach, we are able to locate objects farther away than the standard range of the depth sensor without compromising localization accuracy. Empirical testing shows our method is capable of localizing objects 30 m from the user with an error of less than 10 cm. Finally, we demonstrate a set of techniques that allow a multi-layer perceptron (MLP) to learn resolution-invariant representations of 2D images, including an MCMC-based technique to improve the selection of pixels for the mini-batches used in training. We also show that a deep convolutional encoder can be trained to output a resolution-independent representation in constant time, and we discuss several potential applications of this research, including image resampling, image compression, and security.
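The chaining idea can be pictured as composing short-range depth measurements with inertially tracked orientation: each measured offset is rotated into the world frame and accumulated. The sketch below is illustrative only; the function name, the rigid-body model, and the two-hop example are assumptions for exposition, not the dissertation's implementation.

```python
import numpy as np

def chain_localize(segments):
    """Estimate a distant object's position by chaining short-range
    depth measurements (hypothetical sketch of the chaining idea).

    Each segment is (R, t): the device's rotation since the previous
    anchor (3x3, from the inertial sensors) and the offset to the next
    anchor measured by the depth sensor in the device frame.
    """
    position = np.zeros(3)
    orientation = np.eye(3)
    for R, t in segments:
        orientation = orientation @ R          # accumulate inertial rotation
        position += orientation @ np.asarray(t, dtype=float)
    return position

# Two 5 m hops along the device's x-axis, with a 90-degree yaw between them
yaw90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
p = chain_localize([(np.eye(3), [5.0, 0.0, 0.0]),
                    (yaw90,     [5.0, 0.0, 0.0])])
```

Because each hop stays within the depth sensor's native range, error accumulates per hop rather than growing with the sensor-to-object distance, which is what makes long-range localization with a short-range sensor plausible.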

    RGB-W: When Vision Meets Wireless

    Inspired by the recent success of RGB-D cameras, we propose enriching RGB data with an additional "quasi-free" modality, namely the wireless signal (e.g., Wi-Fi or Bluetooth) emitted by individuals' cell phones, referred to as RGB-W. The received signal strength acts as a rough proxy for depth and a reliable cue to each person's identity. Although the measured signals are highly noisy (more than 2 m average localization error), we demonstrate that combining visual and wireless data significantly improves localization accuracy. We introduce a novel image-driven representation of wireless data that embeds all received signals onto a single image. We then demonstrate the ability of this additional data to (i) locate persons within a sparsity-driven framework and (ii) track individuals with a new confidence measure on the data association problem. Our solution outperforms existing localization methods by a significant margin. It can be applied to the millions of currently installed RGB cameras to better analyze human behavior and to offer the next generation of high-accuracy location-based services.
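Received signal strength is commonly turned into a coarse range estimate by inverting a log-distance path-loss model, which is one way to see how it can act as a "rough proxy for depth." The function below is a minimal sketch; the reference power at 1 m and the path-loss exponent are illustrative assumptions, not the paper's calibration.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Invert the log-distance path-loss model to get a coarse range
    in metres. tx_power_dbm is the RSSI expected at 1 m; both default
    constants are illustrative assumptions, not calibrated values."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

d_near = rssi_to_distance(-40.0)   # at the reference power: 1 m
d_far = rssi_to_distance(-65.0)    # 25 dB weaker: 10 m under this model
```

The steep noise of real indoor RSSI (multipath, body shadowing) is why such a proxy is only "rough" and benefits from fusion with the visual data.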

    Location estimation and collective inference in indoor spaces using smartphones

    In the last decade, smart services based on indoor localization have become very popular in public spaces (retail stores, malls, museums, and warehouses). Existing approaches range from state-of-the-art RSSI techniques to more accurate CSI techniques for inferring indoor location. Over the past year, the pandemic has raised the important challenge of determining whether a pair of individuals are "social distancing," i.e., separated by more than 6 ft. Most solutions have used "presence" (whether one device can hear another), which is a poor proxy for distance, since devices can be heard well beyond the 6 ft social-distancing radius and across aisles and walls. Here we ask the key question: what needs to be added to current indoor localization solutions so that they can easily be deployed in scenarios such as reliable contact tracing? We identify three main limitations: deployability, accuracy, and privacy. Localization solutions need to run on ubiquitous devices such as smartphones; they should be accurate under different environmental conditions; and they need to respect a person's privacy settings. Our main contributions are twofold. First, we introduce a new statistical feature for localization, Packet Reception Probability (PRP), which correlates with distance and differs from other physical measures of distance such as CSI or RSSI. PRP can easily be deployed on smartphones (unlike CSI) and is more accurate than RSSI. Second, we develop a crowd tool to audit the level of location surveillance in a space, which is the first step towards achieving privacy. Specifically, we first solve a location estimation problem with the help of infrastructure devices (mainly Bluetooth Low Energy, or BLE, devices). BLE has turned out to be a key contact tracing technology during the pandemic. We have identified three fundamental limitations of BLE RSSI: biased RSSI estimates due to packet loss, mean RSSI de-correlated with distance due to high packet loss in BLE, and the well-known multipath effects.
We built the new localization feature, Packet Reception Probability (PRP), to solve the packet loss problem in RSSI. PRP measures the probability that a receiver successfully receives packets from the transmitter, and we show through empirical experiments that PRP encodes distance. We also incorporate a new stack-based model of multipath into our framework. We evaluated B-PRP in two real-world public places, an academic library and a retail store. PRP gives significantly lower errors than RSSI, and fusing PRP with RSSI further improves the overall localization accuracy over PRP alone. Next, we solve a peer-to-peer distance estimation problem that uses minimal infrastructure. Most apps, such as Aarogya Setu and BlueTrace, infer peer-to-peer distances from the presence of Bluetooth Low Energy (BLE) signals. Apps that rely on pairwise measurements such as RSSI suffer from latent factors such as the device's position on the human body, the orientation of the people carrying the devices, and environmental multipath effects. We propose two solutions: using known distances, and collaboration, to solve for distances more robustly. First, if a few infrastructure devices are installed at known locations in an environment, we can make additional measurements with them and use the known distances between them to constrain the unknown distances in a triangle-inequality framework. Second, in an outdoor environment where we cannot install infrastructure devices, people can collaborate to jointly constrain many unknown distances. Finally, we solve a collaborative tracking estimation problem in which people audit the properties of the localization infrastructure. While people want services, they do not want to be surveilled; moreover, people using an indoor location system do not know the current level of surveillance.
The granularity of the location information that the system collects about people depends on the nature of the infrastructure. Our system, CrowdEstimator, provides a tool that lets people harness their collective power and collect traces for inferring the level of surveillance. We further propose the insight that surveillance is not a single number but a spatial map, and we introduce active learning algorithms to infer all parts of the spatial map with uniform accuracy. Auditing the location infrastructure is the first step towards the bigger goal of declarative privacy, where a person can specify their comfortable level of surveillance.
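Two of the ideas above can be sketched in a few lines: PRP as the empirical fraction of packets received over a window, and the triangle inequality used to tighten a noisy pairwise distance against a known anchor-pair distance. Function names and signatures are hypothetical; the thesis's actual estimators (including the multipath model) are more elaborate.

```python
def packet_reception_probability(received, expected):
    """PRP as described above: the empirical probability that the
    receiver successfully hears the transmitter's packets in a window."""
    if expected <= 0:
        raise ValueError("expected packet count must be positive")
    return received / expected

def clamp_by_triangle(d_est, d_anchor_pair, d_to_other_anchor):
    """Tighten a noisy distance estimate using the triangle inequality:
    given the known distance between two anchors and a distance to one
    of them, the distance to the other must lie in [|a-b|, a+b].
    A sketch of the constraint, not the full optimization."""
    lo = abs(d_to_other_anchor - d_anchor_pair)
    hi = d_to_other_anchor + d_anchor_pair
    return min(max(d_est, lo), hi)

prp = packet_reception_probability(80, 100)          # 80 of 100 heard
d = clamp_by_triangle(12.0, 3.0, 5.0)                # clipped to <= 8
```

Because PRP is computed from packet counts rather than per-packet signal strength, it remains well defined even when heavy packet loss biases mean RSSI, which is the failure mode the abstract identifies.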

    Wi-Fi based people tracking in challenging environments

    People tracking is a key building block in many applications, such as abnormal activity detection, gesture recognition, and monitoring of elderly people. Video-based systems have many limitations that make them ineffective in many situations. Wi-Fi provides an easily accessible source of opportunity for people tracking that does not share the limitations of video-based systems. The system detects, localises, and tracks people based on the available Wi-Fi signals reflected from their bodies. Wi-Fi based systems still need to address several challenges before they can operate in difficult environments, including the detection of weak signals, the detection of abrupt people motion, and the presence of multipath propagation. This thesis addresses these three main challenges. Firstly, a weak-signal detection method is proposed that uses the changes in the signals reflected from static objects to improve the detection probability of the weak signals reflected from a person's body. Then, a deep learning based Wi-Fi localisation technique is proposed that significantly improves both runtime and accuracy in comparison with existing techniques. After that, a quantum-mechanics-inspired tracking method is proposed to address the abrupt-motion problem; it exploits phenomena from the quantum world by allowing the tracked person to exist at multiple positions simultaneously. The results show a significant improvement in reducing both the tracking error and the tracking delay.
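One way to picture the weak-signal idea, detecting a faint human reflection against the slowly varying reflections of static objects, is background subtraction with an adaptive running mean and a k-sigma threshold. The sketch below is illustrative only: the thesis operates on Wi-Fi channel measurements, and the frame format, `alpha`, and `k` here are assumptions.

```python
import numpy as np

def detect_weak_reflection(frames, alpha=0.05, k=3.0):
    """Background-subtraction sketch: track a running mean of the
    static-reflection signal and flag frames whose residual exceeds
    a k-sigma threshold. alpha and k are illustrative constants."""
    bg = frames[0].astype(float)
    residuals = []
    for f in frames[1:]:
        residuals.append(np.linalg.norm(f - bg))
        bg = (1 - alpha) * bg + alpha * f     # slowly adapt the background
    residuals = np.array(residuals)
    thresh = residuals.mean() + k * residuals.std()
    return residuals > thresh

# Synthetic stream: static background with one brief reflection burst
frames = [np.zeros(4)] * 10 + [np.full(4, 10.0)] + [np.zeros(4)] * 9
flags = detect_weak_reflection(np.array(frames))
```

Subtracting the adapted background removes the strong static clutter, so even a small person-induced change stands out in the residual, which is the intuition behind using changes in static-object reflections to boost detection probability.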

    Energy-efficient Continuous Context Sensing on Mobile Phones

    With the ever increasing adoption of smartphones worldwide, researchers have found the perfect sensor platform to perform context-based research and to prepare context-based services for deployment to end-users. However, continuous context sensing imposes a considerable challenge in balancing the energy consumption of the sensors, the accuracy of the recognized context, and its latency. After outlining the common characteristics of continuous sensing systems, we present a detailed overview of the state of the art, from sensor sub-systems to context inference algorithms. Then, we present the three main contributions of this thesis. The first approach is based on using local communication to exchange sensing information with neighboring devices. As proximity, location, and environmental information can be obtained from nearby smartphones, we design a protocol for synchronizing the exchanges and fairly distributing the sensing tasks. We show both theoretically and experimentally the reduction in energy needed when devices can collaborate. The second approach focuses on how to schedule mobile sensors, optimizing for both accuracy and energy. We formulate the optimal sensing problem as a decision problem and propose a two-tier framework for approximating its solution. The first tier is responsible for segmenting the sensor measurement time series by fitting various models. The second tier estimates the optimal sampling, selecting the measurements that contribute the most to model accuracy. We provide near-optimal heuristics for both tiers and evaluate their performance using environmental sensor data. In the third approach, we propose an online algorithm that identifies repeated patterns in time series and produces a compressed symbolic stream. The first symbolic transformation is based on clustering the raw sensor data, while subsequent iterations encode repetitive sequences of symbols into new symbols.
We also define a metric to evaluate symbolization methods with regard to their capacity to preserve the system's states, and we show that the output symbols can be used directly for various data mining tasks, such as classification or forecasting, with little impact on accuracy but greatly reduced complexity and running time. In addition, we present an example application, assessing the user's exposure to air pollutants, which demonstrates the many opportunities to enhance contextual information when fusing sensor data from different sources. On one side, we gather fine-grained air quality information from mobile sensor deployments and aggregate it with an interpolation model; on the other, we continuously capture the user's context, including location, activity, and surrounding air quality. We also present the models used for fusing all this information to produce the exposure estimate.
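The symbolization pipeline can be sketched as two steps: map raw samples to symbols (here by simple quantization, standing in for the clustering step) and then collapse repeated symbols, the simplest form of the iterative re-encoding described above. Names and the boundary values are hypothetical.

```python
import bisect

def symbolize(series, boundaries):
    """Quantize raw samples into symbols (a stand-in for the clustering
    step), then collapse runs of repeated symbols -- a much-simplified
    sketch of the iterative symbolic compression described above."""
    symbols = [chr(ord('a') + bisect.bisect(boundaries, x)) for x in series]
    compressed = [symbols[0]]
    for s in symbols[1:]:
        if s != compressed[-1]:       # drop consecutive repeats
            compressed.append(s)
    return ''.join(compressed)

# Samples below/above a 0.5 boundary become 'a'/'b'; runs are collapsed
stream = symbolize([0.1, 0.2, 0.9, 0.95, 0.15], [0.5])
```

Downstream tasks such as classification can then operate on the short symbol string instead of the raw samples, which is where the complexity and running-time savings come from.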

    Context Awareness for Navigation Applications

    This thesis examines the topic of context awareness for navigation applications and asks the question, “What are the benefits and constraints of introducing context awareness in navigation?” Context awareness can be defined as a computer’s ability to understand the situation or context in which it is operating. In particular, we are interested in how context awareness can be used to understand the navigation needs of people using mobile computers, such as smartphones, but context awareness can also benefit other types of navigation users, such as maritime navigators. There are countless other potential applications of context awareness, but this thesis focuses on applications related to navigation. For example, if a smartphone-based navigation system can understand when a user is walking, driving a car, or riding a train, then it can adapt its navigation algorithms to improve positioning performance. We argue that the primary set of tools available for generating context awareness is machine learning. Machine learning is, in fact, a collection of many different algorithms and techniques for developing “computer systems that automatically improve their performance through experience” [1]. This thesis examines systematically the ability of existing algorithms from machine learning to endow computing systems with context awareness. Specifically, we apply machine learning techniques to tackle three different tasks related to context awareness and having applications in the field of navigation: (1) to recognize the activity of a smartphone user in an indoor office environment, (2) to recognize the mode of motion that a smartphone user is undergoing outdoors, and (3) to determine the optimal path of a ship traveling through ice-covered waters. The diversity of these tasks was chosen intentionally to demonstrate the breadth of problems encompassed by the topic of context awareness. 
During the course of studying context awareness, we adopted two conceptual "frameworks," which we find useful for solidifying the abstract concepts of context and context awareness. The first framework is based strongly on the writings of Hermagoras of Temnos, a rhetorician from Hellenistic Greece, who defined seven elements of "circumstance"; we adopt these seven elements to describe contextual information. The second framework, which we dub the "context pyramid," describes the processing of raw sensor data into contextual information in terms of six different levels. At the top of the pyramid is "rich context," where the information is expressed in prose and the goal is for the computer to mimic the way a human would describe a situation. We are still a long way from computers matching a human's ability to understand and describe context, but this thesis improves the state of the art in context awareness for navigation applications. For some particular tasks, machine learning has already succeeded in outperforming humans, and in the future there are likely to be navigation tasks where computers outperform humans. One example might be the route optimization task described above: it requires fusing many different types of information in non-obvious ways, and computer algorithms may well find better routes through ice-covered waters than even well-trained human navigators. This thesis provides only preliminary evidence of this possibility, and future work is needed to further develop the techniques outlined here. The same can be said of the other two navigation-related tasks examined in this thesis.

    Compensating for Placement-Related Artifacts in Activity Recognition

    This thesis investigates how placement variations of electronic devices influence the possibility of using the sensors integrated in those devices for context recognition. The vast majority of context recognition research assumes well defined, fixed sensor locations. Although this might be acceptable for some application domains (e.g. in an industrial setting), users in general will have a hard time coping with these limitations. If one needs to remember to carry dedicated sensors and to adjust their orientation from time to time, the activity recognition system is more distracting than helpful. How can we deal with device location and orientation changes to make context sensing mainstream? This thesis presents a systematic evaluation of device placement effects in context recognition. We first deal with detecting whether a device is carried on the body or placed somewhere in the environment. If the device is placed on the body, it is useful to know on which body part. We also address how to deal with sensors changing their position and orientation during use. For each of these topics, some highlights are given in the following. Regarding environmental placement, we introduce an active sampling approach to infer symbolic object location. This approach requires only simple sensors (acceleration, sound) and no infrastructure setup. The method works for specific placements such as "on the couch" and "in the desk drawer" as well as for general location classes, such as "closed wood compartment" or "open iron surface". In the experimental evaluation we reach a recognition accuracy of 90% and above over a total of more than 1200 measurements from 35 specific locations (taken from 3 different rooms) and 12 abstract location classes. To derive the coarse device placement on the body, we present a method based solely on rotation and acceleration signals from the device, which works independently of the device orientation.
The on-body placement recognition rate is around 80% over 4 min of unconstrained motion data in the worst scenario, and up to 90% over a 2 min interval in the best scenario. We use over 30 hours of motion data for the analysis. Two special issues of device placement are orientation and displacement. This thesis proposes a set of heuristics that significantly increase the robustness of motion sensor-based activity recognition with respect to sensor displacement. We show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. We evaluate our heuristics first on a set of synthetic lower-arm motions, which are well suited to illustrate the strengths and limits of our approach, then on an extended modes-of-locomotion problem (sensors on the upper leg), and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example, our heuristics raise the recognition rate for a displaced accelerometer from 24% to 82%, compared with 96% when not displaced.
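A common building block for orientation tolerance, used here as an illustrative stand-in for the thesis's heuristics rather than a reproduction of them, is to compute features on the acceleration magnitude, which is unchanged when the sensor is rotated on the body part. The function name and feature set are assumptions.

```python
import numpy as np

def orientation_invariant_features(acc_xyz):
    """Compute simple features on the acceleration magnitude, which is
    invariant to sensor rotation -- one widely used heuristic for
    tolerating orientation changes (illustrative, not the thesis's
    exact feature set)."""
    mag = np.linalg.norm(acc_xyz, axis=1)    # per-sample magnitude
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

# The same motion data, before and after a 90-degree sensor rotation,
# yields identical magnitude features
rng = np.random.default_rng(0)
acc = rng.normal(size=(100, 3))
yaw90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
f_orig = orientation_invariant_features(acc)
f_rot = orientation_invariant_features(acc @ yaw90.T)
```

Displacement along the same body segment is harder than pure rotation, which is why the thesis needs additional heuristics beyond rotation-invariant features.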

    Statistical Filtering for Multimodal Mobility Modeling in Cyber Physical Systems

    A cyber-physical system integrates computation with the dynamics of physical processes. It is an engineering discipline focused on technology, with a strong foundation in mathematical abstractions; it shares many of these abstractions with engineering and computer science, but still requires adaptation to suit the dynamics of the physical world. In such a dynamic system, mobility management is one of the key challenges in developing a new service. For example, when studying a new mobile network, it is necessary to simulate and evaluate a protocol before deploying it in the system. Mobility models characterize the movement patterns of mobile agents and describe the conditions under which mobile services operate. The focus of this thesis is mobility modeling in cyber-physical systems. A macroscopic model that captures the mobility of individuals (people and vehicles) can facilitate a great number of applications; one fundamental and obvious example is traffic profiling. Mobility in most systems is a dynamic process, and small non-linearities can lead to substantial errors in the model. Extensive research exists on statistical inference and filtering methods for data modeling in cyber-physical systems. In this thesis, several methods are employed for multimodal data fusion, localization, and traffic modeling, and a novel energy-aware sparse signal processing method is presented to process massive sensory data. At baseline, this research examines the application of statistical filters to mobility modeling and assesses the difficulties faced in fusing massive multimodal sensory data. A statistical framework is developed to apply the proposed methods to the measurements available in cyber-physical systems.
The proposed methods employ various statistical filtering schemes (i.e., compressive sensing, particle filtering, and kernel-based optimization) and apply them to multimodal data sets acquired from intelligent transportation systems, wireless local area networks, cellular networks, and air quality monitoring systems. Experimental results show the capability of these methods to process multimodal sensory data, providing a macroscopic mobility model of mobile agents in an energy-efficient way from inconsistent measurements.
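Of the filtering schemes mentioned, particle filtering is the easiest to sketch: predict each particle with the motion model, weight it by the measurement likelihood, and resample. The 1-D models below are illustrative assumptions, not the thesis's mobility models.

```python
import math
import random

def particle_filter_step(particles, control, measurement, noise=0.5):
    """One predict/weight/resample cycle of a 1-D particle filter --
    a minimal sketch of the statistical filtering schemes mentioned
    above; the motion and measurement models are illustrative."""
    # predict: propagate each particle by the control input plus noise
    predicted = [p + control + random.gauss(0, noise) for p in particles]
    # weight: Gaussian likelihood of the new measurement
    weights = [math.exp(-0.5 * ((measurement - p) / noise) ** 2)
               for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample: draw a new particle set proportional to the weights
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
particles = [0.0] * 500                      # agents believed to start at 0
updated = particle_filter_step(particles, control=1.0, measurement=1.0)
estimate = sum(updated) / len(updated)       # posterior mean near 1.0
```

Because the particle set represents the full posterior rather than a single point estimate, the same machinery tolerates the inconsistent, multimodal measurements the abstract emphasizes.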