
    Self-localizing Smart Cameras and Their Applications

    As the prices of cameras and computing elements continue to fall, it has become increasingly attractive to consider the deployment of smart camera networks. These networks would be composed of small, networked computers equipped with inexpensive image sensors. Such networks could be employed in a wide range of applications, including surveillance, robotics, and 3D scene reconstruction. One critical problem that must be addressed before such systems can be deployed effectively is localization: in order to take full advantage of the images gathered from multiple vantage points, it is helpful to know how the cameras in the scene are positioned and oriented with respect to each other. To address this problem, we have proposed a novel approach to localizing networks of embedded cameras and sensors. In this scheme, the cameras and the nodes are equipped with controllable light sources (either visible or infrared), which are used for signaling. Each camera node can then automatically determine the bearing to all the nodes that are visible from its vantage point. By fusing these measurements with measurements obtained from onboard accelerometers, the camera nodes are able to determine the relative positions and orientations of other nodes in the network. This localization technology can serve as a basic capability on which higher-level applications can be built. The method could be used to automatically survey the locations of sensors of interest, to implement distributed surveillance systems, or to analyze the structure of a scene based on the images obtained from multiple registered vantage points. It also provides a mechanism for integrating the imagery obtained from the cameras with the measurements obtained from distributed sensors. We have successfully used our custom-made self-localizing smart camera networks to implement a novel decentralized target tracking algorithm, create an ad-hoc range finder, and localize the components of a self-assembling modular robot.
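
    The basic measurement in this scheme is the bearing from a camera to a signaling light source. As a hedged illustration (not the authors' implementation), the sketch below back-projects a detected LED's pixel coordinates into a unit bearing vector using a simple pinhole model; the intrinsic parameters fx, fy, cx, cy are assumed calibration values.

        # Sketch: pixel detection of a signaling LED -> unit bearing vector.
        # The intrinsics (fx, fy, cx, cy) are hypothetical calibrated values.
        import numpy as np

        def pixel_to_bearing(u, v, fx, fy, cx, cy):
            """Back-project pixel (u, v) into a unit bearing vector in the
            camera frame using a pinhole model."""
            ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            return ray / np.linalg.norm(ray)

        # Example: an LED blob detected at pixel (410, 255).
        bearing = pixel_to_bearing(410, 255, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
        print(bearing)  # direction from this camera toward the visible node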

    Optical Camera Communications: Principles, Modulations, Potential and Challenges

    Optical wireless communications (OWC) are emerging as a cost-effective and practical alternative to congested radio-frequency wireless technologies. Within OWC, optical camera communications (OCC) have become very attractive, given recent developments in cameras and the prevalence of fitted cameras in smart devices. OCC, together with visible light communications (VLC), is being considered within the framework of the IEEE 802.15.7m standardization. OCC systems based on both organic and inorganic light sources, as well as cameras, are being considered for low-rate transmission and localization in indoor and outdoor short-range applications. This paper introduces the underlying principles of OCC and gives a comprehensive overview of this emerging technology, including recent standardization activities. It also outlines key technical issues such as mobility, coverage, interference, and performance enhancement. Future research directions and open issues are also presented.
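
    One OCC receiving technique widely cited in this literature decodes an on-off-keyed (OOK) LED with a rolling-shutter camera: each image row samples the light at a slightly different instant, so the LED appears as bright and dark bands. The sketch below is a hedged illustration of that idea only; the threshold rule and rows-per-bit value are assumptions chosen for the demo, not parameters from the paper.

        # Illustrative sketch: decoding OOK from the banded image produced
        # by a rolling-shutter camera. rows_per_bit is a demo assumption.
        import numpy as np

        def decode_ook_rolling_shutter(frame, rows_per_bit):
            """frame: 2D grayscale array (rows x cols) covering the LED region.
            Returns the recovered bit sequence."""
            row_means = frame.mean(axis=1)                 # light level per scan line
            levels = (row_means > row_means.mean()).astype(int)
            n_bits = len(levels) // rows_per_bit
            # Majority-vote the rows that span each bit period.
            return [int(levels[i * rows_per_bit:(i + 1) * rows_per_bit].mean() > 0.5)
                    for i in range(n_bits)]

        # Synthetic frame: 4 bits (1, 0, 1, 1), each spanning 4 image rows.
        frame = np.vstack([np.full((4, 8), 200 if b else 30) for b in (1, 0, 1, 1)])
        print(decode_ook_rolling_shutter(frame, rows_per_bit=4))  # -> [1, 0, 1, 1]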

    PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features

    Eyewear devices, such as augmented reality displays, increasingly integrate eye tracking, but the first-person camera required to map a user's gaze to the visual scene can pose a significant threat to user and bystander privacy. We present PrivacEye, a method to detect privacy-sensitive everyday situations and automatically enable and disable the eye tracker's first-person camera using a mechanical shutter. To close the shutter in privacy-sensitive situations, the method uses a deep representation of the first-person video combined with rich features that encode users' eye movements. To open the shutter without visual input, PrivacEye detects changes in users' eye movements alone to gauge changes in the "privacy level" of the current situation. We evaluate our method on a first-person video dataset recorded in daily life situations of 17 participants, annotated by themselves for privacy sensitivity, and show that our method is effective in preserving privacy in this challenging setting.
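
    As a speculative sketch of the decision step such a system could use (not PrivacEye's actual model), the snippet below trains a binary classifier on concatenated scene and eye-movement feature vectors; all dimensions and data are synthetic placeholders.

        # Speculative stand-in for the privacy classifier: CNN scene embeddings
        # concatenated with eye-movement features feed a binary decision.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        scene_feats = rng.normal(size=(200, 128))    # stand-in for CNN embeddings
        gaze_feats = rng.normal(size=(200, 52))      # stand-in for eye-movement stats
        X = np.hstack([scene_feats, gaze_feats])
        y = rng.integers(0, 2, size=200)             # 1 = privacy-sensitive

        clf = SVC(kernel="rbf").fit(X, y)
        close_shutter = bool(clf.predict(X[:1])[0])  # shutter closes if sensitive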

    Emerging research directions in computer science: contributions from the young informatics faculty in Karlsruhe

    In order to build better human-friendly human-computer interfaces, such interfaces need to be able to perceive the user: his location, identity, and activities, and in particular his interaction with others and with the machine. Only with these perception capabilities can smart systems (for example, human-friendly robots or smart environments) become possible. In my research I'm thus focusing on the development of novel techniques for the visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots, and smart environments. My work includes research on person tracking, person identification, recognition of pointing gestures, estimation of head orientation and focus of attention, as well as audio-visual scene and activity analysis. Application areas are human-friendly humanoid robots, smart environments, content-based image and video analysis, as well as safety- and security-related applications. This article gives a brief overview of my ongoing research activities in these areas.

    Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, the competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) measurements, and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable, light-weight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
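
    The scoring rule lends itself to a short worked sketch: the combined error adds a fixed penalty for each mis-detected floor to the horizontal error, and the score is the 75th percentile of that quantity. The 15 m per-floor penalty below is an assumption based on past IPIN editions, not a value quoted in this abstract.

        # Sketch of the competition-style accuracy score: third quartile of
        # horizontal error plus an assumed per-floor mis-detection penalty.
        import numpy as np

        def ipin_accuracy_score(horiz_err_m, est_floor, true_floor, floor_penalty=15.0):
            combined = np.asarray(horiz_err_m) + floor_penalty * np.abs(
                np.asarray(est_floor) - np.asarray(true_floor))
            return np.percentile(combined, 75)

        # Toy example with 3 ground-truth points (floor wrong at the last one).
        score = ipin_accuracy_score([1.2, 4.0, 2.5], [0, 1, 2], [0, 1, 1])
        print(f"accuracy score: {score:.2f} m")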

    Error Prevention in Sensors and Sensor Systems

    Achievements in all fields of engineering and fabrication methods have led toward the optimization and integration of multiple sensing devices into compact systems. These advances have enabled significant innovation in a variety of commercial, industrial, and research efforts. The integration of subsystems has particularly important implications for sensor systems. The need for reporting and real-time awareness of a device's condition and surroundings has led to sensor systems being implemented in a wide variety of fields: from environmental sensors for agriculture to object characterization and biomedical sensing, sensor systems have impacted all modern facets of innovation. With these innovations, however, come additional sources of error, posing new challenges for such integrated devices, ranging from error correction and accuracy to power optimization. Researchers have invested significant time and effort to improve the applicability and accuracy of sensors and sensor systems. Efforts to reduce the inherent and external noise of sensors range from hardware to software solutions, focusing on signal processing and on exploiting the integration of multiple signals and/or sensor types. My research throughout my career has focused on deployable, integrated sensor systems: their integration not only in hardware and components but also in software, machine learning, pattern recognition, and overall signal processing algorithms that aid error correction and noise tailoring across all hardware and software components.
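
    As a generic illustration of the software-side noise reduction the abstract alludes to (not a method from the dissertation itself), the sketch below fuses two noisy readings of the same quantity by inverse-variance weighting, which yields an estimate with lower variance than either sensor alone.

        # Generic illustration: fusing two unbiased, noisy measurements of
        # the same quantity by inverse-variance weighting.
        def fuse_inverse_variance(x1, var1, x2, var2):
            """Return the optimally weighted estimate and its variance."""
            w1, w2 = 1.0 / var1, 1.0 / var2
            fused = (w1 * x1 + w2 * x2) / (w1 + w2)
            fused_var = 1.0 / (w1 + w2)
            return fused, fused_var

        # Example: temperature read by two sensors with different noise levels.
        print(fuse_inverse_variance(21.3, 0.4, 20.8, 0.1))  # estimate, variance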

    Indoor 3D localization with low-cost LiFi components

    Indoor positioning, or localization, is an enabling technology expected to have a profound impact on mobile applications. Various modalities of radio frequency, ultrasound, and light can be used for localization; in this paper we consider how visible light positioning can be realized for 3D positioning as a service composed of optical sources that are part of an overarching lighting infrastructure. Our approach, called Ray-Surface Positioning, uses one or more overhead luminaires, modulated as LiFi, in conjunction with a steerable laser to produce position estimates in three dimensions. In this paper, we build and demonstrate Ray-Surface Positioning using low-cost commodity components in a test apparatus representing one quadrant of a 4 m × 4 m × 1 m volume. Data are collected at regular intervals in the test volume, representing 3D position estimates, and are validated using a motion capture system. For the low-cost components used, results show position estimate errors of less than 30 cm for 95% of the test volume. These results, generated with commodity components, show the potential for 3D positioning in the general case. When the plane of the receiver is known a priori, the position estimate error diminishes to the resolution of the steering mechanism.
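
    The name Ray-Surface Positioning suggests the core geometric computation: intersecting a known, steered ray with a surface to obtain a 3D point. The sketch below covers only the special case the abstract mentions, where the receiver plane is known a priori; it is a hedged illustration, not the paper's estimator.

        # Sketch: intersect a steered ray with a known receiver plane.
        import numpy as np

        def ray_plane_intersection(origin, direction, plane_point, plane_normal):
            """Return the 3D point where the ray (origin + t * direction, t >= 0)
            meets the plane through plane_point with normal plane_normal."""
            d = np.asarray(direction, dtype=float)
            n = np.asarray(plane_normal, dtype=float)
            denom = n @ d
            if abs(denom) < 1e-9:
                return None  # ray is parallel to the plane
            t = (n @ (np.asarray(plane_point) - np.asarray(origin))) / denom
            return None if t < 0 else np.asarray(origin) + t * d

        # Example: laser steered from the ceiling at (0, 0, 3) toward the floor.
        p = ray_plane_intersection([0, 0, 3], [0.2, 0.1, -1.0], [0, 0, 0], [0, 0, 1])
        print(p)  # estimated 3D position on the z = 0 receiver plane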

    Detecting and Tracking Vulnerable Road Users' Trajectories Using Different Types of Sensor Fusion

    Vulnerable road user (VRU) detection and tracking has been a key challenge in transportation research. Different types of sensors, such as cameras, LiDAR, and inertial measurement units (IMUs), have been used for this purpose. For detection and tracking with a camera, it is necessary to perform calibration to obtain correct GPS trajectories. This process is often tedious and requires accurate ground-truth data. Moreover, if the camera performs any pan-tilt-zoom function, it is usually necessary to recalibrate it. In this thesis, we propose camera calibration using an auxiliary sensor: ultra-wideband (UWB). UWB sensors are capable of tracking a road user with ten-centimeter-level accuracy. Once a VRU carrying a UWB tag traverses the camera view, the UWB GPS data is fused with the camera data to perform real-time calibration. As the experimental results in this thesis show, the camera outputs better trajectories after calibration. The use of UWB is expected to be needed only once to fuse the data and determine the correct trajectories for a given intersection and camera location; all other trajectories collected by the camera can then be corrected using the same adjustment. In addition, data analysis was conducted to evaluate the performance of the UWB sensors. This study also predicted pedestrian trajectories using data fused from the UWB and smartphone sensors. UWB GPS coordinates are very accurate, although the sensor lacks other measurements such as accelerometer and gyroscope readings. The smartphone data were used in this scenario to augment the UWB data: the two datasets were merged on the basis of the closest timestamp, as sketched below. The resulting dataset has precise latitude and longitude from the UWB as well as accelerometer, gyroscope, and speed data from the smartphones, making the fused dataset accurate and rich in parameters. The fused dataset was then used to predict the GPS coordinates of pedestrians and scooters using a long short-term memory (LSTM) network.
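
    The closest-timestamp merge described above maps naturally onto pandas.merge_asof; the sketch below shows the idea with illustrative column names and toy values (the thesis's actual schema may differ).

        # Sketch: pair each UWB fix with the nearest smartphone sample in time.
        import pandas as pd

        uwb = pd.DataFrame({
            "timestamp": pd.to_datetime(["2021-05-01 12:00:00.10",
                                         "2021-05-01 12:00:00.60"]),
            "lat": [28.6024, 28.6025], "lon": [-81.2001, -81.2003],
        })
        phone = pd.DataFrame({
            "timestamp": pd.to_datetime(["2021-05-01 12:00:00.05",
                                         "2021-05-01 12:00:00.55"]),
            "accel_x": [0.02, 0.05], "gyro_z": [0.001, 0.004], "speed": [1.1, 1.2],
        })

        # Both frames must be sorted by the key column for merge_asof.
        fused = pd.merge_asof(uwb.sort_values("timestamp"),
                              phone.sort_values("timestamp"),
                              on="timestamp", direction="nearest")
        print(fused)  # precise UWB coordinates + smartphone motion parameters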