Personalization in cultural heritage: the road travelled and the one ahead
Over the last 20 years, cultural heritage has been a favored domain for personalization research. For years, researchers have experimented with the cutting-edge technology of the day; now, with the convergence of the internet and wireless technology, and the increasing adoption of the Web as a platform for the publication of information, the visitor is able to exploit cultural heritage material before, during and after the visit, with different goals and requirements in each phase. However, cultural heritage sites have a huge amount of information to present, which must be filtered and personalized in order to enable the individual user to access it easily. Personalization of cultural heritage information requires a system that is able to model the user
(e.g., interest, knowledge and other personal characteristics), as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. It should be noted that achieving this result is extremely challenging in the case of first-time users, such as tourists who visit a cultural heritage site for the first (and perhaps only) time in their life. In addition, as tourism is a social activity, adapting to the individual is not enough: groups and communities have to be modeled and supported as well, taking into account their mutual interests, previous shared experience, and requirements. How to model and represent the user(s) and the context of the visit, and how to reason over the available information, are the challenges faced by researchers in the personalization of cultural heritage. Notwithstanding the effort invested so far, a definitive solution is still far from being reached, mainly because new technology and new aspects of personalization are constantly being introduced. This article surveys the research in this area. Starting from the earlier systems, which presented cultural heritage information in kiosks, it summarizes the evolution of personalization techniques in museum web sites, virtual collections and mobile guides, up to the recent extension of cultural heritage toward the semantic and social web. The paper concludes with current challenges and points out areas where future research is needed.
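The content-selection step described above (matching modeled user interests against available material) can be sketched with a simple content-based filter. This is an illustrative example, not a method from the survey; the tag vocabulary, item names and Jaccard scoring are all assumptions:

```python
# Hypothetical sketch: rank cultural heritage items by overlap between each
# item's descriptive tags and the user's interest model (Jaccard similarity).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_items(user_interests, items):
    """items: dict mapping item name -> tag list; returns names by relevance."""
    return sorted(items, key=lambda n: jaccard(user_interests, items[n]),
                  reverse=True)

user = ["renaissance", "sculpture", "florence"]
collection = {
    "David (Michelangelo)": ["renaissance", "sculpture", "florence"],
    "Water Lilies (Monet)": ["impressionism", "painting"],
    "Pieta (Michelangelo)": ["renaissance", "sculpture", "rome"],
}
print(rank_items(user, collection))  # most relevant item first
```

A real system would extend the score with knowledge level, visit context and group preferences, as the survey discusses.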
Multi-modal probabilistic indoor localization on a smartphone
The satellite-based Global Positioning System (GPS) provides robust localization on smartphones outdoors. In indoor environments, however, no system comes close to achieving a similar level of ubiquity, with existing solutions offering different trade-offs in terms of accuracy, robustness and cost. In this paper, we develop a multi-modal positioning system, targeted at smartphones, which aims to get the best out of each of its constituent modalities. More precisely, we combine Bluetooth low energy (BLE) beacons, round-trip-time (RTT) enabled WiFi access points and the smartphone's inertial measurement unit (IMU) to provide a cheap, robust localization system that, unlike fingerprinting methods, requires no pre-training. To do this, we use a probabilistic algorithm based on a conditional random field (CRF). We also show how to incorporate sparse visual information, using pose estimates from pre-scanned visual landmarks to calibrate the system online and improve its accuracy. Our method achieves an accuracy of around 2 meters on two realistic datasets, outperforming other distance-based localization approaches. We also compare our approach with an ultra-wideband (UWB) system: while we do not match the performance of UWB, our system is cheap, smartphone compatible and provides satisfactory performance for many applications.
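The distance-based ranging that such BLE systems build on is commonly the log-distance path-loss model, which converts an RSSI reading into a rough range. The sketch below shows that standard model only, not the paper's CRF fusion; the calibration values are assumed:

```python
import math

# Standard log-distance path-loss model: estimate range from a BLE RSSI
# reading. tx_power_dbm is the calibrated RSSI at 1 m (an assumed value);
# path_loss_exponent ~2 models free space, higher indoors.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

print(rssi_to_distance(-59.0))              # at the 1 m reference -> 1.0 m
print(rssi_to_distance(-79.0))              # 20 dB weaker -> 10.0 m
```

These per-beacon range estimates are noisy, which is precisely why the paper fuses them probabilistically with WiFi RTT and IMU data rather than trusting any single modality.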
IONet: Learning to Cure the Curse of Drift in Inertial Odometry
Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications. However, low-cost inertial sensors, as commonly found in smartphones, are plagued by bias and noise, which lead to unbounded growth in error when accelerations are double integrated to obtain displacement. Small errors in state estimation propagate to make odometry virtually unusable in a matter of seconds. We propose to break the cycle of continuous integration and instead segment inertial data into independent windows. The challenge becomes estimating the latent states of each window, such as velocity and orientation, as these are not directly observable from sensor data. We demonstrate how to formulate this as an optimization problem, and show how deep recurrent neural networks can yield highly accurate trajectories, outperforming state-of-the-art shallow techniques, on a wide range of tests and attachments. In particular, we demonstrate that IONet can generalize to estimate odometry for non-periodic motion, such as a shopping trolley or baby stroller, an extremely challenging task for existing techniques.
Comment: To appear in AAAI18 (Oral).
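The "curse of drift" the abstract describes is easy to reproduce numerically: a tiny constant accelerometer bias, double-integrated, yields displacement error that grows quadratically with time. The toy below (not IONet itself; the bias magnitude is an assumed typical value for cheap MEMS sensors) shows why continuous integration fails within seconds:

```python
# Toy drift demonstration: double-integrate a constant accelerometer bias.
def double_integrate(accels, dt):
    velocity, position = 0.0, 0.0
    for a in accels:
        velocity += a * dt         # first integration: accel -> velocity
        position += velocity * dt  # second integration: velocity -> position
    return position

dt, seconds, bias = 0.01, 10.0, 0.1   # 0.1 m/s^2: plausible low-cost MEMS bias
steps = int(seconds / dt)
drift = double_integrate([bias] * steps, dt)
print(drift)  # ~0.5 * bias * t^2, i.e. about 5 m of error after only 10 s
```

IONet's windowed formulation sidesteps this: each window's velocity and heading change are regressed independently, so errors no longer compound through continuous integration.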
A Novel Approach to Intelligent Navigation of a Mobile Robot in a Dynamic and Cluttered Indoor Environment
The need and rationale for improved solutions to indoor robot navigation are increasingly driven by the influx of domestic and industrial mobile robots into the market. This research has developed and implemented a novel navigation technique for a mobile robot operating in a cluttered and dynamic indoor environment. It divides the indoor navigation problem into three distinct but interrelated parts, namely localization, mapping and path planning. The localization part has been addressed using dead reckoning (odometry). A least-squares numerical approach has been used to calibrate the odometry parameters to minimize the effect of systematic errors on performance, and an intermittent resetting technique, which employs RFID tags placed at known locations in the indoor environment in conjunction with door markers, has been developed and implemented to mitigate the errors remaining after calibration. A mapping technique that employs a laser measurement sensor as the main exteroceptive sensor has been developed and implemented for building a binary occupancy grid map of the environment. A-r-Star pathfinder, a new path-planning algorithm capable of high performance in both cluttered and sparse environments, has been developed and implemented; its properties, challenges, and solutions to those challenges are also highlighted in this research. An incremental version of the A-r-Star has been developed to handle dynamic environments. Simulation experiments highlighting the properties and performance of the individual components have been developed and executed using MATLAB. A prototype world has been built using the Webots robotic prototyping and 3-D simulation software, and an integrated version of the system comprising the localization, mapping and path-planning techniques has been executed in this prototype workspace to produce validation results.
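The dead-reckoning step at the heart of the localization part can be sketched as the standard differential-drive pose update from wheel displacements. This is a generic textbook formulation, not the thesis's exact implementation; the parameter names are assumptions:

```python
import math

# Differential-drive dead reckoning: update pose (x, y, theta) from left/right
# wheel displacements measured by encoders over one time step.
def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    d_center = (d_left + d_right) / 2.0            # distance moved by midpoint
    d_theta = (d_right - d_left) / wheel_base      # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, (theta + d_theta) % (2.0 * math.pi)

# Straight 1 m run: both wheels travel equally, heading is unchanged.
pose = odometry_update(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.3)
print(pose)  # -> (1.0, 0.0, 0.0)
```

Systematic errors here (unequal wheel diameters, a mis-measured wheel base) are exactly what the least-squares calibration targets, while the RFID resetting bounds whatever error remains.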
Featureless visual processing for SLAM in changing outdoor environments
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors, such as rough terrain, high speeds and hardware limitations, can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
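The feature-less matching such systems rely on can be illustrated with a minimal sketch: compare heavily downsampled, intensity-normalized views using a sum of absolute differences (SAD) and pick the lowest score. This is a simplified illustration in the spirit of RatSLAM's front end, not its actual code; the tiny 2x3 "images" are made up:

```python
# Feature-less scene matching: normalize intensities, then compare pixel-wise.
def normalize(img):
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1
    return [[(p - lo) / span for p in row] for row in img]

def sad(a, b):
    """Sum of absolute differences between two equally sized images."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def best_match(query, templates):
    q = normalize(query)
    return min(templates, key=lambda name: sad(q, normalize(templates[name])))

scene_a = [[0, 0, 9], [0, 9, 9]]   # stored low-resolution view A
scene_b = [[9, 9, 0], [9, 0, 0]]   # stored low-resolution view B
query   = [[1, 0, 8], [0, 9, 8]]   # new view: a dimmer, noisier version of A
print(best_match(query, {"A": scene_a, "B": scene_b}))  # -> A
```

The intensity normalization is what buys robustness to global lighting change (dawn versus dusk); the low resolution suppresses blur and exposure artifacts that would defeat feature detectors.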
Layered Path Planning with Human Motion Detection for Autonomous Robots
Reactively planning a path in a dynamic and unstructured environment is a key challenge for mobile robots and autonomous systems. Planning should consider factors including long-term and short-term prediction, the current environmental situation, and human context. In this chapter, we present a novel robotic path-planning method that uses human activity information in a large-scale three-dimensional (3D) environment. In the learning stage, the method uses human motion detection results and preliminary environmental information to build a long-term context model, based on a hidden Markov model (HMM), that describes and predicts human activities in the environment. In the application stage, when a robot detects humans in the environment, it first uses the long-term context model to generate impedance areas in the environment. Then, the robot searches each area of the environment to find paths between key locations, such as escalators, to generate a Reactive Key Cost Map (RKCM), whose vertices are those key locations and whose edges are the generated paths. The graphs of all areas are connected through the key nodes in the subgraphs to build a global graph of the whole environment. Finally, the robot can reactively plan a path based on the current environmental situation and the predicted human activities. This method enables robots to navigate robustly in a large-scale 3D environment with regular human activities, and the proposed RKCM significantly reduces the computing workload.
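Once the RKCM is built, planning over it reduces to shortest-path search on a small graph of key locations whose edge costs include a human-impedance penalty. The sketch below uses Dijkstra's algorithm; the graph, location names and penalty values are invented for illustration, not taken from the chapter:

```python
import heapq

# Dijkstra over a key-location graph; edge weight = path length plus any
# impedance penalty for predicted human activity along that edge.
def shortest_path(graph, start, goal):
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

rkcm = {
    "entrance": {"escalator": 3.0, "atrium": 1.0},
    "atrium": {"escalator": 1.0},   # geometrically short route...
    "escalator": {"exit": 2.0},
}
rkcm["atrium"]["escalator"] += 4.0  # ...penalized: predicted crowd in atrium
print(shortest_path(rkcm, "entrance", "exit"))  # direct route wins
```

Because the graph has only key locations as vertices, re-planning when the HMM updates its activity prediction is cheap, which is the workload reduction the chapter claims.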
3D Model-free Visual Localization System from Essential Matrix under Local Planar Motion
Visual localization plays a critical role in the functionality of low-cost autonomous mobile robots. Current state-of-the-art approaches for achieving accurate visual localization are 3D scene-specific, requiring additional computational and storage resources to construct a 3D scene model when facing a new environment. An alternative approach of directly using a database of 2D images for visual localization offers more flexibility; however, such methods currently suffer from limited localization accuracy. In this paper, we propose an accurate and robust multiple-checking-based 3D model-free visual localization system to address these issues. To ensure high accuracy, we focus on estimating the pose of a query image relative to the retrieved database images using 2D-2D feature matches. Theoretically, by incorporating the local planar motion constraint into both the essential matrix estimation and the triangulation stages, we reduce the minimum number of feature matches required for absolute pose estimation, thereby enhancing the robustness of outlier rejection. Additionally, we introduce a multiple-checking mechanism to ensure the correctness of the solution throughout the solving process. For validation, qualitative and quantitative experiments are performed in simulation and on two real-world datasets, and the results demonstrate a significant enhancement in both accuracy and robustness afforded by the proposed 3D model-free visual localization system.
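The geometric core of the planar-motion constraint can be sketched directly: if the camera rotates only about the plane normal and translates within the plane, the essential matrix E = [t]x R has just two degrees of freedom (a rotation angle and a translation direction), which is why fewer feature matches suffice. The construction below is our own illustration under the assumption that the y axis is the plane normal, not the paper's solver:

```python
import numpy as np

# Build an essential matrix for planar motion: yaw-only rotation (angle theta)
# and a unit translation in the x-z plane (direction phi); E = [t]_x R.
def planar_essential(theta, phi):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    t = np.array([np.sin(phi), 0.0, np.cos(phi)])
    t_x = np.array([[0.0, -t[2], t[1]],
                    [t[2], 0.0, -t[0]],
                    [-t[1], t[0], 0.0]])   # skew-symmetric cross-product matrix
    return t_x @ R, R, t

E, R, t = planar_essential(theta=0.3, phi=0.8)

# Verify the epipolar constraint x2^T E x1 = 0 on a synthetic 3D point,
# using the convention X2 = R @ X1 + t for the second camera.
X1 = np.array([0.5, 0.2, 4.0])     # point in camera-1 coordinates
X2 = R @ X1 + t                    # same point in camera-2 coordinates
x1, x2 = X1 / X1[2], X2 / X2[2]    # normalized image coordinates
print(abs(x2 @ E @ x1) < 1e-9)     # True: the constraint holds
```

With only theta and phi unknown, a minimal solver needs fewer 2D-2D correspondences than the general five-point case, so RANSAC can reject outliers more aggressively.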
Landscape Study in Wireless and Mobile Learning in the post-16 sector
In the post-16 sector (further and higher education, and adult and community learning) there is a need to understand how wireless and mobile technologies can contribute to improving the student experience of learning, and help institutions fulfil their missions in an age of incomparably fast technological change. In the context of this interest and growing need, a Landscape Study project was commissioned by JISC through the Innovation strand of the JISC e-Learning Programme in 2004-5. Our project aims were to take a bird's-eye view of developments and practice in the UK and internationally, and to communicate our findings to a broad and varied audience. The Summary report is accompanied by three associated reports on 'Current Uses', 'Potential Uses' and 'Strategic Aspects'. (The four reports are available here in one single document.)
Wavelet-based filtration procedure for denoising the predicted CO2 waveforms in smart home within the Internet of Things
The operating cost of smart homes can be minimized by optimizing the management of the building's technical functions based on the current occupancy status of the individual monitored spaces. To respect the privacy of smart home residents, indirect methods (without cameras or microphones) can be used for occupancy recognition of spaces in smart homes. This article describes a newly proposed indirect method to increase the accuracy of occupancy recognition in the monitored spaces of smart homes. The proposed procedure predicts the course of CO2 concentration from operationally measured quantities (indoor temperature and indoor relative humidity) using artificial neural networks with a multilayer perceptron algorithm. The wavelet transform is then used to cancel additive noise from the predicted CO2 concentration signal, with the objective of increasing the accuracy of the prediction. The calculated accuracy of the CO2 concentration waveform prediction with additive noise canceling applied was higher than 98% in selected experiments.
Web of Science, 20(3), art. no. 62
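The wavelet denoising step described in the abstract above can be illustrated with a one-level Haar decomposition plus soft-thresholding of the detail coefficients. This is a minimal sketch of the idea, not the paper's exact filter; the synthetic CO2-like signal, noise level and threshold are all assumed:

```python
import numpy as np

# One-level Haar wavelet denoising: decompose, soft-threshold the high-pass
# (detail) coefficients, then reconstruct. Requires an even-length signal.
def haar_denoise(signal, threshold):
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-pass coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-pass coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2.0)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
clean = 600.0 + 200.0 * np.sin(np.linspace(0.0, 3.0, 256))  # ppm-like trend
noisy = clean + rng.normal(0.0, 10.0, clean.size)           # additive noise
denoised = haar_denoise(noisy, threshold=15.0)
print(np.std(denoised - clean) < np.std(noisy - clean))     # noise reduced
```

A slowly varying CO2 trend lives almost entirely in the approximation coefficients, so thresholding the detail coefficients removes noise while barely distorting the waveform; multi-level transforms and other wavelet families refine the same idea.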