
    GNSS trajectory anomaly detection using similarity comparison methods for pedestrian navigation

    The urban setting is a challenging environment for GNSS receivers. Multipath and other anomalies typically increase the positioning error of the receiver, and the receiver's own error estimate of the position is often unreliable. In this study, we detect GNSS trajectory anomalies by using similarity comparison methods between a pedestrian dead reckoning trajectory, recorded using a foot-mounted inertial measurement unit, and the corresponding GNSS trajectory. During a normal walk, the foot-mounted inertial dead reckoning setup is trustworthy up to a few tens of meters, so a GNSS trajectory that deviates from it can be detected using form similarity comparison methods. Of the eight tested methods, the Hausdorff distance (HD) and the accumulated distance difference (ADD) give slightly more consistent detection results than the rest.
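
    As a minimal sketch of this kind of trajectory comparison (not the paper's exact pipeline), the Hausdorff distance between a pedestrian dead reckoning track and a GNSS track can be computed as below; the synthetic trajectories and the 5 m alarm threshold are illustrative assumptions.

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets of shape (N, 2) and (M, 2)."""
    # All pairwise Euclidean distances between points of a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: the farthest nearest-neighbour in each direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Illustrative tracks: a straight PDR walk vs. a GNSS track with a constant
# 8 m cross-track offset standing in for a multipath-induced anomaly.
pdr = np.column_stack([np.linspace(0.0, 50.0, 100), np.zeros(100)])
gnss = pdr + np.array([0.0, 8.0])

if hausdorff_distance(pdr, gnss) > 5.0:   # 5 m alarm threshold is an assumption
    print("GNSS trajectory flagged as anomalous")
```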

    Making tourist guidance systems more intelligent, adaptive and personalised using crowd sourced movement data

    Ambient intelligence (AmI) provides adaptive, personalised, intelligent, ubiquitous and interactive services to a wide range of users. AmI has a variety of applications, including smart shops, health care, smart homes, assisted living, and location-based services. Tourist guidance is one application where AmI can contribute greatly to the quality of service, as tourists, who may not be very familiar with the site they are visiting, need a location-aware, ubiquitous, personalised and informative service. Such services should be able to understand the preferences of users without requiring users to specify them, predict their interests, and deliver relevant and tailored content in the most appropriate way, whether audio, visual, or haptic. This paper shows the use of crowd-sourced trajectory data in detecting points of interest and providing ambient tourist guidance based on the patterns recognised in such data.
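
    One common way to derive points of interest from crowd-sourced trajectories is stay-point detection: find spans where a user lingers within a small radius for a minimum time. The sketch below is a generic illustration of that idea, not necessarily the paper's method; the 50 m and 300 s thresholds are assumed values.

```python
import numpy as np

def stay_points(traj, times, dist_thresh=50.0, time_thresh=300.0):
    """Centroids of segments where the user stays within dist_thresh metres
    of an anchor point for at least time_thresh seconds.
    traj: (N, 2) positions in metres; times: (N,) timestamps in seconds."""
    points, i, n = [], 0, len(traj)
    while i < n:
        j = i + 1
        # Grow the segment while positions stay close to the anchor point.
        while j < n and np.linalg.norm(traj[j] - traj[i]) <= dist_thresh:
            j += 1
        if times[j - 1] - times[i] >= time_thresh:
            points.append(traj[i:j].mean(axis=0))   # candidate point of interest
            i = j
        else:
            i += 1
    return np.array(points)

# Example: lingering near the origin, then near (500, 0).
t = np.arange(0.0, 1200.0, 10.0)
xy = np.zeros((len(t), 2))
xy[60:] = [500.0, 0.0]
print(stay_points(xy, t))   # stay points near (0, 0) and (500, 0)
```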

    Using crowdsourced trajectories for automated OSM data entry approach

    The concept of crowdsourcing is nowadays used extensively to refer to the collection of data and the generation of information by large groups of users/contributors. OpenStreetMap (OSM) is a very successful example of a crowd-sourced geospatial data project. Unfortunately, OSM contributor inputs (including geometry and attribute inserts, deletions and updates) have often been found to be inaccurate, incomplete, inconsistent or vague. This is due to several reasons, including: (1) many contributors have little experience or training in mapping and Geographic Information Systems (GIS); (2) not enough contributors are familiar with the areas being mapped; (3) contributors interpret the attributes (tags) of specific features differently; (4) different levels of enthusiasm between mappers result in different numbers of tags for similar features; and (5) the limited user-friendliness of the online interface where the underlying map is viewed and edited. This paper suggests an automatic mechanism that uses raw spatial data (movement trajectories contributed to OSM) to minimise the uncertainty and impact of the above-mentioned issues. The approach takes raw trajectory datasets as input and analyses them using data mining techniques, extracting patterns and rules about the geometry and attributes of the recognised features for the purpose of inserting or editing features in the OSM database. The underlying idea is that certain characteristics of user trajectories are directly linked to the geometry and attributes of geographic features. Applying these rules successfully results in the generation of new features with higher spatial quality, which are subsequently inserted automatically into the OSM database.
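
    As a toy illustration of the underlying idea that trajectory characteristics map to feature attributes (not the paper's actual mining algorithm), the sketch below guesses an OSM highway tag for a way from the speeds of trajectories snapped to it; the speed thresholds and tag choices are assumptions.

```python
import numpy as np

def infer_highway_tag(speeds_mps: np.ndarray) -> str:
    """Guess a plausible OSM highway tag for a way from the speeds (m/s)
    observed on trajectories snapped to it. Thresholds are illustrative."""
    v = float(np.median(speeds_mps))
    if v < 2.0:
        return "footway"        # walking speeds
    if v < 8.0:
        return "residential"    # slow urban driving or cycling
    if v < 20.0:
        return "secondary"
    return "motorway"

print(infer_highway_tag(np.array([1.1, 1.4, 1.3, 1.2])))   # -> footway
```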

    Smartphone-based vehicle telematics: a ten-year anniversary

    Just as it has irrevocably reshaped social life, the fast growth of smartphone ownership is now beginning to revolutionize the driving experience and change how we think about automotive insurance, vehicle safety systems, and traffic research. This paper summarizes the first ten years of research in smartphone-based vehicle telematics, with a focus on user-friendly implementations and the challenges that arise due to the mobility of the smartphone. Notable academic and industrial projects are reviewed, and system aspects related to sensors, energy consumption, and human-machine interfaces are examined. Moreover, we highlight the differences between traditional and smartphone-based automotive navigation, and survey the state of the art in smartphone-based transportation mode classification, vehicular ad hoc networks, cloud computing, driver classification, and road condition monitoring. Future advances are expected to be driven by improvements in sensor technology, evidence of the societal benefits of current implementations, and the establishment of industry standards for sensor fusion and driver assessment.
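
    Transportation mode classifiers of the kind surveyed here typically operate on short windows of inertial data. The sketch below is a generic, assumed pipeline (window statistics plus an off-the-shelf classifier, trained on synthetic data), not a method from the paper; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_mag: np.ndarray) -> np.ndarray:
    """Summary statistics of one window of accelerometer magnitudes (m/s^2)."""
    return np.array([acc_mag.mean(), acc_mag.std(), acc_mag.max() - acc_mag.min()])

rng = np.random.default_rng(0)
X, y = [], []
# Synthetic windows: stillness has low variance, vehicle vibration moderate,
# walking high. A real pipeline would use labelled sensor recordings.
for label, sigma in [("still", 0.05), ("vehicle", 0.3), ("walk", 1.0)]:
    for _ in range(100):
        X.append(window_features(9.81 + sigma * rng.standard_normal(128)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([window_features(9.81 + rng.standard_normal(128))]))  # likely 'walk'
```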

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments have been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
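
    The final fusion step described above, occupancy grid mapping, can be illustrated with a minimal log-odds grid that accumulates point detections; the cell size, hit probability, and example lidar returns below are assumed values, not parameters from the thesis.

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid for fusing point detections."""
    def __init__(self, size_m=100.0, cell_m=0.5):
        n = int(size_m / cell_m)
        self.log_odds = np.zeros((n, n))
        self.cell_m, self.origin = cell_m, size_m / 2.0

    def update(self, points_xy, hit=0.85):
        """Raise occupancy log-odds for cells containing detected points."""
        l_hit = np.log(hit / (1.0 - hit))
        idx = ((points_xy + self.origin) / self.cell_m).astype(int)
        for i, j in idx:
            if 0 <= i < self.log_odds.shape[0] and 0 <= j < self.log_odds.shape[1]:
                self.log_odds[i, j] += l_hit

    def occupied(self, thresh=0.7):
        """Boolean mask of cells whose occupancy probability exceeds thresh."""
        p = 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
        return p > thresh

grid = OccupancyGrid()
grid.update(np.array([[3.2, -1.1], [3.3, -1.0]]))   # assumed lidar obstacle returns
print(grid.occupied().sum(), "cells flagged occupied")
```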

    Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

    Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. Development of alternative positioning approaches and techniques for navigation systems is therefore an ongoing research topic. In this dissertation, I present a new approach and address three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes the inertial and visual sensors available on smartphones and is focused on developing: a framework for monocular visual-inertial odometry (VIO) that positions a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm that detects obstacles from a sequence of visual data; and a supervised context-aware keyframe detector. The underlying technique for monocular VIO is a recurrent convolutional neural network that computes six-degree-of-freedom (6DoF) pose in an end-to-end fashion, together with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with those of conventional feature-based VIO techniques and manually scaled results. The comparison shows that while the framework is more effective than other featureless methods and improves accuracy, feature-based methods still outperform the proposed approach. The obstacle detection approach processes two consecutive images to detect obstacles. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better, with 82% precision compared to 69%. To determine an appropriate frame-extraction rate from the video stream, I analyzed the movement patterns of the camera and inferred the context of the user to build a model associating movement anomalies with a suitable frame-extraction rate. The output of this model was used to set the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and applied this approach to context-aware keyframe detection. The results show that using inertial data to infer the suitable frames decreases the number of frames required.
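
    One simple way to realize motion-aware keyframe selection (a generic sketch under assumed thresholds, not the dissertation's learned model) is to trigger a new keyframe whenever the rotation integrated from gyroscope data since the last keyframe exceeds a threshold, so that faster motion yields denser keyframes.

```python
import numpy as np

def select_keyframes(gyro_norms, dt=0.02, angle_thresh=0.15):
    """Pick keyframe indices whenever the rotation integrated since the last
    keyframe exceeds angle_thresh radians. gyro_norms: |omega| per frame (rad/s)."""
    keyframes, accum = [0], 0.0
    for i, w in enumerate(gyro_norms[1:], start=1):
        accum += w * dt               # integrate rotation rate over one frame
        if accum >= angle_thresh:
            keyframes.append(i)       # fast motion -> denser keyframes
            accum = 0.0
    return keyframes

rng = np.random.default_rng(1)
rates = np.abs(rng.normal(0.5, 0.3, 500))   # synthetic gyro magnitudes (rad/s)
print(len(select_keyframes(rates)), "keyframes selected from 500 frames")
```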

    Heading drift mitigation for low-cost inertial pedestrian navigation

    The concept of autonomous pedestrian navigation is often adopted for indoor pedestrian navigation. Outdoors, the Global Positioning System (GPS) is often used for navigation, utilizing GPS signals for position computation, but indoors its signals are often unavailable. Autonomous indoor pedestrian navigation can therefore be realized with independent sensors, such as low-cost inertial sensors; these are often packaged as an Inertial Measurement Unit (IMU) and do not rely on the reception of external information such as GPS signals. Using these sensors, navigation follows a relative-positioning concept from an initialized position and attitude: the sensors sense changes in velocity and, after integration, these are added to the previous position to obtain the current position. Such low-cost systems, however, are prone to errors that can result in a large position drift. This problem can be minimized by mounting the sensors on the pedestrian's foot. During walking, the foot is briefly stationary while it is on the ground, in what is sometimes called the zero-velocity period. If a non-zero velocity is measured by the inertial sensors during this period, it is considered an error and can be corrected. These repeated corrections to the inertial sensor's velocity measurements can therefore be used to control the error growth and minimize the position drift. Nonetheless, this is still inadequate, mainly because of the errors remaining in the inertial sensor's heading when velocity corrections are used alone. Apart from the initialization issue, therefore, the heading drift problem remains in such low-cost systems. In this research, two novel methods are developed and investigated to mitigate the heading drift problem when used with the velocity updates. The first method is termed Cardinal Heading Aided Inertial Navigation (CHAIN), where an algorithm is developed to use building 'heading' to aid the heading measurement in the Kalman Filter. The second method is termed the Rotated IMU (RIMU), where the foot-mounted inertial sensor is rotated about a single axis to increase the observability of the sensor's heading. The CHAIN method has been investigated in real field trials using the low-cost Microstrain 3DM-GX3-25 inertial sensor and shows a clear improvement in mitigating the heading drift error. It offers a significant improvement in navigation accuracy over long periods, allowing autonomous pedestrian navigation for as long as 40 minutes with a position error between start and end of below 5 meters. It requires no extra heading sensors such as a magnetometer, no visual sensors such as a camera, and no extensive position or map database, and thus offers a cost-effective solution. Furthermore, its simplicity makes it feasible to implement in real time, as very little computing capability is needed. The RIMU method was tested with Nottingham Geospatial Institute (NGI) inertial data simulation software, and field trials were also undertaken using the same low-cost inertial sensor mounted on a rotated-platform prototype. This method improves the observability of the inertial sensor's errors, also resulting in a decrease in the heading drift error, at the expense of requiring extra components.
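
    The zero-velocity correction that this work builds on can be sketched with a simple stance-phase detector that resets the integrated velocity whenever the foot appears stationary (a full implementation would use a Kalman filter to also correct position and attitude). The 1-D example below is a minimal illustration; the detection threshold and sample rate are assumed.

```python
import numpy as np

def zupt_dead_reckoning(acc, dt=0.01, zv_thresh=0.3):
    """1-D dead reckoning with zero-velocity updates (ZUPT).
    acc: per-sample forward acceleration (m/s^2) with gravity removed.
    A stance phase is assumed whenever |acc| drops below zv_thresh."""
    vel, pos, positions = 0.0, 0.0, []
    for a in acc:
        vel += a * dt              # integrate acceleration -> velocity
        if abs(a) < zv_thresh:     # foot on the ground: true velocity is ~0,
            vel = 0.0              # so any residual velocity is treated as error
        pos += vel * dt            # integrate velocity -> position
        positions.append(pos)
    return np.array(positions)

# Five synthetic strides: accelerate, decelerate, then a stationary stance phase.
stride = np.r_[np.full(30, 2.0), np.full(30, -2.0), np.zeros(40)]
print(zupt_dead_reckoning(np.tile(stride, 5))[-1])   # net forward distance (m)
```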

    Robust state estimation methods for robotics applications

    State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to perform other tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by using data obtained from multiple sensors and fusing them in a probabilistic framework. These include inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy or erroneous data, or even a lack of informative data. Common examples include wrong feature matches between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), dynamic objects in the environment (odometry algorithms assume a static environment), and multipath errors for GNSS (signals from satellites bouncing off tall structures such as buildings before reaching the receiver). This work studies existing and new ways in which standard estimation algorithms like the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal, outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters in wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
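
    The robust-cost idea underlying this work can be illustrated with the Huber loss inside iteratively reweighted least squares (IRLS): residuals beyond a threshold are down-weighted so outliers stop dominating the fit. The line-fitting sketch below is a generic example, not the thesis's formulation, and the delta threshold and synthetic outliers are assumed.

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """IRLS weights for the Huber loss: quadratic near zero, linear in the tails."""
    r = np.abs(residuals)
    return np.where(r <= delta, 1.0, delta / r)

def robust_line_fit(x, y, iters=10):
    """Fit y = a*x + b, down-weighting outliers with Huber IRLS."""
    A = np.column_stack([x, np.ones_like(x)])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS initialization
    for _ in range(iters):
        w = huber_weights(y - A @ theta)
        Aw = A * w[:, None]                        # weighted normal equations
        theta = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return theta

x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0
y[::10] += 30.0                                    # inject gross outliers
print(robust_line_fit(x, y))                       # close to [2.0, 1.0]
```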

    Automatic human behaviour anomaly detection in surveillance video

    This thesis work focuses upon developing the capability to automatically evaluate and detect anomalies in human behaviour from surveillance video. We work with static monocular cameras in crowded urban surveillance scenarios, particularly airports and commercial shopping areas. Typically a person is 100 to 200 pixels high in a scene ranging from 10 to 20 meters in width and depth, populated by 5 to 40 people at any given time. Our procedure evaluates human behaviour unobtrusively to determine outlying behavioural events, flagging abnormal events to the operator. In order to achieve automatic human behaviour anomaly detection we address the challenge of interpreting behaviour within the context of the social and physical environment. We develop and evaluate a process for measuring social connectivity between individuals in a scene using motion and visual attention features. To do this we use mutual information and Euclidean distance to build a social similarity matrix which encodes the social connection strength between any two individuals. We develop a second contextual basis which acts by segmenting a surveillance environment into behaviourally homogeneous subregions representing high-traffic slow regions and queuing areas. We model the heterogeneous scene in homogeneous subgroups using both contextual elements. We bring the social contextual information, the scene context, the motion, and the visual attention features together to demonstrate a novel human behaviour anomaly detection process which finds outlier behaviour from a short sequence of video. The method, Nearest Neighbour Ranked Outlier Clusters (NN-RCO), is based upon modelling behaviour as a time-independent sequence of behaviour events, and can be trained in advance or set upon a single sequence. We find that in a crowded scene the application of mutual-information-based social context makes it possible to prevent self-justifying groups and to propagate anomalies through a social network, granting a greater anomaly detection capability. Scene context uniformly improves the detection of anomalies in all the datasets we test upon. We additionally demonstrate that our work is applicable to other data domains, demonstrating upon Automatic Identification Signal data in the maritime domain. Our work is capable of identifying abnormal shipping behaviour, using joint motion dependency as an analogue for social connectivity, and similarly segmenting the shipping environment into homogeneous regions.
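
    The social similarity matrix described above can be sketched by computing mutual information between discretized motion features for each pair of individuals. The example below uses speed profiles and scikit-learn's mutual_info_score; the feature choice, bin count, and synthetic tracks are illustrative assumptions, and the thesis's Euclidean-distance terms are omitted.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def social_similarity(tracks, bins=8):
    """Pairwise mutual information between discretized, equal-length speed
    profiles; one series per individual in the scene."""
    digitized = [np.digitize(t, np.histogram(t, bins=bins)[1]) for t in tracks]
    n = len(tracks)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            sim[i, j] = sim[j, i] = mutual_info_score(digitized[i], digitized[j])
    return sim

rng = np.random.default_rng(2)
base = rng.normal(1.4, 0.2, 200)   # two people walking together, one independent
tracks = [base, base + rng.normal(0.0, 0.05, 200), rng.normal(1.4, 0.2, 200)]
print(social_similarity(tracks).round(2))
```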