449 research outputs found

    A tripartite filter design for seamless pedestrian navigation using recursive 2-means clustering and Tukey update

    Mobile devices are expected to guide users seamlessly to diverse destinations, both indoors and outdoors. Position-fixing subsystems often provide poor-quality measurements, with gaps, in urban environments, and no single position-fixing technology works continuously. Many sensor-fusion variations have previously been trialled to overcome this challenge, including the particle filter, which is robust, and the Kalman filter, which is fast. However, there is a lack of context-aware, seamless systems able to use the best-suited sensors and methods in each context. A novel adaptive and modular tripartite navigation filter design is presented to enable seamless navigation. It consists of a sensor subsystem, a context-inference block and a navigation-filter block. A foot-mounted inertial measurement unit (IMU), a Global Navigation Satellite System (GNSS) receiver, and Bluetooth Low Energy (BLE) and Ultra-wideband (UWB) positioning systems were used in the evaluation implementation of this design. A novel recursive 2-means clustering method was developed to track multiple hypotheses when there are gaps in position fixes; when the gap ends, the hypothesis closest to the new position fix is selected. Moreover, when the position-fix quality measure is not reliable, a fusion approach using a Tukey-style particle filter measurement update is introduced. Results show the successful operation of the design implementation: the Tukey update improves accuracy by 5%, and together with the clustering method it enhances system robustness.
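The Tukey-style measurement update can be illustrated with a minimal sketch (a hypothetical one-dimensional formulation; the paper's exact weighting and state model are not specified here): particle weights are multiplied by Tukey's biweight of the normalized innovation, so an outlying position fix is smoothly down-weighted instead of dominating the update.

```python
import numpy as np

def tukey_biweight(r, c=4.685):
    """Tukey's biweight: weight falls smoothly to zero for |r| >= c."""
    w = np.zeros_like(r, dtype=float)
    inside = np.abs(r) <= c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

def tukey_measurement_update(particles, weights, z, sigma=1.0, c=4.685):
    """Robust particle-filter measurement update: weights are scaled by
    Tukey's biweight of the normalized innovation rather than a Gaussian
    likelihood, so gross position-fix outliers are rejected."""
    innov = (particles - z) / sigma      # normalized innovation per particle
    w = weights * tukey_biweight(innov, c)
    s = w.sum()
    if s == 0.0:                         # every particle rejected: keep the prior
        return weights
    return w / s
```

With this weighting, a particle whose normalized innovation exceeds the cutoff `c` receives exactly zero weight, which is the sense in which the update is robust.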

    Comparison of Infrared and Visible Imagery for Object Tracking: Toward Trackers with Superior IR Performance

    The subject of this paper is visual object tracking in infrared (IR) videos. Our contribution is twofold. First, the performance behaviour of state-of-the-art trackers is investigated via a comparative study using IR-visible band video conjugates, i.e., video pairs captured observing the same scene simultaneously, to identify the IR-specific challenges. Second, we propose a novel ensemble-based tracking method that is tuned to IR data. The proposed algorithm sequentially constructs and maintains a dynamic ensemble of simple correlators and produces tracking decisions by switching among the ensemble correlators depending on the target appearance, in a computationally highly efficient manner. We empirically show that our algorithm significantly outperforms state-of-the-art trackers in an extensive set of experiments with IR imagery.
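The switching idea can be sketched as follows (a toy formulation using zero-normalized cross-correlation as the matching score; the paper's actual correlators and maintenance rules are not reproduced here): the ensemble member with the highest response to the current target patch drives the decision, and a new member is added when no existing correlator explains the appearance well.

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between equal-size patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def select_correlator(ensemble, patch):
    """Return (index, score) of the ensemble member matching the patch best."""
    scores = [ncc(t, patch) for t in ensemble]
    return int(np.argmax(scores)), max(scores)

def update_ensemble(ensemble, patch, thresh=0.5, max_size=5):
    """Append the patch as a new correlator when no member matches well."""
    best = select_correlator(ensemble, patch)[1] if ensemble else -1.0
    if best < thresh:
        ensemble.append(patch.copy())
        if len(ensemble) > max_size:
            ensemble.pop(0)              # drop the oldest member
    return ensemble
```

Because only one correlator is evaluated per decision after selection, the per-frame cost stays low, which is the efficiency property the abstract emphasises.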

    Object detection, recognition and classification using computer vision and artificial intelligence approaches

    Object detection and recognition have been used extensively in recent years to solve numerous challenges in different fields. Due to the vital roles they play, object detection and recognition have enabled quantum leaps in many industries by helping to overcome serious challenges and obstacles. For example, worldwide security concerns have drawn attention to, and stimulated the use of, highly intelligent computer vision technology to provide security in different environments and diverse terrains. In addition, some wildlife species are at present exposed to danger and extinction worldwide, so early detection and recognition of potential threats to wildlife have become essential and timely. The extent to which computer vision and artificial intelligence can make a seemingly insecure world more secure has been widely accepted. Such technologies are used in monitoring, tracking, organising and analysing objects in a scene, and for countless other purposes. [Continues.]

    Statistical Inference for Spatiotemporal Partially Observed Markov Processes via the R Package spatPomp

    We consider inference for a class of nonlinear stochastic processes with latent dynamic variables and spatial structure. The spatial structure takes the form of a finite collection of spatial units that are dynamically coupled. We assume that the latent processes have a Markovian structure and that unit-specific noisy measurements are made. A model of this form is called a spatiotemporal partially observed Markov process (SpatPOMP). The R package spatPomp provides an environment for implementing SpatPOMP models, analyzing data, and developing new inference approaches. We describe the spatPomp implementations of some methods with scaling properties suited to SpatPOMP models. We demonstrate the package on a simple Gaussian system and on a nontrivial epidemiological model for measles transmission within and between cities. We show how to construct user-specified SpatPOMP models within spatPomp.

    Simulation-based Inference for Partially Observed Markov Process Models with Spatial Coupling

    Statistical inference for nonlinear and non-Gaussian dynamic models of moderate and high dimensions is an open research area. Such models may require simulation-based methodology when linearization and Gaussian approximations are not appropriate. The particle filter has allowed likelihood-based inference for such problems when the dimension of the problem is small, but it degrades in performance as the dimension of the model increases. This is due to the exponential growth in the volumes to be represented by Monte Carlo simulations as dimension grows. In epidemiology, this curse-of-dimensionality problem occurs when we jointly model the epidemiological dynamics in a group of neighboring towns that are coupled via immigration or travel. In this dissertation, I present two innovations that make methodological and practical progress in data analysis for nonlinear and non-Gaussian dynamic models, with coupled disease models as the primary problem of interest. All work was done jointly with my co-advisers Dr. Ionides and Dr. King, as well as Dr. Joonha Park and Allister Ho. The first innovation is a group of simulation-based methods that take advantage of localization (the idea that dependence between sufficiently distant spatial units in a spatially coupled model is negligible) to make approximations enabling scalable likelihood estimation. I show theoretical results for the methods and examples of their use on three different models, including a coupled measles model in England. The second innovation is the open-source R package spatPomp. This package builds on the strengths of the pomp package for model development and testing, while adding new components that allow the implementation of new methods tailored for moderate- and high-dimensional problems. Various algorithms and utility functions are implemented, and the package is available on the Comprehensive R Archive Network (CRAN) repository of packages.
    PhD, Statistics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/169706/1/kasfaw_1.pd
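A minimal sketch of how localization enables scalable filtering (a generic block particle filter measurement update with independent Gaussian measurements per unit, not the dissertation's exact algorithms): each spatial unit is weighted and resampled independently, which sidesteps the weight degeneracy of a joint update over all units.

```python
import numpy as np

def block_particle_filter_step(X, z, sigma=1.0, rng=None):
    """One block particle filter measurement update.

    X : (J, U) array of J particles over U coupled spatial units.
    Each unit is weighted and resampled independently (localization),
    assuming dependence between distant units is negligible.
    """
    rng = np.random.default_rng() if rng is None else rng
    J, U = X.shape
    for u in range(U):
        logw = -0.5 * ((X[:, u] - z[u]) / sigma) ** 2  # local Gaussian log-weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(J, size=J, p=w)               # per-unit resampling
        X[:, u] = X[idx, u]
    return X
```

The price of per-unit resampling is a bias from breaking cross-unit particle couplings, which is exactly the approximation that localization arguments justify.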

    A Novel Approach to Complex Human Activity Recognition

    Human activity recognition is a technology that offers automatic recognition of what a person is doing with respect to body motion and function. The main goal is to recognize a person's activity using different technologies such as cameras, motion sensors, location sensors, and time. Human activity recognition is important in many areas, such as pervasive computing, artificial intelligence, human-computer interaction, health care, health outcomes, rehabilitation engineering, occupational science, and the social sciences. There are numerous ubiquitous and pervasive computing systems in which users' activities play an important role. Human activity carries a lot of information about the context and helps systems achieve context-awareness. In the rehabilitation area, it helps with functional diagnosis and assessing health outcomes. Human activity recognition is an important indicator of participation, quality of life and lifestyle. There are two classes of human activities based on body motion and function. The first class, simple human activity, involves human body motion and posture, such as walking, running, and sitting. The second class, complex human activity, includes function along with simple human activity, such as cooking, reading, and watching TV. Human activity recognition is an interdisciplinary research area that has been active for more than a decade. Substantial research has been conducted to recognize human activities, but many major issues still need to be addressed. Addressing these issues would significantly improve applications of human activity recognition in different areas. Considerable research has been conducted on simple human activity recognition, whereas little research has been carried out on complex human activity recognition. However, there are many key aspects (recognition accuracy, computational cost, energy consumption, mobility) that need to be addressed in both areas to improve their viability. This dissertation aims to address these key aspects in both areas of human activity recognition and ultimately focuses on recognition of complex activity. It also addresses indoor and outdoor localization, an important parameter, along with time, in complex activity recognition. This work studies accelerometer sensor data to recognize simple human activity, and uses time, location and simple activity to recognize complex activity.
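The fusion of time, location and simple activity into a complex-activity label can be illustrated with a toy rule-based sketch (the rules, labels and location names below are invented for illustration; the dissertation infers such mappings from sensor data rather than hand-coding them):

```python
def infer_complex_activity(simple_activity, location, hour):
    """Toy fusion of simple activity, location and time of day into a
    complex-activity label. Illustrative rules only."""
    if location == "kitchen" and simple_activity == "standing":
        return "cooking"
    if location == "living_room" and simple_activity == "sitting":
        # Time of day disambiguates activities sharing posture and place.
        return "watching_tv" if 18 <= hour <= 23 else "reading"
    return "unknown"
```

The point of the sketch is that the same simple activity (sitting) maps to different complex activities depending on the additional context signals.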

    Device-free indoor localisation with non-wireless sensing techniques : a thesis by publications presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Electronics and Computer Engineering, Massey University, Albany, New Zealand

    Global Navigation Satellite Systems provide accurate and reliable outdoor positioning to support a large number of applications across many sectors. Unfortunately, such systems do not operate reliably inside buildings due to the signal degradation caused by the absence of a clear line of sight to the satellites. The past two decades have therefore seen intensive research into the development of Indoor Positioning Systems (IPS). While considerable progress has been made in the indoor localisation discipline, there is still no widely adopted solution. The proliferation of Internet of Things (IoT) devices within the modern built environment provides an opportunity to localise human subjects by utilising such ubiquitous networked devices. This thesis presents the development, implementation and evaluation of several passive indoor positioning systems using ambient Visible Light Positioning (VLP), capacitive flooring, and thermopile sensors (low-resolution thermal cameras). These systems position the human subject in a device-free manner (i.e., the subject is not required to be instrumented). The developed systems improve upon state-of-the-art solutions by offering superior position accuracy while also using more robust and generalised test setups. The developed passive VLP system is one of the first reported solutions to make use of ambient light to position a moving human subject. The capacitive-floor-based system improves upon the accuracy of existing flooring solutions and demonstrates the potential for automated fall detection. The system also requires very little calibration, i.e., variations of the environment or subject have very little impact upon it. The thermopile positioning system is also shown to be robust to changes in the environment and subjects. Improvements are made over the current literature by testing across multiple environments and subjects while using a robust ground-truth system. Finally, advanced machine learning methods were implemented and benchmarked against a thermopile dataset, which has been made available for other researchers to use.

    A Marine Growth Detection System for Underwater Gliders

    Marine growth has been observed to cause a drop in the horizontal and vertical velocities of underwater gliders, making them unresponsive and in need of immediate recovery. Currently, no strategies exist to correctly identify the onset of marine growth on gliders, and only limited data sets of biofouled hulls exist. Here, a field test was conducted to first investigate the impact of marine growth on the dynamics and power consumption of underwater gliders and then design an anomaly detection system for high levels of biofouling. A Slocum glider was first deployed for eight days with drag stimulators to imitate severe biofouling; the vehicle was then redeployed with no additions to the hull for a further 20 days. The mimicked biofouling caused a speed reduction due to a significant increase in drag. Additionally, the lower speed caused the steady-state flight stage to last longer and the rudder to become less responsive; hence, marine growth results in a shortened deployment duration through an increase in power consumption. As actual biofouling due to P. pollicipes occurred during the second deployment, it was possible to develop and test a system that successfully detects and identifies high levels of marine growth on the glider, blending model- and data-based solutions using steady-state flight data. The system will greatly help pilots replan missions to safely recover the vehicle if significant biofouling is detected.
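The detection idea can be sketched as a residual test between a flight-model speed prediction and the observed steady-state speed (the threshold and persistence values below are illustrative assumptions, not the paper's calibrated system):

```python
import numpy as np

def detect_biofouling(v_obs, v_model, rel_thresh=0.2, min_run=5):
    """Flag a sustained speed deficit relative to a flight-model prediction.

    Returns True when observed speed falls more than rel_thresh below the
    modelled speed for at least min_run consecutive samples; the run
    requirement avoids triggering on isolated noisy dives.
    """
    deficit = (np.asarray(v_model) - np.asarray(v_obs)) / np.asarray(v_model)
    run = 0
    for d in deficit > rel_thresh:
        run = run + 1 if d else 0
        if run >= min_run:
            return True
    return False
```

Requiring a consecutive run of deficit samples is one simple way to blend the model prediction with the data before raising a recovery alert.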

    Robust and real-time hand detection and tracking in monocular video

    In recent years, personal computing devices such as laptops, tablets and smartphones have become ubiquitous. Moreover, intelligent sensors are being integrated into many consumer devices such as eyeglasses, wristwatches and smart televisions. With the advent of touchscreen technology, a new human-computer interaction (HCI) paradigm arose that allows users to interface with their device in an intuitive manner. Using simple gestures, such as swipe or pinch movements, a touchscreen can be used to directly interact with a virtual environment. Nevertheless, touchscreens still form a physical barrier between the virtual interface and the real world. An increasingly popular field of research that tries to overcome this limitation is video-based gesture recognition, hand detection and hand tracking. Gesture-based interaction allows the user to directly interact with the computer in a natural manner by exploring a virtual reality using nothing but his own body language. In this dissertation, we investigate how robust hand detection and tracking can be accomplished under real-time constraints. In the context of human-computer interaction, real-time is defined as both low latency and low complexity, such that a complete video frame can be processed before the next one becomes available. Furthermore, for practical applications, the algorithms should be robust to illumination changes, camera motion, and cluttered backgrounds in the scene. Finally, the system should be able to initialize automatically, and to detect and recover from tracking failure. We study a wide variety of existing algorithms, and propose significant improvements and novel methods to build a complete detection and tracking system that meets these requirements. Hand detection, hand tracking and hand segmentation are related yet technically different challenges.
Whereas detection deals with finding an object in a static image, tracking considers temporal information and is used to track the position of an object over time, throughout a video sequence. Hand segmentation is the task of estimating the hand contour, thereby separating the object from its background. Detection of hands in individual video frames allows us to automatically initialize our tracking algorithm, and to detect and recover from tracking failure. Human hands are highly articulated objects, consisting of finger parts that are connected with joints. As a result, the appearance of a hand can vary greatly, depending on the assumed hand pose. Traditional detection algorithms often assume that the appearance of the object of interest can be described using a rigid model and therefore cannot be used to robustly detect human hands. Therefore, we developed an algorithm that detects hands by exploiting their articulated nature. Instead of resorting to a template-based approach, we probabilistically model the spatial relations between different hand parts and the centroid of the hand. Detecting hand parts, such as fingertips, is much easier than detecting a complete hand. Based on our model of the spatial configuration of hand parts, the detected parts can be used to obtain an estimate of the complete hand's position. To comply with the real-time constraints, we developed techniques to speed up the process by efficiently discarding unimportant information in the image. Experimental results show that our method is competitive with the state-of-the-art in object detection while providing a reduction in computational complexity by a factor of 1,000. Furthermore, we showed that our algorithm can also be used to detect other articulated objects such as persons or animals, and is therefore not restricted to the task of hand detection. Once a hand has been detected, a tracking algorithm can be used to continuously track its position in time.
We developed a probabilistic tracking method that can cope with uncertainty caused by image noise, incorrect detections, changing illumination, and camera motion. Furthermore, our tracking system automatically determines the number of hands in the scene, and can cope with hands entering or leaving the video canvas. We introduced several novel techniques that greatly increase tracking robustness, and that can also be applied in domains other than hand tracking. To achieve real-time processing, we investigated several techniques to reduce the search space of the problem, and deliberately employ methods that are easily parallelized on modern hardware. Experimental results indicate that our methods outperform the state-of-the-art in hand tracking, while providing a much lower computational complexity. One of the methods used by our probabilistic tracking algorithm is optical flow estimation. Optical flow is defined as a 2D vector field describing the apparent velocities of objects in a 3D scene, projected onto the image plane. Optical flow is known to be used by many insects and birds to visually track objects and to estimate their ego-motion. However, most optical flow estimation methods described in the literature are either too slow to be used in real-time applications, or are not robust to illumination changes and fast motion. We therefore developed an optical flow algorithm that can cope with large displacements, and that is illumination independent. Furthermore, we introduce a regularization technique that ensures a smooth flow field. This regularization scheme effectively reduces the number of noisy and incorrect flow-vector estimates, while maintaining the ability to handle motion discontinuities caused by object boundaries in the scene. The above methods are combined into a hand tracking framework which can be used for interactive applications in unconstrained environments.
To demonstrate the possibilities of gesture-based human-computer interaction, we developed a new type of computer display. This display is completely transparent, allowing multiple users to perform collaborative tasks while maintaining eye contact. Furthermore, our display produces an image that seems to float in thin air, such that users can touch the virtual image with their hands. This floating-image display has been showcased at several national and international events and trade shows. The research described in this dissertation has been evaluated thoroughly by comparing detection and tracking results with those obtained by state-of-the-art algorithms. These comparisons show that the proposed methods outperform most algorithms in terms of accuracy, while achieving a much lower computational complexity, resulting in a real-time implementation. Results are discussed in depth at the end of each chapter. This research further resulted in an international journal publication; a second journal paper that has been submitted and is under review at the time of writing this dissertation; nine international conference publications; a national conference publication; a commercial license agreement concerning the research results; two hardware prototypes of a new type of computer display; and a software demonstrator.
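The part-based hand detection described above can be sketched as a star model with fixed learned offsets (a deliberate simplification: the dissertation models the spatial relations between parts and centroid probabilistically, and the part names and offsets below are hypothetical): each detected part casts a vote for the hand centroid through its mean offset, and the votes are averaged.

```python
import numpy as np

def vote_centroid(parts, offsets):
    """Star-model sketch: every detected hand part votes for the hand
    centroid through its learned mean offset; the estimate averages the
    votes. parts is a list of (name, (x, y)) detections, offsets maps a
    part name to its mean part-to-centroid offset."""
    votes = [np.asarray(pos, float) + np.asarray(offsets[name], float)
             for name, pos in parts]
    return np.mean(votes, axis=0)
```

This captures why detecting easy parts such as fingertips suffices: once any subset of parts is found, their votes still agree on a single centroid estimate.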