5 research outputs found

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Researchers have recently begun to address this issue by introducing the concept of the natural user interface, which is based mainly on gestural interaction. Many issues remain open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interaction between humans and smart environments and proposes a novel framework for the high-level organisation of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, reduce the number of gestures in taxonomies, and improve usability. To validate the framework, a proof-of-concept prototype has been developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests yielded high scores. The context information has been investigated further by tackling the problem of user status, understood here as human activity; a technique based on an innovative application of electromyography is proposed, and tests show that it achieves good activity recognition accuracy. Context is also treated as system status. In ubiquitous computing, a system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms; it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
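    The functional-gesture idea above lends itself to a simple illustration. Below is a minimal sketch (my own, with hypothetical names and devices; not the thesis's implementation) of how context can disambiguate a small gesture vocabulary, so that one physical gesture resolves to different commands depending on the device currently selected by a deictic gesture:

```python
# Minimal sketch of context-dependent functional gestures: the same
# physical gesture resolves to different commands depending on context.
# All names and devices are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Context:
    location: str  # e.g. "living_room"
    target: str    # device selected by a deictic (pointing) gesture

# (gesture, target device) -> function. "swipe_up" needs no per-device
# variant, which keeps the gesture taxonomy small.
FUNCTION_TABLE = {
    ("swipe_up", "lamp"): "increase_brightness",
    ("swipe_up", "tv"): "increase_volume",
    ("circle", "lamp"): "toggle_power",
    ("circle", "tv"): "toggle_power",
}

def resolve(gesture: str, ctx: Context) -> Optional[str]:
    """Map a recognised gesture to a concrete command using context."""
    return FUNCTION_TABLE.get((gesture, ctx.target))

print(resolve("swipe_up", Context("living_room", "tv")))  # increase_volume
```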

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, together with the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and of associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities, in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include:
    - UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging;
    - UAV sensor applications: spatial ecology; pest detection; reefs; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use;
    - Wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques;
    - UAV-based change detection.

    Condensing a priori data for recognition based augmented reality

    My research proposes novel methods to reduce the cardinality of the a priori data used in recognition-based augmented reality, whilst retaining the distinctive and persistent features in the sets. This work will help reduce latency and increase accuracy in recognition-based pose estimation systems, thus improving the user experience of augmented reality applications.
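    As an illustration of what condensing an a priori feature set can mean in practice, the sketch below (my own, assuming OpenCV ORB features; it is not the method proposed in this research) keeps only descriptors that recur across several reference views (persistent) and are far from their neighbours in descriptor space (distinctive):

```python
# Hypothetical sketch of condensing an a priori feature set: retain only
# descriptors that reappear across reference views (persistence) and are
# well separated from other retained descriptors (distinctiveness).
import cv2
import numpy as np

def condense(reference_images, persistence_threshold=2, keep_ratio=0.25):
    """reference_images: list of grayscale numpy arrays of the target."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Descriptors from the first view form the candidate set.
    _, base = orb.detectAndCompute(reference_images[0], None)
    counts = np.ones(len(base), dtype=int)  # persistence counts

    # Count how many other views each candidate descriptor matches into.
    for img in reference_images[1:]:
        _, des = orb.detectAndCompute(img, None)
        if des is None:
            continue
        for m in matcher.match(base, des):
            counts[m.queryIdx] += 1

    persistent = base[counts >= persistence_threshold]

    # Distinctiveness: distance to the nearest other persistent
    # descriptor; keep the most isolated fraction of the set.
    dists = []
    for i, d in enumerate(persistent):
        others = np.delete(persistent, i, axis=0)
        m = matcher.match(d[None, :], others)
        dists.append(m[0].distance if m else 0.0)
    order = np.argsort(dists)[::-1]  # most distinctive first
    keep = order[: max(1, int(len(order) * keep_ratio))]
    return persistent[keep]
```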

    Robotic 3D Reconstruction Utilising Structure from Motion

    Sensing the real world is a well-established and continual problem in the field of robotics. Investigations into autonomous aerial and underwater vehicles have extended this challenge into sensing, mapping and localising in three dimensions. This thesis seeks to understand and tackle the challenges of recovering 3D information from an environment using vision alone. There is a well-established literature on the principles of doing this, and some impressive demonstrations; but this thesis explores the practicality of doing vision-based 3D reconstruction using multiple mobile robotic platforms, the emphasis being on producing accurate 3D models. Typically, robotic platforms such as UAVs have a single on-board camera, restricting which methods of visual 3D recovery can be employed. This thesis specifically explores Structure from Motion, a monocular 3D reconstruction technique which produces detailed and accurate, although slow to calculate, 3D reconstructions. It examines how well proof-of-concept demonstrations translate onto the kinds of robotic systems that are commonly deployed in the real world, where local processing is limited and network links have restricted capacity. Producing accurate 3D models requires high-resolution imagery, and the difficulties of working with such imagery on remote robotic platforms are explored in some detail.
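    To make the technique concrete: the core two-view step of Structure from Motion can be sketched as below (a generic textbook pipeline using OpenCV, not the system built in the thesis; the intrinsics matrix K is assumed known from calibration). Full systems chain such pairwise estimates across many views and refine them with bundle adjustment; the point about high-resolution imagery shows up here because triangulation accuracy is bounded by how precisely features can be localised and matched.

```python
# Minimal two-view Structure-from-Motion sketch (generic pipeline):
# match features, estimate the essential matrix, recover relative pose,
# and triangulate a sparse point cloud.
import cv2
import numpy as np

def two_view_sfm(img1, img2, K):
    """Recover sparse 3D structure (up to scale) from two calibrated views."""
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC rejects mismatched feature pairs.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Projection matrices: first camera at the origin, second at (R, t).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 points, up to scale
```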

    Object Tracking Method Using PTAMM and Estimated Foreground Regions
