
    Image Deblurring for Navigation Systems of Vision Impaired People Using Sensor Fusion Data

    Image deblurring is a key component of vision-based indoor/outdoor navigation systems, as blurring is one of the main causes of poor image quality. When poor-quality images are used for analysis, navigation errors are likely to be generated. In navigation systems, blurring is caused mainly by camera movement, since the camera is continuously moved by the body. This paper proposes a deblurring methodology that takes advantage of the fact that most smartphones are equipped with a 3-axis accelerometer and gyroscope. It uses accelerometer and gyroscope data to derive a motion vector describing the motion of the smartphone during the image-capture period. A heuristic method, particle swarm optimization, is developed to determine the optimal motion vector and deblur the captured image by reversing the effect of motion. Experimental results indicate that deblurring can be performed successfully using the optimal motion vector and that the deblurred images are readily usable for object and path identification in vision-based navigation systems, especially for indoor/outdoor navigation by blind and vision-impaired users. The performance of the proposed method is also compared with commonly used deblurring methods, achieving better results in terms of image quality. The experiments aim to identify image-quality issues, including low-light conditions, low-quality images due to movement of the capture device, and static and moving obstacles in front of the user, in both indoor and outdoor environments. From this information, image-processing techniques will be identified to assist in the object and path-edge detection necessary to create a guidance system for those with low vision.
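The estimation loop the abstract describes (propose a motion vector, synthesize the corresponding blur, score the candidate with particle swarm optimization) can be sketched with a 1-D toy. Everything below is an illustrative assumption rather than the paper's implementation: the image is reduced to a single scanline, the motion vector to a box-kernel length, and the fitness compares a re-blur of a known sharp reference, whereas the paper scores candidates by the quality of the deblurred image itself.

```python
import numpy as np

def motion_kernel(length, size=12):
    """Box blur of fractional `length`: a 1-D stand-in for a motion vector."""
    k = np.zeros(size)
    full = int(length)
    k[:full] = 1.0
    if full < size:
        k[full] = length - full          # fractional tail keeps the fitness smooth
    return k / k.sum()

def pso(fitness, lo, hi, n=20, iters=60, seed=0):
    """Minimal particle swarm optimizer maximizing `fitness` over [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)           # particle positions
    v = np.zeros(n)                      # particle velocities
    p, pf = x.copy(), np.array([fitness(xi) for xi in x])   # personal bests
    g = p[np.argmax(pf)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = 0.7 * v + 1.5 * r1 * (p - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(xi) for xi in x])
        improved = f > pf
        p[improved], pf[improved] = x[improved], f[improved]
        g = p[np.argmax(pf)]
    return g

# Synthetic scanline blurred by an unknown motion of length 5.
rng = np.random.default_rng(1)
sharp = np.repeat(rng.random(8), 16)
blurred = np.convolve(sharp, motion_kernel(5.0), mode="same")

# Fitness: how well a candidate length re-creates the observed blur.
fit = lambda L: -np.sum((np.convolve(sharp, motion_kernel(L), mode="same") - blurred) ** 2)
estimated = pso(fit, 1.0, 10.0)
```

Because the fitness is smooth and unimodal in this toy, the swarm converges close to the true length of 5; in the paper the search is seeded from the IMU-derived motion vector rather than starting blind.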

    OMap: An assistive solution for identifying and localizing objects in a semi-structured environment

    A system capable of detecting and localizing objects of interest in a semi-structured environment would enhance the quality of life of people who are blind or visually impaired. Towards building such a system, this thesis presents a personalized real-time system called O'Map that finds misplaced or moved personal items and localizes them with respect to known landmarks. First, we adopted a participatory design approach to identify users' needs and the required functionalities of the system. Second, we used concepts from systems thinking and design thinking to develop a real-time object recognition engine optimized to run on small-form-factor devices. The engine finds robust correspondences between the query image and item templates using a k-d tree over invariant feature descriptors, with two nearest neighbors and a ratio test. Quantitative evaluation demonstrates that O'Map identifies objects of interest with an average F-measure of 0.9650.
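The matching step the abstract names (k-d tree, two nearest neighbours, ratio test) can be sketched as follows. The descriptors, the 0.75 threshold, and the toy data are assumptions for illustration, not values from the thesis:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(query_desc, template_desc, ratio=0.75):
    """Two-nearest-neighbour matching with a ratio test.

    Rows are feature descriptors; the template set is indexed with a
    k-d tree.  A match is kept only when the nearest template descriptor
    is clearly closer than the second nearest (a distinctive match).
    """
    tree = cKDTree(template_desc)
    dist, idx = tree.query(query_desc, k=2)      # two nearest neighbours each
    return [(qi, int(i[0]))
            for qi, (d, i) in enumerate(zip(dist, idx))
            if d[0] < ratio * d[1]]

# Toy demo: two query descriptors are noisy copies of template rows 3 and 11.
rng = np.random.default_rng(0)
template = rng.random((20, 8))
query = np.vstack([template[3] + 0.001, template[11] + 0.001])
matches = match_descriptors(query, template)
```

The ratio test discards ambiguous correspondences cheaply, which matters on the low-powered devices the thesis targets, since it avoids a more expensive geometric verification for most false candidates.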

    An Orientation & Mobility Aid for People with Visual Impairments

    Orientation & Mobility (O&M) comprises a set of techniques that help people with visual impairments find their way in everyday life. Even so, they require extensive and very costly one-on-one training with O&M instructors to integrate these techniques into their daily routines. While some of these techniques make use of assistive technology, such as the long white cane, points-of-interest databases, or a compass-based orientation system, an inconspicuous communication gap exists between available aids and navigation systems. In recent years, mobile computing systems, smartphones in particular, have become ubiquitous. This opens up the possibility for modern computer-vision techniques to support human sight with everyday problems caused by non-accessible design. Particular care must be taken, however, not to interfere with users' specific personal competencies and trained behaviours, or in the worst case even contradict O&M techniques. In this dissertation we identify a spatial and a systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long cane, can only help perceive the environment within a limited range, while navigation information is kept very coarse. The gap also arises systemically between these two components: the long cane does not know the route, while the navigation system does not consider nearby obstacles or O&M techniques.
We therefore propose several approaches to closing this gap, improving the connection and communication between orientation aids and navigation information, and we approach the problem from both directions. To provide useful, relevant information, we first identify the most important requirements for assistive systems and establish several key concepts that we observe in our algorithms and prototypes. Existing assistive orientation systems are based mainly on global navigation satellite systems. We seek to improve on these by creating a routing algorithm based on guiding lines that can be adapted to, and takes into account, individual needs. The generated routes are imperceptibly longer but much safer, according to objective criteria developed together with O&M instructors. We also improve the availability of the relevant geo-referenced databases required for such needs-based routing. To this end, we create a machine-learning approach that detects zebra crossings in aerial imagery, which also works across country borders, improving on the state of the art. To maximise the benefit of vision-based mobility assistance, we create approaches modelled on O&M techniques to increase spatial awareness of the immediate surroundings. We first consider the available free space and also report possible obstacles. We further create a novel approach to detecting and precisely localising the available guiding lines, generating virtual guiding lines that bridge interruptions and provide information about the next guiding line early on. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crossings and pedestrian traffic lights, with a deep-learning approach.
To analyse whether the approaches and algorithms we created provide real added value for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our needs-based routing, with a very encouraging result. We further carry out a larger study with several components, focused on pedestrian crossings. Although our statistical evaluations show only a marginal improvement, affected by technical problems with the first prototype and too little time for participants to become accustomed to the system, we received very promising comments from almost all study participants. This shows that we have taken an important first step towards closing the identified gap and have thereby improved orientation & mobility for people with visual impairments.
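The needs-based routing idea, favouring routes along reliable guiding lines even when they are slightly longer, can be illustrated as a shortest-path search whose edge costs are scaled by per-edge penalty factors. The graph, edge kinds, and penalty values below are invented for illustration; the dissertation derives its actual criteria together with O&M instructors.

```python
import heapq

# Invented penalty factors: edges along guiding lines are preferred,
# unmarked crossings are strongly penalised.
PENALTY = {"guideline": 0.8, "sidewalk": 1.0, "unmarked_crossing": 3.0}

def needs_based_route(graph, start, goal):
    """Dijkstra over cost = length_m * PENALTY[kind].

    `graph` maps node -> list of (neighbour, length_m, kind) edges.
    Returns (total_cost, path); the cheapest path may be physically
    longer than the shortest one.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length, kind in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + length * PENALTY[kind], nxt, path + [nxt]))
    return float("inf"), []

# A 50 m shortcut over an unmarked crossing loses to a 140 m detour
# that follows guiding lines.
graph = {
    "A": [("B", 50, "unmarked_crossing"), ("C", 60, "guideline")],
    "C": [("B", 80, "guideline")],
}
cost, route = needs_based_route(graph, "A", "B")   # route == ["A", "C", "B"]
```

This mirrors the trade-off reported above: the penalised search accepts a modest increase in walking distance in exchange for a route that stays on infrastructure the traveller can follow safely.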

    On informing the creation of assistive tools in virtual reality for severely visually disabled individuals

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Virtual Reality (VR) devices have advanced so dramatically in recent years that they are now capable of fully immersing users in experiences tailored to fit a multitude of needs. This emerging technology has far-reaching potential, yet is primarily confined to the entertainment and gaming market, with limited consideration given to disabilities and accessibility. Given this gap, newer VR devices need to be evaluated for their suitability as accessibility aids, and clear standards for successful disability-oriented VR design need to be defined and promoted to encourage greater inclusivity going forward. To achieve this, a series of ophthalmology-informed tests was created and conducted with 24 participants with severe visual impairments. These tests were used as comparative benchmarks to determine the level of visual perception impaired users had while wearing a VR device, compared with natural vision. Findings suggest that, under certain conditions, VR devices can greatly enhance visual acuity when used as replacements for natural vision or typical vision aids, without any enhancement made to account for visual impairments. Following the findings and requirements elicited from participants, a prototype VR accessibility text reader and video player were developed, allowing visually disabled persons to customise and configure specialised accessibility features for individual needs. Qualitative usability testing involving 11 impaired participants, alongside interviews, fed into an iterative design process for software refinement and informed the creation of a VR accessibility framework for visual disabilities.
User tests reported an overwhelmingly positive response to the tool as a feasible reading and viewing aid, allowing persons who could not engage (or who, owing to the difficulty, refused to engage) in reading and viewing material to do so. Outcomes highlight that a VR device paired with the tested software would be an effective and affordable alternative to specialist headgear, which is often expensive and lacking in functionality and adaptability. These findings promote the use and future design of VR devices as accessibility tools and visual aids, and provide a comparative benchmark, device usability guidelines, a design framework for VR accessibility, and the first VR accessibility software for reading and viewing. Beacon Centre for the Blind & University of Wolverhampton

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data travelling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 pandemic disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interaction while expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burn-out. It is clear that we need better solutions to address these issues, and one avenue showing promise is Interpersonal Telepresence: an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We present a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To address this finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming the value tensions that naturally arise between Viewer and Streamer. As expected, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices facilitating this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but that the lenses need to be embedded in wearable systems, which might affect the viewing experience.
We thus present two quantitative studies examining the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor: wearable cameras do not need to be positioned at the viewer's natural eye level, and the streamer can place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype built on the co-design findings. Our participants preferred the prototype over simple video chat, even though it caused a somewhat heightened sense of self-consciousness. Participants indicated that they have their own preferences, even for simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.

    Determining the accuracy and repeatability of citizen-derived imagery as a source for Structure-from-Motion photogrammetry

    Globally, sea levels are rising and continue to rise at an accelerating rate. Developments built near the coast are vulnerable to coastal flooding due to the direct rise in sea level and an increase in storm severity, persistence, and frequency. As storm events become more prevalent and powerful, they will exacerbate the effects of rising sea levels and increase coastal flooding. It is therefore important for coastal managers to build and maintain a comprehensive understanding of the coast in order to predict what a future heightened sea level might bring. Building that understanding at a time when resources are limited by budget cuts is difficult, requiring cost-effective monitoring approaches. Citizen Science (CS) is a rapidly developing research method in which scientific projects utilise public input at one or more stages of the research process. CS projects can tackle scientific research that often cannot be done by scientists alone due to human, financial, time, and spatial constraints. Alongside the benefits afforded to scientific research, CS projects help build scientific understanding within the public domain. By increasing public understanding of the coastal environment, citizens become more empowered to contribute to coastal decisions. This project adopts the CS framework by engaging a community group in data-collection methods for coastal monitoring. Focus is placed on the Structure-from-Motion (SfM) photogrammetric workflow, which builds 3D models of the coastal environment using citizens and their personal standalone cameras or built-in smartphone cameras. This project aims to assess the accuracy of point clouds derived from citizen-derived imagery of a coastal environment and thus determine its potential as a data source for coastal practitioners. It also aims to gauge the response of participating members of the public to the SfM imaging procedure.
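A common way to assess the accuracy of an SfM point cloud is a cloud-to-cloud comparison against a survey-grade reference: each point in the citizen-derived cloud is scored by its distance to the nearest reference point. The brute-force sketch below is illustrative only; the project's actual reference data and tooling are not described in the abstract.

```python
import numpy as np

def cloud_to_cloud_rmse(test_cloud, reference_cloud):
    """RMS of each test point's distance to its nearest reference point.

    A brute-force O(n*m) version of the cloud-to-cloud (C2C) check often
    used to score SfM point clouds against a reference survey; fine for
    small clouds, while large ones would use a spatial index.
    """
    diff = test_cloud[:, None, :] - reference_cloud[None, :, :]
    nearest = np.linalg.norm(diff, axis=2).min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

# Toy check: a reference grid versus the same grid shifted 5 cm vertically.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])
test = reference + np.array([0.0, 0.0, 0.05])
rmse = cloud_to_cloud_rmse(test, reference)
```

Repeatability can then be checked by computing the same statistic between clouds reconstructed from different citizens' image sets of the same scene.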

    Seeing the City Digitally

    This book explores what is happening to ways of seeing urban spaces in the contemporary moment, when so many of the technologies through which cities are visualised are digital. Cities have always been pictured, in many media and for many different purposes. This edited collection explores how that picturing is changing in an era of digital visual culture. Analogue visual technologies like film cameras were understood as creating some sort of trace of the real city. Digital visual technologies, in contrast, harvest and process digital data to create images that are constantly refreshed, modified, and circulated. Each of the chapters in this volume examines a different example of how this processual visuality is reconfiguring the spatial and temporal organisation of urban life.