145 research outputs found

    COVID-19 and Visual Disability: Can’t Look and Now Don’t Touch

    This article provides a scientific explanation for the pandemic-related challenges experienced by blind and visually impaired (BVI) people, including difficulties with spatial cognition, nonvisual information access, and environmental perception. It also offers promising technical solutions to these challenges.

    Understanding the Impact of Image Quality and Distance of Objects to Object Detection Performance

    Deep learning has made great strides for object detection in images. The detection accuracy and computational cost of object detection depend on the spatial resolution of an image, which may be constrained by both camera and storage considerations. Compression is often achieved by reducing spatial resolution, amplitude resolution, or both, each of which has well-known effects on performance. Detection accuracy also depends on the distance of the object of interest from the camera. Our work examines the impact of spatial and amplitude resolution, as well as object distance, on object detection accuracy and computational cost. We develop a resolution-adaptive variant of YOLOv5 (RA-YOLO), which varies the number of scales in the feature pyramid and detection head based on the spatial resolution of the input image. To train and evaluate this new method, we created a dataset of images with diverse spatial and amplitude resolutions by combining images from the TJU and Eurocity datasets and generating different resolutions by applying spatial resizing and compression. We first show that RA-YOLO achieves a good trade-off between detection accuracy and inference time over a large range of spatial resolutions. We then evaluate the impact of spatial and amplitude resolutions on object detection accuracy using the proposed RA-YOLO model. We demonstrate that the optimal spatial resolution leading to the highest detection accuracy depends on the 'tolerated' image size. We further assess the impact of an object's distance from the camera on detection accuracy and show that higher spatial resolution enables a greater detection range. These results provide important guidelines for choosing image spatial resolution and compression settings predicated on available bandwidth, storage, desired inference time, and/or desired detection range in practical applications.
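    The resolution-adaptive idea described above can be illustrated with a minimal sketch: select how many feature-pyramid scales (and matching detection heads) to activate from the input image's spatial resolution. The threshold values and function name here are hypothetical illustrations, not the paper's actual configuration.

    ```python
    def num_pyramid_levels(height: int, width: int,
                           thresholds: tuple = (320, 640, 1280)) -> int:
        """Choose the number of feature-pyramid scales to run based on the
        shorter side of the input image. Larger images activate more scales,
        trading inference time for accuracy on small/distant objects.
        Thresholds are illustrative placeholders."""
        short_side = min(height, width)
        levels = 1  # always run at least one detection scale
        for t in thresholds:
            if short_side >= t:
                levels += 1
        return levels
    ```

    A small low-resolution image would then run a single cheap scale, while a full 4K frame activates all scales, mirroring the accuracy/inference-time trade-off the abstract reports.
    
    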

    VisPercep: A Vision-Language Approach to Enhance Visual Perception for People with Blindness and Low Vision

    People with blindness and low vision (pBLV) encounter substantial challenges when it comes to comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to vision loss, pBLV have difficulty accessing and identifying potential tripping hazards on their own. In this paper, we present a pioneering approach that leverages a large vision-language model to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environment and providing warnings about potential risks. Our method begins by leveraging a large image tagging model (i.e., Recognize Anything (RAM)) to identify all common objects present in the captured images. The recognition results and user query are then integrated into a prompt tailored specifically for pBLV using prompt engineering. By combining the prompt and input image, a large vision-language model (i.e., InstructBLIP) generates detailed and comprehensive descriptions of the environment and identifies potential risks by analyzing the environmental objects and scenes relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method is able to recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.
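    The prompt-engineering step in this pipeline, combining image tags from the tagging model with the user's query, can be sketched as follows. The template wording is an illustrative assumption, not the authors' actual prompt.

    ```python
    def build_prompt(tags: list, user_query: str) -> str:
        """Assemble a vision-language-model prompt from the tags produced by
        an image-tagging model and the end-user's question. The phrasing below
        is a hypothetical template for illustration only."""
        tag_list = ", ".join(tags)
        return (
            f"The image contains: {tag_list}. "
            f"For a user with blindness or low vision, answer: {user_query} "
            "Describe the surroundings in detail and warn about any tripping hazards."
        )
    ```

    The assembled string would then be passed, together with the input image, to the vision-language model; keeping tag injection separate from the query makes the template easy to adjust for different user needs.
    
    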

    UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low Vision

    Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end-users with blindness and low vision. Given a query image taken by an end-user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are utilized in downstream tasks that employ a weighted-average method to estimate the end-user's location and a perspective-n-point (PnP) algorithm to estimate the end-user's direction. Additionally, the system implements Dijkstra's algorithm to calculate a shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged through public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 meter without knowledge of the camera's intrinsic parameters, such as focal length.
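    The weighted-average location estimate in the downstream step can be sketched minimally: the retrieved reference images' geolocations are averaged, weighted by their retrieval similarity scores. The function name and the use of raw similarity scores as weights are illustrative assumptions; the actual pipeline's weighting scheme may differ.

    ```python
    def estimate_location(geolocations: list, similarities: list) -> tuple:
        """Estimate the end-user's 2D map position as the similarity-weighted
        average of the geolocations (x, y) of the top VPR-retrieved reference
        images. Illustrative sketch, not the authors' exact method."""
        total = sum(similarities)
        x = sum(w * p[0] for p, w in zip(geolocations, similarities)) / total
        y = sum(w * p[1] for p, w in zip(geolocations, similarities)) / total
        return (x, y)
    ```

    With equal similarities this reduces to the centroid of the retrieved locations; a strongly matching reference image pulls the estimate toward its own geolocation.
    
    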

    The Intersection between Ocular and Manual Motor Control: Eye–Hand Coordination in Acquired Brain Injury

    Acute and chronic disease processes that lead to cerebral injury can often be clinically challenging diagnostically, prognostically, and therapeutically. Neurodegenerative processes are one such elusive diagnostic group, given their often diffuse and indolent nature, creating difficulties in pinpointing specific structural abnormalities that relate to functional limitations. A number of studies in recent years have focused on eye–hand coordination (EHC) in the setting of acquired brain injury (ABI), highlighting the important set of interconnected functions of the eye and hand and their relevance in neurological conditions. These experiments, which have concentrated on focal lesion-based models, have significantly improved our understanding of neurophysiology and underscored the sensitivity of biomarkers in acute and chronic neurological disease processes, especially when such biomarkers are combined synergistically. To better understand EHC and its connection with ABI, there is a need to clarify its definition and to delineate its neuroanatomical and computational underpinnings. Successful EHC relies on the complex feedback- and prediction-mediated relationship between the visual, ocular motor, and manual motor systems and takes advantage of finely orchestrated synergies between these systems in both the spatial and temporal domains. Interactions of this type are representative of functional sensorimotor control, and their disruption constitutes one of the most frequent deficits secondary to brain injury. The present review describes the visually mediated planning and control of eye movements, hand movements, and their coordination, with a particular focus on deficits that occur following neurovascular, neurotraumatic, and neurodegenerative conditions. Following this review, we also discuss potential future research directions, highlighting objective EHC as a sensitive biomarker complement within acute and chronic neurological disease processes.

    Art therapy for Parkinson's disease.

    Objective: To explore the potential rehabilitative effect of art therapy and its underlying mechanisms in Parkinson's disease (PD). Methods: Observational study of eighteen patients with PD, followed in a prospective, open-label, exploratory trial. Before and after twenty sessions of art therapy, PD patients were assessed with the UPDRS, Pegboard Test, Timed Up and Go Test (TUG), Beck Depression Inventory (BDI), Modified Fatigue Impact Scale, PROMIS-Self-Efficacy, Montreal Cognitive Assessment, Rey-Osterrieth Complex Figure Test (RCFT), Benton Visual Recognition Test (BVRT), Navon Test, Visual Search, and Stop Signal Task. Eye movements were recorded during the BVRT. Resting-state functional MRI (rs-fMRI) was also performed to assess functional connectivity (FC) changes within the dorsal attention (DAN), executive control (ECN), fronto-occipital (FOC), salience (SAL), and primary and secondary visual (V1, V2) brain networks. We also tested fourteen age-matched healthy controls at baseline. Results: At baseline, PD patients showed abnormal visual-cognitive functions and eye movements. Analyses of rs-fMRI showed increased functional connectivity within DAN and ECN in patients compared to controls. Following art therapy, performance improved on the Navon test, eye tracking, and UPDRS scores. Rs-fMRI analysis revealed significantly increased FC levels in brain regions within the V1 and V2 networks. Interpretation: Art therapy improves overall visual-cognitive skills and visual exploration strategies as well as general motor function in patients with PD. The changes in brain connectivity highlight a functional reorganization of visual networks.

    Art Therapy as a Comprehensive Complementary Treatment for Parkinson’s Disease

    Introduction: Parkinson’s disease (PD) is the second most prevalent neurodegenerative disease. Complementary and alternative therapies are increasingly utilized to address its complex multisystem symptomatology. Art therapy involves motoric action and visuospatial processing while promoting broad biopsychosocial wellness. The process involves hedonic absorption, which provides an escape from otherwise persistent and cumulative PD symptoms, refreshing internal resources. It involves the expression in nonverbal form of multilayered psychological and somatic phenomena; once these are externalized in a symbolic arts medium, they can be explored, understood, integrated, and reorganized through verbal dialogue, effecting relief and positive change. Methods: 42 participants with mild to moderate PD were treated with 20 sessions of group art therapy. They were assessed before and after therapy with a novel arts-based instrument developed to match the treatment modality for maximum sensitivity. The House-Tree-Person PD Scale (HTP-PDS) assesses motoric and visuospatial processing (core PD symptoms) as well as cognition (thought and logic), affect/mood, motivation, self (including body-image, self-image, and self-efficacy), interpersonal functioning, creativity, and overall level of functioning. It was hypothesized that art therapy would ameliorate core PD symptoms and that this would correlate with improvements in all other variables. Results: HTP-PDS scores across all symptoms and variables improved significantly, though causality among variables was indeterminate. Discussion: Art therapy is a clinically efficacious complementary treatment for PD. Further research is warranted to disentangle causal pathways among the aforementioned variables and, additionally, to isolate and examine the multiple, discrete healing mechanisms believed to operate simultaneously in art therapy.

    Dark sectors 2016 Workshop: community report

    This report, based on the Dark Sectors workshop at SLAC in April 2016, summarizes the scientific importance of searches for dark-sector dark matter and forces at masses beneath the weak scale, the status of this broad international field, the important milestones motivating future exploration, and promising experimental opportunities to reach these milestones over the next 5–10 years.