
    Autonomous wheelchair with a smart driving mode and a Wi-Fi positioning system

    Wheelchairs are an important aid that enhances the mobility of people with several types of disabilities, and there has consequently been considerable research and development on wheelchairs to meet the needs of the disabled. From the early manual wheelchairs to their more recent electric-powered counterparts, advancements have focused on improving autonomy in mobility. In parallel, advances in Internet technologies have given rise to the Internet of Things (IoT), a promising area that has been studied to enhance the independent operation of electric wheelchairs by enabling autonomous navigation and obstacle avoidance. This dissertation briefly describes the design of an autonomous wheelchair of the IPL/IT (Instituto Politécnico de Leiria/Instituto de Telecomunicações) with smart driving features for persons with visual impairments. The objective is to improve the prototype of an intelligent wheelchair. The first prototype of the wheelchair was controlled by voice, ocular movements, and GPS (Global Positioning System). Furthermore, the IPL/IT wheelchair acquired a remote-control feature, which could prove useful for persons with low levels of visual impairment. This tele-assistance mode will be helpful to the family of the wheelchair user or, simply, to a health care assistant. Indoor and outdoor positioning systems, with printed directional Wi-Fi antennas, have been deployed to enable precise location of the wheelchair. The underlying framework for the wheelchair system is the IPL/IT low-cost autonomous wheelchair prototype, which is based on IoT technology for improved affordability.
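    The abstract does not detail the positioning algorithm, but a common approach for Wi-Fi positioning of this kind combines a log-distance path-loss model (RSSI to range) with least-squares trilateration over several anchor antennas. The sketch below is a minimal, hypothetical illustration; the function names, path-loss exponent, and calibration power are assumptions, not details from the dissertation:

    ```python
    import numpy as np

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
        """Estimate range (m) from RSSI via the log-distance path-loss model.
        tx_power_dbm is the calibrated RSSI at 1 m; both values are assumed."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

    def trilaterate(anchors, distances):
        """Least-squares 2-D position fix from >= 3 anchors and range estimates."""
        anchors = np.asarray(anchors, dtype=float)
        d = np.asarray(distances, dtype=float)
        x0, d0 = anchors[0], d[0]
        # Linearise by subtracting the first anchor's circle equation:
        # 2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
        A = 2 * (anchors[1:] - x0)
        b = (d0**2 - d[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos
    ```

    With anchors at (0, 0), (10, 0), and (0, 10) and ranges measured from a point at (3, 4), the fix recovers (3, 4); in practice the ranges are noisy and more than three anchors are used.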

    Investigating the effects of controlled language on the reading and comprehension of machine translated texts: A mixed-methods approach

    This study investigates whether the use of controlled language (CL) improves the readability and comprehension of technical support documentation produced by a statistical machine translation system. Readability is operationalised here as the extent to which a text can be easily read in terms of formal linguistic elements, while comprehensibility is defined as how easily a text’s content can be understood by the reader. A biphasic mixed-methods triangulation approach is taken, in which a number of quantitative and qualitative evaluation methods are combined. These include: eye tracking, automatic evaluation metrics (AEMs), retrospective interviews, human evaluations, memory recall testing, and readability indices. A further aim of the research is to investigate what, if any, correlations exist between the various metrics used, and to explore the cognitive framework of the evaluation process. The research finds that the use of CL input results in significantly higher scores for items recalled by participants, and for several of the eye tracking metrics: fixation count, fixation length, and regressions. However, the findings show slight, statistically non-significant increases for readability indices and human evaluations, and slight, non-significant decreases for AEMs. Several significant correlations between the above metrics are identified, as well as predictors of readability and comprehensibility.
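    Readability indices of the kind used in the study are typically computed from surface features of the text. Purely as an illustration (the study does not specify its formulas or code), here is a minimal sketch of the widely used Flesch Reading Ease index, with a crude vowel-group heuristic standing in for proper syllable counting:

    ```python
    import re

    def count_syllables(word):
        """Rough heuristic: count vowel groups, drop a silent final 'e',
        and guarantee at least one syllable per word."""
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def flesch_reading_ease(text):
        """Flesch Reading Ease: higher scores indicate easier text."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * len(words) / len(sentences)
                - 84.6 * syllables / len(words))
    ```

    Production readability tools use dictionary-based syllabification; the heuristic here only approximates it, which is why such indices are usually reported alongside human judgements, as in the study above.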

    Performance of brain-computer communication in Amyotrophic Lateral Sclerosis

    Amyotrophic Lateral Sclerosis (ALS) is a devastating condition which leads to the degeneration of motor neurons. It is a progressive disorder characterized by loss of mobility and verbal communication (Chaudhary et al., 2015). In 50% of patients, life expectancy is estimated at 3-5 years after the onset of first symptoms (Bensimon et al., 1994). However, if patients opt for artificial respiration and feeding, life expectancy can be relatively long with optimal care. Around 50% of patients present mild to severe cognitive impairment (Ringholz et al., 2005). Since the first attempts in the nineties (Birbaumer et al., 1999), brain-computer interface (BCI) systems have been successfully developed to secure communication with the social environment in the late stages of the disease. However, BCI systems in ALS do not reach 100% correct classification accuracy, and in some cases results are below chance level (McCane et al., 2014). The latter phenomenon is known as “BCI illiteracy”, while the former is generally ascribed to attentional issues, specific functional impairment, motivational factors and/or artefacts of the neurophysiological signals. Our view relies on a more complex picture where many factors account for sub-optimal results, especially in the completely locked-in state (CLIS) when lack of communication is crucial. We will explore the critical factors determining sub-optimal BCI performance, namely: (i) alteration of cognitive and/or emotional/behavioural states (Martens et al., 2014) such as vigilance/attention (Mak et al., 2012; De Massari et al., 2013), (ii) mild cognitive impairment (Volpato et al., 2016), (iii) the “extinction-of-goal-directed-thought” hypothesis (Kübler & Birbaumer, 2008), (iv) circadian rhythm and sleep disorders (Soekadar et al., 2013) and (v) visual sensory domain alterations (Murguialday et al., 2011).
    These complementary factors suggest the integration of theoretical background on learning principles (Skinner, 1953), advanced technology (Gallegos-Ayala et al., 2014), multiple neural signals recording (Chaudhary et al., 2017) and vigilance/attention monitoring (De Massari et al., 2013; Silvoni et al., 2016) to reliably solve the communication problem in advanced ALS stages.
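    As a minimal illustration of the chance-level issue raised above (not code from any of the cited studies), an exact one-sided binomial test can indicate whether an observed classification accuracy is credibly above chance. All numbers in the example are hypothetical:

    ```python
    from math import comb

    def binomial_p_above_chance(correct, trials, chance=0.5):
        """One-sided exact binomial test: P(X >= correct) when each trial
        succeeds with probability `chance`. A small p-value suggests the
        BCI classifier performs genuinely above chance."""
        return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                   for k in range(correct, trials + 1))
    ```

    For a binary selection task (chance = 0.5), 70 correct trials out of 100 yield a very small p-value, whereas 50 out of 100 is entirely compatible with guessing; this is one reason accuracies near or below chance, as in McCane et al. (2014), cannot be interpreted as communication.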

    Development of an augmented reality guided computer assisted orthopaedic surgery system

    Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system – ARgCAOS. After initial investigation, a visible-spectrum, single-camera, tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, and therefore allowed virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated, and, utilising the tracking information, controlled resection was performed, producing sub-millimetre accuracies. Several complications resulted from the tool-mounted approach. Therefore, a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user. The system allowed the augmentation of the natural view of the user, providing convincing and immersive three-dimensional augmented guidance, with probing and resection accuracies of 0.55±0.04 mm and 0.34±0.04 mm, respectively.
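    The thesis abstract does not publish its tracking mathematics, but tool-mounted marker tracking of this kind typically chains rigid-body transforms: the marker pose estimated in the camera frame is composed with a fixed marker-to-tool-tip calibration transform to locate the probe tip for overlay. A minimal sketch, with all names and values hypothetical:

    ```python
    import numpy as np

    def make_transform(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation R and
        a translation vector t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def point_in_camera_frame(T_cam_marker, T_marker_tip, p_tip=(0.0, 0.0, 0.0)):
        """Map a point expressed in the tool-tip frame into the camera frame
        by chaining the marker-pose and tool-calibration transforms."""
        p = np.append(np.asarray(p_tip, dtype=float), 1.0)  # homogeneous point
        return (T_cam_marker @ T_marker_tip @ p)[:3]
    ```

    With the tip position known in the camera frame, the virtual model can be rendered at that pose over the camera image; probing accuracy then reflects the combined error of pose estimation and tool calibration.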

    Design of portable LED visual stimulus and SSVEP analysis for visual fatigue reduction and improved accuracy

    Brain-computer interface (BCI) applications have emerged as an innovative communication channel between computers and the human brain, as they circumvent the peripheral limbs and create a direct interface between brain activity and the external world. This research focuses on non-invasive BCI to improve the design of visual stimuli for eliciting the steady-state visual evoked potential (SSVEP) in BCI applications. To evoke SSVEP in the brain, the user needs to focus on a visual stimulus flickering at a constant frequency. Traditionally in research studies, the visual stimulus for SSVEP uses an LCD screen where the flicker is generated with alternating black and white patterns. However, LCD-based visual stimulus systems have drawbacks that limit user acceptance of SSVEP applications. The main limitations are: (i) the choice of flicker frequency is limited by the LCD's vertical refresh rate, (ii) flickers are mainly limited to black/white patterns, (iii) higher visual fatigue for the user due to the LCD's background flicker, (iv) reduced visual stimulus portability, (v) inaccurate flickers generated and controlled by software, and (vi) the influence of adjacent flickers causing attention shift when multiple flickers are used for classification, along with limited adaptability to user requirements.
    The impediments in eliciting and utilising SSVEP responses for designing a near real-time platform for controlling external applications are addressed from five main perspectives here: (i) design of standalone LED visual stimulus hardware for precise generation of any frequency, replacing the LCD-based visual stimulus, (ii) eliciting maximal response by choosing the most responsive colour, orientation and shape of the visual stimulus, (iii) identification of the best luminance level of the visual stimulus to improve user comfort and the SSVEP response, (iv) control of the duration of the ON/OFF period of the visual stimulus to reduce eyestrain (i.e. visual fatigue), and (v) a hybrid BCI paradigm using SSVEP and P300 to improve the classification accuracy for controlling external applications. The experimental study involved the development of various visual stimulus designs based on LEDs and microcontrollers to minimise visual fatigue and improve SSVEP responses. The signal analysis results from studies with five to ten participants show that SSVEP elicitation is influenced by the colour, orientation and shape of the stimulus, its luminance level, and the duration of its ON/OFF period. The participants also commented that choosing the correct luminance and ON/OFF periods of the stimulus considerably reduced eyestrain, improved attention levels and reduced visual fatigue. Taken together, these findings lead to greater user acceptance of SSVEP-based BCI as an assistive mechanism for controlling external applications with improved comfort, portability and reduced visual fatigue.
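    The thesis abstract does not include its analysis code. As an illustrative sketch only, SSVEP responses to a set of candidate flicker frequencies are often compared via spectral power at each frequency and its harmonics (canonical correlation analysis is another common choice); all parameter values below are assumptions:

    ```python
    import numpy as np

    def detect_ssvep_frequency(signal, fs, candidate_freqs, harmonics=2):
        """Pick the candidate flicker frequency whose summed spectral power
        (fundamental plus harmonics) is largest in the EEG segment."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        scores = []
        for f in candidate_freqs:
            score = 0.0
            for h in range(1, harmonics + 1):
                idx = np.argmin(np.abs(freqs - h * f))  # nearest FFT bin
                score += spectrum[idx]
            scores.append(score)
        return candidate_freqs[int(np.argmax(scores))]
    ```

    For a 2 s segment sampled at 250 Hz, the frequency resolution is 0.5 Hz, so flicker frequencies chosen on that grid land exactly on FFT bins; this is one motivation for the precise hardware-generated frequencies described above, since software-timed LCD flicker drifts off-bin and smears the response.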

    Engineering data compendium. Human perception and performance, volume 3

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 3, containing sections on Human Language Processing, Operator Motion Control, Effects of Environmental Stressors, Display Interfaces, and Control Interfaces (Real/Virtual).

    Life Sciences Program Tasks and Bibliography for FY 1996

    This document includes information on all peer-reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division, during fiscal year 1996. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.