40 research outputs found

    Academic Achievements (April 2015~March 2016)


    Development of a model based on virtual reality for the evaluation of behavioral compliance with warnings and wayfinding contexts

    Virtual Reality (VR), when framed within adequate methodologies, has a broad field of application in Ergonomics and Design, since it makes it possible to analyze and understand how people interact with simulated situations in Virtual Environments (VEs). It is therefore of great importance for both research and practice in Ergonomics to understand how to optimize, create, implement, and evaluate VE-based solutions in different contexts, including dangerous ones, in particular those that can put people's physical integrity at risk. Such VEs can be used to study human behavior in critical situations, which is important when designing products and systems that pose dangers to users and would otherwise be difficult to study. In this context, the general objective of this project is to study the factors that influence the development of VEs for VR and the implementation of solutions (with a focus on software and hardware) best suited to this type of study, namely studies of behavioral compliance with warnings and studies of wayfinding. The methodological proposal described in this document follows a User-Centered Design (UCD) perspective, which involved the participation of users in the different phases of the project. As a result, software and hardware solutions were developed and evaluated for understanding and assessing the factors associated with the study of human behavior, namely in behavioral compliance with warnings and in wayfinding contexts. The best solutions for interaction and navigation in VEs were also studied, so as to achieve high levels of presence, a fundamental aspect in behavioral compliance with warnings and wayfinding studies that use VR as a support tool. With this purpose, two navigational interfaces were developed (i.e., Balance Board and Walk-in-Place), also from a UCD perspective, to guarantee a continuous cycle of testing and refinement of the implementations with users. A comparative study was conducted between these two navigational interfaces and another that is commonly used in VR studies (i.e., a Joystick). This comparative study took place in a context of evaluating behavioral compliance with warnings, and performance variables were analyzed, as well as the levels of presence obtained with the different navigational interfaces. There were no statistically significant differences in the levels of presence or in behavioral compliance between the three navigational interfaces. However, statistically significant differences were found in several performance variables (e.g., average speed, total distance). Future directions for the research are also discussed.
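As a rough illustration of the kind of performance variables compared across the Joystick, Balance Board, and Walk-in-Place interfaces (total distance and average speed), the following minimal sketch assumes logged position samples from a VE session; the function name, data layout, and sampling setup are illustrative, not taken from the thesis.

```python
# Hypothetical sketch: deriving total distance and average speed from logged
# avatar positions in a Virtual Environment. Names and sampling are illustrative.
import numpy as np

def performance_metrics(positions: np.ndarray, timestamps: np.ndarray) -> dict:
    """positions: (N, 3) array of x, y, z samples; timestamps: (N,) seconds."""
    steps = np.diff(positions, axis=0)              # displacement between samples
    step_lengths = np.linalg.norm(steps, axis=1)    # metres per sample interval
    total_distance = float(step_lengths.sum())
    elapsed = float(timestamps[-1] - timestamps[0])
    average_speed = total_distance / elapsed if elapsed > 0 else 0.0
    return {"total_distance_m": total_distance, "average_speed_mps": average_speed}

# Example: a short straight walk sampled at 10 Hz.
t = np.linspace(0.0, 5.0, 51)
path = np.stack([t * 1.2, np.zeros_like(t), np.zeros_like(t)], axis=1)
print(performance_metrics(path, t))  # ~6 m travelled at ~1.2 m/s
```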

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. Explainable Shared Control involves two perspectives on transparency: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
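To make the intention-inference idea concrete, the sketch below embeds short windows of user control signals and clusters the resulting latent codes so that each cluster can be read as a candidate "intent". The encoder is a stand-in (a random projection), not the deep generative model used in the thesis, and the data layout is assumed.

```python
# Minimal sketch of clustering-based intent inference. The encoder here is a
# placeholder; in the thesis a deep generative model provides the latent codes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def encode(windows: np.ndarray) -> np.ndarray:
    """Placeholder encoder: project (N, window_len * channels) inputs to 8-D latents."""
    projection = rng.normal(size=(windows.shape[1], 8))
    return windows @ projection

# Assumed data layout: 200 windows of 2-channel joystick input, 25 samples each.
windows = rng.normal(size=(200, 25 * 2))
latents = encode(windows)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(latents)
print(np.bincount(clusters))  # how many windows fall under each inferred intent
```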

    Driving Manoeuvre Recognition using Mobile Sensors

    Automobiles are integral in today's society as they are used for transportation, commerce, and public services. The ubiquity of automotive transportation creates a demand for active safety technologies for the consumer. Recently, the widespread use and improved sensing and computing capabilities of mobile platforms have enabled the development of systems that can measure, detect, and analyze driver behaviour. Most systems performing driver behaviour analysis depend on recognizing driver manoeuvres. Improved accuracy in manoeuvre detection has the potential to improve driving safety, through applications such as monitoring for insurance, the detection of aggressive, distracted or fatigued driving, and new driver training. This thesis develops algorithms for estimating vehicle kinematics and recognizing driver manoeuvres using a smartphone device. A kinematic model of the car is first introduced to express the vehicle's position and orientation. An Extended Kalman Filter (EKF) is developed to estimate the vehicle's positions, velocities, and accelerations using mobile measurements from inertial measurement units and the Global Positioning System (GPS). The approach is tested in simulation and validated on trip data using an On-board Diagnostic (OBD) device as the ground truth. The 2D state estimator is demonstrated to be an effective filter for measurement noise. Manoeuvre recognition is then formulated as a time-series classification problem. To account for an arbitrary orientation of the mobile device with respect to the vehicle, a novel method is proposed to estimate the phone's rotation matrix relative to the car using PCA on the gyroscope signal. Experimental results demonstrate that each Principal Component (PC) corresponds to an axis of the vehicle reference frame, so that the PCA projection matrix can be used to align the mobile device measurement data to the vehicle frame. A major impediment to classifier-based manoeuvre recognition is the need for training data, specifically collecting enough data and generating an accurate ground truth. To address this problem, a novel training process is proposed to train the classifier using only simulation data. Training on simulation data bypasses these two issues, as data can be cheaply generated and the ground truth is known. In this thesis, a driving simulator is developed using a Markov Decision Process (MDP) to generate simulated data for classifier training. Following training data generation, feature selection is performed using simple features such as velocity and angular velocity. A manoeuvre segmentation classifier is trained using multi-class SVMs. Validation was performed using data collected from driving sessions. A grid search was employed for parameter tuning. The classifier was found to have a 0.8158 average precision rate and a 0.8279 average recall rate across all manoeuvres, resulting in an average F1 score of 0.8194 on the dataset.
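The sketch below illustrates the PCA-based alignment idea described in this abstract: principal components of the gyroscope signal are used to express phone measurements in a frame aligned with the vehicle's dominant rotation axes. The toy data, variable names, and noise levels are illustrative only and not taken from the thesis.

```python
# Hedged sketch of phone-to-vehicle alignment via PCA on gyroscope samples.
import numpy as np

def pca_alignment(gyro: np.ndarray) -> np.ndarray:
    """gyro: (N, 3) angular-rate samples in the phone frame. Returns a 3x3 projection."""
    centred = gyro - gyro.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt  # rows are principal directions, ordered by explained variance

rng = np.random.default_rng(1)
# Toy signal: yaw-dominated rotation expressed in an arbitrarily tilted phone frame.
yaw_rate = np.sin(np.linspace(0, 20, 500))[:, None] * np.array([0.0, 0.0, 1.0])
tilt, _ = np.linalg.qr(rng.normal(size=(3, 3)))           # random phone orientation
phone_gyro = yaw_rate @ tilt.T + 0.01 * rng.normal(size=(500, 3))

R = pca_alignment(phone_gyro)
aligned_gyro = phone_gyro @ R.T   # first component now tracks the yaw axis
print(np.round(np.std(aligned_gyro, axis=0), 3))  # variance concentrated in axis 0
```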

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation, and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms such as Deep Reinforcement Learning (DRL) for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
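To illustrate how a navigation subtask can be framed as a reinforcement learning problem, the following toy sketch uses tabular Q-learning on an invented 1-D "advance along a lumen" task. It stands in for the deep reinforcement learning used in the thesis; states, actions, and rewards are all assumptions made for the example.

```python
# Minimal illustration of subtask automation as RL: learn to advance to a target
# position. Tabular Q-learning stands in for DRL; the task is entirely invented.
import numpy as np

N_STATES, ACTIONS = 10, (-1, +1)           # positions along a lumen; retract/advance
GOAL = N_STATES - 1
q = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = rng.integers(len(ACTIONS)) if rng.random() < 0.1 else int(q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        reward = 1.0 if s_next == GOAL else -0.01      # small step cost, goal bonus
        q[s, a] += 0.1 * (reward + 0.95 * q[s_next].max() - q[s, a])
        s = s_next

print(q.argmax(axis=1))  # learnt policy: 1 (advance) for every state before the goal
```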

    Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations

    Understanding the processes of cellular development and the interplay of cell shape changes, division and migration requires investigation of developmental processes at the spatial resolution of a single cell. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as available photon budget, sample sensitivity, etc. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or due to imperfections in the camera sensor and internal electronics. Both the noise and the artefacts hinder accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal and segmentation, supervised Deep Learning (DL) based content-aware algorithms are currently the best performing for all these tasks. Supervised DL based methods are, however, plagued by many practical limitations. Supervised denoising and artefact removal algorithms require paired corrupted and high-quality images for training. Obtaining such image pairs can be very hard and virtually impossible in most biomedical imaging applications owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation. The first part of this thesis deals with unsupervised image denoising and artefact removal. For the unsupervised image denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications. In the second part of this thesis, the problem of cell/nucleus segmentation is addressed. The focus is especially on practical scenarios where ground truth annotations for training DL based segmentation methods are scarcely available. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations. Several training strategies are presented in this work to leverage the representations learned by unsupervised denoising networks to enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof-of-concept is introduced which views cell/nucleus segmentation from the perspective of solving a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus using only a pool of diverse (and possibly faulty) segmentation hypotheses as input.
In summary, this thesis seeks to introduce new unsupervised denoising and artefact removal methods as well as semi-supervised segmentation methods which can be easily deployed to directly and immediately benefit biomedical practitioners in their research.
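The sketch below illustrates the parametric-noise-model idea mentioned in this abstract: with a known camera noise model, the likelihood of a noisy observation given a candidate clean signal can be scored, which is the kind of quantity an unsupervised denoiser can be trained against. The simple Poisson-Gaussian approximation, its parameters, and the toy data are assumptions for illustration only.

```python
# Hedged sketch: scoring a candidate clean image under a parametric noise model.
import numpy as np
from scipy.stats import norm

def noisy_log_likelihood(observed, clean, gain=2.0, read_noise=1.5):
    """Approximate Poisson-Gaussian model: variance grows linearly with the signal."""
    variance = gain * np.maximum(clean, 1e-6) + read_noise ** 2
    return norm.logpdf(observed, loc=clean, scale=np.sqrt(variance)).sum()

rng = np.random.default_rng(0)
clean = rng.uniform(10, 100, size=(32, 32))                          # toy "true" image
observed = clean + rng.normal(scale=np.sqrt(2.0 * clean + 1.5 ** 2))  # simulated noise

# A candidate equal to the truth should score higher than a flat, blurred-out guess.
print(noisy_log_likelihood(observed, clean) >
      noisy_log_likelihood(observed, np.full_like(clean, clean.mean())))  # True
```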

    Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios

    This thesis studies eye-based user interfaces which integrate information about the user’s perceptual focus of attention into multimodal systems to enrich the interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered across the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent environment applications. These use cases cover a wide spectrum of applications, from spatial interactions with a rapidly changing environment from within a moving vehicle, to mixed-reality interaction between teams of humans and robots.
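As a rough geometric illustration of distinguishing foveal from peripheral vision around a gaze direction (related to, but not the same as, the 3D peripheral-vision model mentioned above), the following sketch classifies a target by the angle between the gaze vector and the direction to the target; the thresholds and function name are invented for the example.

```python
# Illustrative sketch (not the thesis' model): foveal vs peripheral classification
# by angular offset from the gaze direction. Thresholds are assumed values.
import numpy as np

def visual_region(gaze_dir, eye_pos, target_pos, foveal_deg=5.0, peripheral_deg=100.0):
    to_target = np.asarray(target_pos, float) - np.asarray(eye_pos, float)
    gaze = np.asarray(gaze_dir, float)
    cos_angle = np.dot(gaze, to_target) / (np.linalg.norm(gaze) * np.linalg.norm(to_target))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if angle <= foveal_deg:
        return "foveal"
    return "peripheral" if angle <= peripheral_deg else "outside"

print(visual_region(gaze_dir=[0, 0, 1], eye_pos=[0, 0, 0], target_pos=[0.3, 0, 5]))  # foveal
print(visual_region(gaze_dir=[0, 0, 1], eye_pos=[0, 0, 0], target_pos=[4, 0, 2]))    # peripheral
```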