Conceitos e métodos para apoio ao desenvolvimento e avaliação de colaboração remota utilizando realidade aumentada (Concepts and methods to support the development and evaluation of remote collaboration using augmented reality)
Remote Collaboration using Augmented Reality (AR) shows great
potential to establish common ground in physically distributed
scenarios where team members need to achieve a shared goal.
However, most research efforts in this field have been devoted to
experimenting with the enabling technology and proposing methods to
support its development. As the field evolves, evaluation and
characterization of the collaborative process become an essential
but difficult endeavor for better understanding the contributions of AR.
In this thesis, we conducted a critical analysis to identify the main
limitations and opportunities of the field, while situating its maturity
and proposing a roadmap of important research actions. Next, a
human-centered design methodology was adopted, involving
industrial partners to probe how AR could support their needs
during remote maintenance. These outcomes were combined with
methods from the literature into an AR prototype, which was evaluated
through a user study. This made clear the need for deeper reflection
in order to better understand the dimensions that influence, and must
be considered in, Collaborative AR. Hence, a conceptual model and a
human-centered taxonomy were proposed to foster the systematization of
perspectives. Based on the proposed model, an evaluation
framework for contextualized data gathering and analysis was
developed, supporting the design and performance of
distributed evaluations in a more informed and complete manner.
To instantiate this vision, the CAPTURE toolkit was created,
providing an additional perspective based on selected dimensions
of collaboration and pre-defined measurements to obtain “in situ”
data about them, which can be analyzed using an integrated
visualization dashboard. The toolkit successfully supported
evaluations of several team members during remote maintenance
tasks mediated by AR, showing its versatility and
potential in eliciting a comprehensive characterization of the added
value of AR in real-life situations and establishing itself as a
general-purpose solution, potentially applicable to a wider range of
collaborative scenarios.

Programa Doutoral em Engenharia Informática
An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System
To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of any given work. While previous research advances each of the respective fields, the end result is a variable grab bag of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or can realistically be applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device that does or does not have a well-defined failure threshold.
To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and a sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector: 80% vs. 85% for the NFIS and AAKR monitors, respectively. The NFIS monitor also produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser significantly outperformed the kNN diagnoser, with overall accuracies of 96% vs. 89%, respectively. Finally, the PACE model implementing the NFIS was used to predict the RUL for different failure modes. The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2 to 11.4 hours, with 95% confidence intervals (CI) from 0.67 to 32.02 hours, significantly better than the population-based prognoser estimates, with errors of ~45 hours and 95% CIs of ~162 hours.
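The SPRT detector used in the monitoring stage accumulates a log-likelihood ratio over prediction residuals and declares a fault or normal decision when it crosses a threshold. The following is a minimal sketch for Gaussian residuals, with illustrative parameter values (the thesis's actual means, variance, and error rates are not given here):

```python
import math

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on prediction residuals.

    H0: residual mean = mu0 (normal operation)
    H1: residual mean = mu1 (degraded operation)
    Returns one decision per sample: "fault", "normal", or "continue".
    """
    A = math.log((1 - beta) / alpha)   # upper (fault) threshold
    B = math.log(beta / (1 - alpha))   # lower (normal) threshold
    llr = 0.0
    decisions = []
    for r in residuals:
        # Log-likelihood ratio increment for Gaussian residuals.
        llr += (mu1 - mu0) * (r - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= A:
            decisions.append("fault")
            llr = 0.0                  # reset after a decision
        elif llr <= B:
            decisions.append("normal")
            llr = 0.0
        else:
            decisions.append("continue")
    return decisions
```

In a monitoring system, the residuals would come from the difference between the NFIS (or AAKR) model's predicted signal and the observed sensor signal.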
Deictic Teleassistance
We present a simple sign language for teleassistance inspired by the work of the physiologist Nicolai Bernstein and by recent psychophysical evidence on hand-eye coordination. In our approach, a teleoperator uses hand signs to guide an otherwise autonomous robot manipulator through a given task. Each sign signals a context switch and provides a hand-centered reference frame for the robot's servomotor routines. The signs are natural, such as pointing to an object to indicate the desire to reach toward it as well as the axis along which to reach. Following the lead of [Agre & Chapman 1987], we term these signs deictic, from the Greek word for pointing, to stress their indicative and relative nature. The task example is opening a door using a Utah/MIT hand mounted on a Puma 760 arm. The teleoperator wears an EXOS hand master and a Polhemus sensor. Three variations of nearest neighbor pattern classification are tested for online recognition of the sign language. The simplest, in which the oper..
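The online sign recognition tested above can be illustrated with a minimal 1-nearest-neighbor sketch over glove joint-angle vectors. The prototype vectors and sign names below are invented for illustration; the paper tests three variations of the classifier:

```python
import math

# Hypothetical prototype signs: each maps a sign name to a
# representative joint-angle vector from the glove (radians).
PROTOTYPES = {
    "reach": [0.1, 0.2, 0.1, 0.0],
    "preshape": [0.9, 1.1, 1.0, 0.8],
    "stop": [1.5, 1.5, 1.4, 1.5],
}

def classify_sign(angles):
    """1-nearest-neighbor classification of one glove reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda name: dist(angles, PROTOTYPES[name]))
```

Each incoming glove sample is assigned the label of the closest stored prototype, which keeps recognition cheap enough to run online.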
Teleassistance: Using Deictic Gestures to Control Robot Action
Thesis (Ph.D.), University of Rochester, Dept. of Computer Science, 1995. Simultaneously published in the Technical Report series.

This thesis presents a bottom-up approach to understanding and extending robotic motor control by integrating human guidance. The focus is on dexterous manipulation using a Utah/MIT robot hand, but the ideas apply to other robotic platforms as well. Teleassistance is a novel method of human/robot interaction in which the human operator uses a gestural sign language to guide an otherwise autonomous robot through a given task. The operator wears a glove that measures finger joint angles to relay the sign language. Each sign serves to orient the robot within the task action sequence by indicating the next perceptual sub-goal and a relative spatial basis. Teleassistance merges robotic servo loops with human cognition to alleviate the limitations of either full robot autonomy or full human control alone. The operator's gestures are deictic, from the Greek deiktikos, meaning pointing or showing, because they circumscribe the possible interpretations of perceptual feedback to the current context and thereby allow the autonomous routines to perform with computational economy and without dependence on a detailed task model. Conversely, the use of symbolic gestures permits the operator to guide the robot strategically without many of the problems inherent to literal master/slave teleoperation, including non-anthropomorphic mappings, poor feedback, and reliance on a tight communication loop. The development of teleassistance stems from an analysis of autonomous control in light of recent advances in manipulator technology. This work also presents a qualitative, context-sensitive control strategy that exploits the many degrees of freedom and compliance of dexterous manipulators. The qualitative strategy governs the underlying autonomous routines in teleassistance.
Teleassistance: Contextual guidance for autonomous manipulation
We present teleassistance, a two-tiered control structure for robotic manipulation that combines the advantages of autonomy and teleoperation. At the top level, a teleoperator provides global, deictic references via a natural sign language. Each sign indicates the next action to perform and a relative and hand-centered coordinate frame in which to perform it. For example, the teleoperator may point to an object for reaching, or preshape the hand for grasping. At the lower level, autonomous servo routines run within the reference frames provided. Teleassistance offers two benefits. First, the servo routines can position the robot in relative coordinates and interpret feedback within a constrained context. This significantly simplifies the computational load of the autonomous routines and requires only a sparse model of the task. Second, the operator's actions are symbolic, conveying intent without requiring the person to literally control the robot. This helps to alleviate many of the problems inherent to teleoperation, including poor mappings between operator and robot physiology, reliance on a broad communication bandwidth, and the potential for robot damage when solely under remote control. To demonstrate the concept, a Utah/MIT hand mounted on a Puma 760 arm opens a door.
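The two-tiered structure described above can be sketched as a dispatch loop: each deictic sign selects an autonomous servo routine and supplies the hand-centered frame in which it runs. The sign and routine names below are hypothetical placeholders; the actual system runs servo loops on a Utah/MIT hand and Puma 760 arm:

```python
# Minimal sketch of the two-tiered teleassistance loop (assumed
# structure for illustration, not the original implementation).

def servo_reach(frame):
    # Autonomous routine: move along the axis given by the frame.
    return f"reaching along {frame}"

def servo_grasp(frame):
    # Autonomous routine: close fingers within the given frame.
    return f"closing fingers in {frame}"

ROUTINES = {"point": servo_reach, "preshape": servo_grasp}

def teleassist(signs):
    """Top level: interpret each operator sign as a context switch."""
    log = []
    for sign, frame in signs:
        routine = ROUTINES.get(sign)
        if routine is None:
            log.append(f"unknown sign: {sign}")
            continue
        log.append(routine(frame))   # run servo loop within the frame
    return log
```

The operator thus conveys only intent and context; the low-level control, with its tight feedback loops, stays on the robot side.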