Ubiquitous computing and natural interfaces for environmental information
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Mestre in Engenharia do Ambiente, profile Gestão e Sistemas Ambientais.
The objective of the next computing revolution is to embed computational power in every street, building, room and object. Ubiquitous computing (ubicomp) will allow every object to receive and transmit information, sense its surroundings and act accordingly, be located from anywhere in the world, and connect every person. Everyone will be able to access information, regardless of age, computer knowledge, literacy or physical impairment. Ubicomp will affect the world profoundly, empowering mankind and improving the environment, but it will also create new challenges that our society, economy, health and global environment will have to overcome. Negative impacts have to be identified and dealt with in advance. Despite these concerns, environmental studies have been largely absent from discussions of the new paradigm.
This thesis examines ubiquitous computing and its technological emergence, raises awareness of future impacts, and explores the design of new interfaces and rich interaction modes. Environmental information is approached as an area that may benefit greatly from ubicomp as a way to gather, process and disseminate it, while simultaneously complying with the Aarhus Convention. In an educational context, new media are poised to revolutionize the way we perceive, learn and interact with environmental information. cUbiq is presented as a natural interface to access that information.
Mobile and Low-cost Hardware Integration in Neurosurgical Image-Guidance
It is estimated that 13.8 million patients per year require neurosurgical interventions worldwide, whether for cerebrovascular disease, stroke, tumour resection, or epilepsy treatment, among others. These procedures involve navigating through and around complex anatomy in an organ where damage to eloquent healthy tissue must be minimized. Neurosurgery thus has very specific constraints compared to most other domains of surgical care, which have made it particularly suitable for integrating new technologies. Any new method with the potential to improve surgical outcomes is worth pursuing, as it may not only save and prolong patients' lives but also increase their quality of life post-treatment. In this thesis, novel neurosurgical image-guidance methods are developed using currently available, low-cost off-the-shelf components. In particular, a mobile device (e.g. a smartphone or tablet) is integrated into a neuronavigation framework to explore new augmented reality visualization paradigms and novel intuitive interaction methods. The developed tools use augmented reality to make image-guidance more intuitive and easier to use. Further, we use gestures on the mobile device to increase interactivity with the neuronavigation system, providing solutions to the accuracy loss or brain shift that occurs during surgery. Lastly, we explore the effectiveness and accuracy of low-cost hardware components (i.e. tracking systems and ultrasound) that could replace the high-cost hardware currently integrated into commercial image-guided neurosurgery systems. The results of our work show the feasibility of using mobile devices to improve neurosurgical processes. Augmented reality enables surgeons to focus on the surgical field while receiving intuitive guidance information.
Mobile devices also allow for easy interaction with the neuronavigation system, enabling surgeons to interact directly with systems in the operating room to improve accuracy and streamline procedures. Lastly, our results show that low-cost components can be integrated into a neurosurgical guidance system at a fraction of the cost, with a negligible impact on accuracy. The developed methods have the potential to improve surgical workflows, as well as to democratize access to higher-quality care worldwide.
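The augmented-reality overlay described above ultimately rests on standard camera projection: a tracked 3D point must be mapped to screen pixels before being drawn over the video feed. A minimal sketch of that mapping using the pinhole camera model follows; the intrinsics and pose here are illustrative values, not parameters from the thesis.

```python
import numpy as np

def project_point(p_world, K, R, t):
    """Project a 3D point (world frame) onto the image plane of a
    calibrated camera using the pinhole model: x ~ K (R p + t)."""
    p_cam = R @ p_world + t          # world frame -> camera frame
    u, v, w = K @ p_cam              # perspective projection
    return np.array([u / w, v / w])  # pixel coordinates

# Illustrative intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# A point 2 m ahead on the optical axis lands at the principal point:
px = project_point(np.array([0.0, 0.0, 2.0]), K, R, t)  # -> [320. 240.]
```

In a real AR pipeline, R and t would come from the tracking system each frame, and K from a one-time camera calibration.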
New generation of interactive platforms based on novel printed smart materials
Doctoral programme in Electronic and Computer Engineering (Instrumentation and Electronic Microsystems area).
The last decade was marked by a change in the computing paradigm, with other digital devices, such as tablets and smartphones, suddenly becoming available to the general public. A shift in perspective from the computer to materials as the centrepiece of digital interaction is leading to a diversification of interaction contexts, objects and applications, using intuitive commands and dynamic content that can provide more interesting and satisfying experiences.
In parallel, polymer-based sensors and actuators, and their integration into different substrates and devices, form an area of increasing scientific and technological interest, whose current state of the art is beginning to permit smart sensors and actuators to be seamlessly embodied within objects. Electronics is no longer a rigid board full of chips; new technological advances have turned it into electronics printed on polymers, textiles or paper. We are witnessing the scaling down of computational power into everyday objects, a fusion of the computer with the material. Interactivity is being transposed to objects that were once inanimate.
In this work, strain and deformation sensors and actuators were developed using functional polymer composites with metallic and carbonaceous nanoparticle (NP) inks, exploiting capacitive, piezoresistive and piezoelectric effects, with a view to creating tangible user interfaces (TUIs). Based on smart polymer substrates such as polyvinylidene fluoride (PVDF) and polyethylene terephthalate (PET), among others, prototypes were prepared using piezoelectric and dielectric technologies. Piezoresistive prototypes were prepared with resistive inks and resistive functional polymers. Materials were printed by screen printing, inkjet printing and doctor blade coating. Finally, a case study integrating the different materials and technologies developed is presented in a book form factor.
This project was supported by FCT – Fundação para a Ciência e a Tecnologia, within the doctorate grant with reference SFRH/BD/110622/2015, by POCH – Programa Operacional Capital Humano, and by EU – European Union.
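The piezoresistive prototypes mentioned above rely on a resistance that varies with mechanical deformation. As a generic illustration, not the thesis's actual material calibration, the standard strain-gauge relation dR/R = GF * strain can be inverted to recover strain from a resistance reading:

```python
def strain_from_resistance(r_measured, r_nominal, gauge_factor):
    """Invert the first-order piezoresistive relation dR/R = GF * strain.

    r_measured   -- resistance under load (ohms)
    r_nominal    -- unstrained resistance (ohms)
    gauge_factor -- GF of the printed element (dimensionless)
    """
    return (r_measured - r_nominal) / (r_nominal * gauge_factor)

# A hypothetical 1 kOhm printed trace with GF = 2.0 reading 1004 Ohm
# corresponds to 0.2% strain:
eps = strain_from_resistance(1004.0, 1000.0, 2.0)  # -> 0.002
```

Real printed composites are nonlinear and temperature-sensitive, so a deployed TUI would replace this linear model with a measured calibration curve.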
Capacitive Sensing and Communication for Ubiquitous Interaction and Environmental Perception
During the last decade, the functionalities of electronic devices within the living environment constantly increased. Besides the personal computer, tablet PCs, smart household appliances, and smartwatches have now enriched the technology landscape. The trend towards an ever-growing number of computing systems has resulted in many highly heterogeneous human-machine interfaces. Users are forced to adapt to technology instead of having the technology adapt to them. Gathering context information about the user is a key factor for improving the interaction experience. Emerging wearable devices show the benefits of sophisticated sensors which make interaction more efficient, natural, and enjoyable. However, many technologies still lack these desirable properties, motivating me to work towards new ways of sensing a user's actions and thus enriching the context. In my dissertation I follow a human-centric approach which ranges from sensing hand movements to recognizing whole-body interactions with objects.
This goal can be approached with a vast variety of novel and existing sensing approaches. I focused on perceiving the environment with quasi-electrostatic fields by making use of capacitive coupling between devices and objects. Following this approach, it is possible to implement interfaces that recognize gestures, body movements and manipulations of the environment at typical distances of up to 50 cm. These sensors usually have limited resolution and can be sensitive to other conductive objects or electrical devices that affect the electric fields. Nonetheless, the technique allows for designing very energy-efficient and high-speed sensors that can be deployed unobtrusively underneath any kind of non-conductive surface. Compared to other sensing techniques, exploiting capacitive coupling also has a low impact on a user's perceived privacy.
In this work, I also aim to enhance the interaction experience with new perceptual capabilities based on capacitive coupling. I follow a bottom-up methodology and begin by presenting two low-level approaches to environmental perception. In order to perceive a user in detail, I present a rapid prototyping toolkit for capacitive proximity sensing, which shows significant advancements in temporal and spatial resolution. Due to some limitations, namely the inability to determine the identity and fine-grained manipulations of objects, I contribute a generic method for communication based on capacitive coupling. The method allows for designing highly interactive systems that can exchange information through the air and the human body. I furthermore show how human body parts can be recognized from capacitive proximity sensors; the method extracts multiple object parameters and tracks body parts in real time. I conclude my thesis with contributions in the domain of context-aware devices and explicit gesture-recognition systems.
Haptics: Science, Technology, Applications
This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.
Development of operator interfaces for a heavy maintenance manipulator
This dissertation details the development of an intuitive operator interface for a complex serial manipulator to be used in heavy maintenance tasks. The interface allows the operator to control the manipulator in task-space, with software handling the conversion to joint-space. Testing of the interfaces shows operator task-space control to be most effective in reducing operator workload and improving the ease of use of a complex machine. These methods are applicable, in concept, to a wider range of manipulators and other machines.
A number of operator interfaces were developed: a Joystick Interface, a Master Arm Interface and a 6-D Mouse Interface. The Joystick Interface made use of a task-space to joint-space transformation implemented in software. The Master Arm utilised a scale model to conduct the transformation. Finally, the 6-D Mouse Interface utilised sensors in an Android device with a software-based task-space to joint-space transformation. These interfaces were tested, and the Joystick Interface proved most suitable according to the operators' subjective opinions. Quantitative measurement also showed that it accurately reproduced the operator's commands.
The software transformation developed for the Joystick and 6-D Mouse interfaces utilised the Jacobian matrix to complete the task-space to joint-space conversion. However, since the manipulator contained a redundant joint, an additional algorithm was required to handle the redundancy. This algorithm also improved manipulator safety, as it steered the arm away from singularities that could result in large joint movements. Its novelty lies in its pragmatic approach, and it could be modified to achieve a number of safety or performance goals.
The control strategy centred on the operator specifying commands to the arm in the frame of the task. The developed algorithm enabled this strategy by ensuring that viable joint-velocity solutions could be found for a manipulator with redundant joints. Furthermore, the algorithm utilised a cost function that minimised the chance of large joint movements due to singularities, improving the safety of the device.
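A common textbook realisation of this kind of scheme, and only a generic sketch, not the dissertation's exact algorithm, maps task-space velocity through the Jacobian pseudoinverse and uses the null space of a redundant joint to optimise a secondary quadratic cost without disturbing the commanded motion:

```python
import numpy as np

def redundant_ik_velocity(J, x_dot, q, q_center, k=0.5):
    """Task-space to joint-space velocity mapping for a redundant arm:

        q_dot = J+ x_dot + (I - J+ J) q_dot_null

    The null-space term pushes joints toward q_center (e.g. a mid-range
    posture) without affecting the task, one pragmatic way to steer the
    arm away from problematic configurations.
    """
    J_pinv = np.linalg.pinv(J)
    q_dot_task = J_pinv @ x_dot                  # minimum-norm task solution
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # projector onto null(J)
    q_dot_null = -k * (q - q_center)             # gradient of quadratic cost
    return q_dot_task + null_proj @ q_dot_null

# Toy 3-joint arm with a 2D task; the third joint is the redundancy.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
q_dot = redundant_ik_velocity(J, np.array([0.1, 0.0]),
                              q=np.array([0.9, -0.2, 0.4]),
                              q_center=np.zeros(3))
# J @ q_dot still equals the commanded [0.1, 0.0]: the null-space
# motion is invisible in task space.
```

Near a singularity the pseudoinverse itself becomes ill-conditioned, which is why practical implementations add damping or, as in this work, an explicit avoidance cost.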
Overall, the project has delivered a viable operator interface for controlling a complex, redundant manipulator. This interface was tested against a number of alternative operator interfaces. Contrasting the strengths and weaknesses of the various interfaces yielded a number of key insights, and a pragmatic approach to redundancy management was developed.
Multimodal interactions in virtual environments using eye tracking and gesture control.
Multimodal interactions provide users with more natural ways to interact with virtual environments than traditional input methods. An emerging approach is gaze-modulated pointing, which enables users to select and manipulate virtual content conveniently through a combination of gaze and other hand control techniques or pointing devices; in this thesis, mid-air gestures. To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but the question remains whether the leading relationship is similar when interacting with a pointing device. Moreover, as gaze-modulated pointing uses different sensors to track and detect user behaviours, its performance relies on users' perception of the exact spatial mapping between the virtual space and the physical space. This raises an underexplored issue: whether gaze can introduce misalignment of the spatial mapping and lead to user misperception and interactive errors. Furthermore, the accuracy of eye tracking and mid-air gesture control is not yet comparable with that of traditional pointing techniques (e.g., the mouse). This may cause pointing ambiguity when fine-grained interactions are required, such as selection in a dense virtual scene where proximity and occlusion are prone to occur. This thesis addresses these concerns through experimental studies and theoretical analysis involving paradigm design, development of interactive prototypes, and user studies for verification of assumptions, comparisons and evaluations. Substantial data sets were obtained and analysed from each experiment.
The results conform to and extend previous empirical findings that gaze leads pointing-device movements in most cases, both spatially and temporally. The studies confirm that gaze does introduce spatial misperception; three methods (Scaling, Magnet and Dual-gaze) were proposed and shown to reduce the impact of this perceptual conflict, with Magnet and Dual-gaze delivering better performance than Scaling. In addition, a coarse-to-fine solution is proposed and evaluated to compensate for the degradation introduced by eye-tracking inaccuracy, which uses a gaze cone to detect ambiguity followed by a gaze probe for decluttering. The results show that this solution can enhance interaction accuracy but requires a compromise on efficiency. These findings can inform a more robust multimodal interface design for interactions within virtual environments supported by both eye tracking and mid-air gesture control. This work also opens up a technical pathway for the design of future multimodal interaction techniques, which starts from a derivation of natural correlated behavioural patterns, then considers whether the design of the interaction technique can maintain perceptual constancy and whether any ambiguity among the integrated modalities will be introduced.
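The gaze-cone idea for detecting ambiguity can be sketched very simply: count how many selectable targets fall within a cone around the gaze ray, and trigger a refinement step when there is more than one hit. The function and thresholds below are illustrative assumptions, not the implementation from this thesis.

```python
import numpy as np

def targets_in_gaze_cone(eye, gaze_dir, targets, half_angle_deg=5.0):
    """Return indices of targets lying inside a cone around the gaze ray.

    More than one hit signals pointing ambiguity in a dense scene and
    could trigger a finer-grained disambiguation step (e.g. a probe).
    """
    g = np.asarray(gaze_dir, float)
    g = g / np.linalg.norm(g)
    cos_thresh = np.cos(np.radians(half_angle_deg))
    hits = []
    for i, t in enumerate(targets):
        d = np.asarray(t, float) - eye
        d = d / np.linalg.norm(d)
        if d @ g >= cos_thresh:   # angle between gaze and target direction
            hits.append(i)
    return hits

eye = np.zeros(3)
targets = [(0.0, 0.0, 2.0),   # dead ahead        -> inside the cone
           (0.05, 0.0, 2.0),  # ~1.4 deg off axis -> inside the cone
           (2.0, 0.0, 2.0)]   # 45 deg off axis   -> outside
hits = targets_in_gaze_cone(eye, (0.0, 0.0, 1.0), targets)  # -> [0, 1]
```

The cone half-angle would in practice be chosen to match the eye tracker's angular accuracy, which is exactly the trade-off between accuracy and efficiency noted above.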
Expanding tangible tabletop interfaces beyond the display
The rising popularity of interactive tabletops and surfaces is spawning research and innovation in a wide variety of areas, including hardware and software technologies, interaction design and novel interaction techniques, all of which seek to promote richer, more powerful and more natural interaction modalities. Among these modalities, combined interaction on and above the surface, both with gestures and with tangible objects, is a very promising area. This dissertation is about expanding tangible tabletop surfaces beyond the display by exploring and developing a system from three different perspectives: hardware, software, and interaction design. It studies and summarizes the distinctive affordances of conventional 2D tabletop devices, through an extensive literature review and additional use cases developed by the author to support these findings, and subsequently explores the novel, not yet unveiled potential affordances of 3D-augmented tabletops. It overviews the existing hardware solutions for conceiving such a device, and applies the needed hardware modifications to an existing prototype developed and provided to us by Microsoft Research Cambridge. To exploit the full potential of this new device, a computer vision system for 3D interaction is developed that extends conventional 2D tabletop tracking of objects and hands to the detection of hands, fingers and 6DoF markers above the surface, while retaining conventional tangible and touch interaction on the surface. The dissertation finishes by conceiving a complete software framework for developing applications that can benefit from these novel 3D interaction techniques, and implements and tests several software prototypes as proofs of concept using this framework. With these findings, it concludes by presenting continuous tangible interaction gestures, both on and above the surface, and proposing a novel classification that encompasses conventional 2D interaction as well as the extended above-surface interaction developed here.
To what extent does object knowledge bias the perception of goal-directed actions?
Predictive processing accounts of action understanding suggest that inferred goals generate top-down predictions that bias perception towards expected goals. These predictions are thought to be derived, in part, from the affordances of available objects. This thesis had three aims: (1) to test whether high-level action goals based on object knowledge can bias action perception, (2) to investigate the degree to which this perceptual bias can be influenced by high-level person knowledge, or by expertise with particular objects, and (3) to explore the low-level mechanisms underlying the anticipatory representation of action goals associated with objects. Experiments used a modified representational momentum paradigm, as well as RT-based measures. In Chapter 2, we found that the presentation of a prime object led to a predictive bias in the perception of a subsequent action towards a functionally related target object. This bias was present for reaching actions, but not withdrawing actions (Experiment 1a), and persisted even when the functionally related target was presented simultaneously with an unrelated distractor (Experiment 1b). Crucially, the effect was specific to intentional actions and was eliminated when the hand was replaced by a non-biological object following the same trajectory. This finding supports predictive processing views that action perception is guided by goal predictions based on prior knowledge about the context in which the action occurs. We found no evidence that this perceptual bias could be influenced by prior knowledge about the gender of the actor (Chapter 3) or by participants' expertise with particular objects (Chapter 4). Chapter 5 tested for motor biases resulting from object-based goal predictions. Originally designed as a TMS study (Experiment 4a), it was instead run online, using RT measures as an index of motor preparation. We found no evidence that object affordances can be reliably measured using online RTs.
Taken together, these findings highlight the important role of object knowledge in action perception, while showing the limits to which it might be modulated by person knowledge and expertise. The final chapter discusses the challenges of developing robust behavioural measures for online testing of object affordances.
Optical Methods in Sensing and Imaging for Medical and Biological Applications
The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques that can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. It will be a valuable source of information on recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare, for anyone interested in the subject.