4,693 research outputs found
Biosensing and Actuation—Platforms Coupling Body Input-Output Modalities for Affective Technologies
Research in the use of ubiquitous technologies, tracking systems and wearables within
mental health domains is on the rise. In recent years, affective technologies have gained
traction and garnered the interest of interdisciplinary fields as the research on such technologies
matured. However, while the role of movement and bodily experience in affective experience is
well established, it has been unclear how best to address movement and engagement beyond
measuring cues and signals in technology-driven interactions. In a joint industry-academia effort, we aim to
remodel how affective technologies can help address body and emotional self-awareness. We present
an overview of biosignals that have become standard in low-cost physiological monitoring and show
how these can be matched with methods and engagements used by interaction designers skilled in
designing for bodily engagement and aesthetic experiences. Taking both strands of work together offers
unprecedented design opportunities that inspire further research. Through first-person soma design,
an approach that draws upon the designer’s felt experience and puts the sentient body at the forefront,
we outline a comprehensive framework for the creation of novel interactions in the form of couplings that
combine biosensing and body feedback modalities of relevance to affective health. These couplings lie
within the creation of design toolkits that have the potential to render rich embodied interactions to
the designer/user. As a result, we introduce the concept of “orchestration”. By orchestration, we refer
to the design of the overall interaction: coupling sensors to actuation of relevance to the affective
experience; initiating and closing the interaction; habituating; helping improve the users’ body
awareness and engagement with emotional experiences; soothing, calming, or energising, depending
on the affective health condition and the intentions of the designer. Through the creation of a
range of prototypes and couplings we elicited requirements on broader orchestration mechanisms.
First-person soma design lets researchers look afresh at biosignals that, when experienced through
the body, can reshape affective technologies with novel ways to interpret biodata, feel it,
understand it, and reflect upon our bodies.
Non-Verbal Communication with Physiological Sensors: The Aesthetic Domain of Wearables and Neural Networks
Historically, communication implies the transfer of information between bodies, yet this
phenomenon is constantly adapting to new technological and cultural standards. In a
digital context, it’s commonplace to envision systems that revolve around verbal modalities.
However, behavioural analysis grounded in psychology research calls attention to
the emotional information disclosed by non-verbal social cues, in particular, actions that
are involuntary. This notion has circulated heavily into various interdisciplinary computing
research fields, from which multiple studies have arisen, correlating non-verbal
activity to socio-affective inferences. These are often derived from some form of motion
capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that
occur from inside the body.
This thesis proposes a motivation and methodology for using physiological sensory
data as an expressive resource for technology-mediated interactions. It begins with a
thorough discussion of state-of-the-art technologies and established design principles
on this topic, which is then applied to a novel approach alongside a selection of practice
works that complement it. We advocate for aesthetic experience, experimenting with
abstract representations. Unlike prevailing Affective Computing systems, the
intention is not to infer or classify emotion but rather to create new opportunities for rich
gestural exchange, unconfined to the verbal domain.
Given the preliminary proposition of non-representation, we justify a correspondence
with modern Machine Learning and multimedia interaction strategies, applying an iterative,
human-centred approach to improve personalisation without compromising the
emotional potential of bodily gesture. Where related studies in the past have successfully
provoked strong design concepts through innovative fabrications, these are typically limited
to simple linear, one-to-one mappings and often neglect multi-user environments;
here we foresee vast potential. In our use cases, we adopt neural network architectures to
generate highly granular biofeedback from low-dimensional input data.
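The contrast drawn above, between a linear one-to-one mapping and a learned many-to-many one, can be sketched minimally. The decoder below is hypothetical (random, untrained weights, invented dimensions) and only illustrates the shape of such a mapping: a 2-dimensional biosignal state expanded non-linearly into a 16-dimensional feedback frame, rather than one sensor driving one actuator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder: expands a 2-D biosignal state (e.g. breath depth
# and rate, both normalised to [0, 1]) into 16 actuator intensities.
# Weights are random purely to illustrate the shape of the mapping; in a
# real use case they would be learned from data.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 16))
b2 = np.zeros(16)

def decode(x):
    """Map a low-dimensional input to a high-dimensional feedback frame."""
    h = np.tanh(x @ W1 + b1)                  # non-linear hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # intensities squashed into (0, 1)

frame = decode(np.array([0.6, 0.3]))
print(frame.shape)  # one intensity per actuator
```

Because the hidden layer is non-linear, every output channel depends on both inputs jointly, which is what distinguishes this from the simple linear mappings the thesis critiques.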
We present the following proof-of-concepts: Breathing Correspondence, a wearable
biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time auto-encoder to represent bodily experiences from sensor data, designed for dance performance;
and Anti-Social Distancing Ensemble, an installation for public space interventions,
analysing physical distance to generate a collective soundscape. Key findings are
extracted from the individual reports to formulate an extensive technical and theoretical
framework around this topic. The projects first aim to embrace alternative perspectives
to those already established within Affective Computing research. From here, these concepts
evolve deeper, bridging theories from contemporary creative and technical practices with
the advancement of biomedical technologies.
Requirements Engineering in the Market Dialogue Phase of Public Procurement: A Case Study of an Innovation Partnership for Medical Technology
Context and Motivation:
In 2016, the European Union introduced ‘innovation partnerships’ to facilitate innovative development of the EU through public procurement. Requirements engineering is one of the main challenges in the public procurement of innovative products. Nevertheless, there is little empirical research on public procurement, particularly managing requirements in the pre-tender dialogue phase between potential suppliers and problem owners.
Question/Problem:
This paper investigates the market dialogue phase of an innovation partnership project in Norway. We aim to understand critical factors of the dialogue phase that clarify and focus needs and requirements. This leads to the research question: How can we clarify and focus needs and requirements for a new solution in the market dialogue phase?
Principal Ideas/Results:
We have conducted a case study at a major Norwegian hospital. The objective of this innovation partnership is to make the emergency room in a Norwegian hospital more efficient. The case study illustrates how requirements have been developed by the joint effort of the procurement team, the active engagement of potential suppliers, and the learning and mutual trust between them. By discussing the vision and getting feedback on opportunities and limitations in existing and projected technologies, the procurement team has refined their ambition and focused on the core of the innovation.
Contribution:
This paper contributes to the literature on requirements engineering in public procurement by describing how requirements are focused during the dialogue phase of an innovation partnership facilitated by a cross-functional procurement team with sufficient competencies, resources, and trust.
Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena
Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected on a wearable, small form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas of (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.
Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking
Continuous assessment of task difficulty and mental workload is essential in
improving the usability and accessibility of interactive systems. Eye tracking
data has often been investigated to achieve this ability, with reports on the
limited role of standard blink metrics. Here, we propose a new approach to the
analysis of eye-blink responses for automated estimation of task difficulty.
The core module is a time-frequency representation of eye-blink, which aims to
capture the richness of information reflected in blinking. In our first study,
we show that this method significantly improves the sensitivity to task
difficulty. We then demonstrate how to form a framework where the represented
patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent
neural networks for their non-linear mapping onto difficulty-related
parameters. This framework outperformed other methods that used hand-engineered
features. This approach works with any built-in camera, without requiring
specialized devices. We conclude by discussing how Rethinking Eye-blink can
benefit real-world applications.

Comment: [Accepted version] In Proceedings of the CHI Conference on Human Factors
in Computing Systems (CHI '21), May 8-13, 2021, Yokohama, Japan. ACM, New
York, NY, USA. 19 pages. https://doi.org/10.1145/3411764.344557
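The abstract above centres on a time-frequency representation of blinking. The sketch below shows one generic way such a representation could be built (not necessarily the authors' exact method): a toy blink-onset series, sampled at a camera-typical 30 Hz, is passed through a windowed short-time Fourier transform, producing a spectrogram of how blink rhythms evolve over time. The sampling rate, window sizes, and toy data are all assumptions for illustration.

```python
import numpy as np

# Toy blink-onset series: one minute of video at 30 frames per second,
# with a blink roughly every 3 seconds (illustrative data only).
fs = 30
t = np.arange(0, 60, 1 / fs)
blinks = np.zeros_like(t)
blinks[::fs * 3] = 1.0

# Short-time Fourier transform: Hann-windowed frames with 64-frame hop.
win, hop = 256, 64
frames = [blinks[i:i + win] * np.hanning(win)
          for i in range(0, len(blinks) - win + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))  # (time, frequency) magnitudes
print(spec.shape)
```

Each row of `spec` is a spectral snapshot of blinking behaviour over one window; a sequence model such as the LSTM described in the abstract could then consume these rows in order.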