130 research outputs found
Data-Driven Evaluation of In-Vehicle Information Systems
Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. Interacting with these systems requires drivers to take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens.
In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: First, results from qualitative or small-scale empirical studies are often not valued in the decision-making process. Second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools to help them visualize and analyze customer usage data and computational methods to automatically evaluate IVIS designs.
In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data collected over-the-air from customer vehicles and visualizes them at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs with respect to performance- and safety-related metrics.
In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and increase the time spent looking at the center stack touchscreen. These results emphasize the importance of context-dependent driver distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior.
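The general idea of predicting visual demand from interaction and context features, with feature importances as a global explanation, can be sketched as follows. This is an illustrative reconstruction under assumed features (taps per sequence, menu depth, speed, curvature) and synthetic data, not the thesis's actual model or dataset:

```python
# Illustrative sketch: predict the visual demand (total glance time, seconds)
# of a touchscreen interaction from hypothetical interaction/context features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
# Assumed features: taps per sequence, menu depth, speed (km/h), curvature (1/m)
X = np.column_stack([
    rng.integers(1, 10, n),
    rng.integers(1, 5, n),
    rng.uniform(0, 130, n),
    rng.uniform(0, 0.05, n),
])
# Synthetic target: glance time grows with taps and menu depth
y = 0.4 * X[:, 0] + 0.8 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.3, n)

model = GradientBoostingRegressor().fit(X, y)
pred = model.predict(X[:5])                  # visual demand for unseen designs
importances = model.feature_importances_     # crude global explanation
```

In practice, local explanation methods (e.g. per-prediction attributions) would complement these global importances, mirroring the local/global distinction the abstract describes.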
Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback
The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. Amid the ever-increasing emergence of technology which quantifies or interprets physiological processes, we must also remain conscious of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides details not only on novel interaction, but also an understanding of how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. The understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which uses measurement of laryngeal muscular activations through surface electromyography.
Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also questioned existing understanding of the body. The thesis also uncovers ways in which vocalists are in control of and controlled by their bodies, work with and against them, and feel as a single entity at times and totally separate entities at others. I conclude this thesis by demonstrating a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of and influence over embodied understanding. This further highlights the need to understand the human experience in embodied interaction, rather than relying solely on digital interpretation, when introducing technology into these relationships.
Adaptive Automated Machine Learning
The ever-growing demand for machine learning has led to the development of automated machine learning (AutoML) systems that can be used off the shelf by non-experts. Further, the demand for ML applications with high predictive performance exceeds the supply of machine learning experts, making the development of AutoML systems necessary. Automated Machine Learning tackles the problem of finding machine learning models with high predictive performance. Existing approaches incorporating deep learning techniques assume that all data is available at the beginning of the training process (offline learning). They configure and optimise a pipeline of preprocessing, feature engineering, and model selection by choosing suitable hyperparameters in each step of the model pipeline. Furthermore, they assume that the user is fully aware of the choice and, thus, the consequences of the underlying metric (such as precision, recall, or F1-measure). By varying this metric, the search for suitable configurations, and thus the adaptation of algorithms, can be tailored to the user’s needs. With a vast amount of data created every day from all kinds of sources, our capability to process and understand these data sets in a single batch is no longer viable. By training machine learning models incrementally (i.e., online learning), the flood of data can be processed sequentially within data streams. However, if one assumes an online learning scenario, in which an AutoML instance executes on evolving data streams, the question of the best model and its configuration remains open.
In this work, we address the adaptation of AutoML in an offline learning scenario toward a certain utility an end-user might pursue, as well as the adaptation of AutoML towards evolving data streams in an online learning scenario, with three main contributions:
1. We propose a system that allows the adaptation of AutoML and the search for neural architectures towards a particular utility an end-user might pursue.
2. We introduce an online deep learning framework that fosters the research of deep learning models under the online learning assumption and enables the automated search for neural architectures.
3. We introduce an online AutoML framework that allows the incremental adaptation of ML models.
We evaluate the contributions individually, in accordance with predefined requirements and against state-of-the-art evaluation setups. The outcomes lead us to conclude that (i) AutoML, as well as systems for neural architecture search, can be steered towards individual utilities by learning a designated ranking model from pairwise preferences and using the latter as the target function for the offline learning scenario; (ii) architecturally small neural networks are in general suitable under an online learning assumption; (iii) the configuration of machine learning pipelines can automatically be adapted to ever-evolving data streams, leading to better performance.
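The idea in conclusion (i), learning a ranking model from pairwise preferences and using it as a search target, can be sketched with a RankSVM-style reduction: fit a linear model on feature differences of candidate pipelines. The features, the hidden utility, and the use of logistic regression are all illustrative assumptions, not the thesis's implementation:

```python
# Minimal sketch: recover a utility direction from pairwise preferences,
# then score candidate ML pipelines with it (a RankSVM-style reduction).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Each candidate pipeline described by assumed features, e.g.
# (precision, recall, training cost); the user's true utility is hidden.
candidates = rng.uniform(0, 1, size=(30, 3))
true_utility = lambda f: 2.0 * f[0] + 0.5 * f[1] - 1.0 * f[2]

# Elicit pairwise preferences: for sampled pairs (a, b), is a preferred to b?
pairs, labels = [], []
for _ in range(200):
    a, b = rng.integers(0, len(candidates), 2)
    pairs.append(candidates[a] - candidates[b])
    labels.append(int(true_utility(candidates[a]) > true_utility(candidates[b])))

# A linear model on feature differences yields a utility direction; its score
# can then serve as the target function for the AutoML/NAS search.
ranker = LogisticRegression().fit(np.array(pairs), labels)
scores = candidates @ ranker.coef_.ravel()  # higher = preferred by the user
```

The learned coefficient vector plays the role of the "designated ranking model": the search procedure maximises `scores` instead of a single fixed metric.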
Geographic information extraction from texts
A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although great progress has been made in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data, to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, but also to identify research gaps in geographic information extraction.
NON-VERBAL COMMUNICATION WITH PHYSIOLOGICAL SENSORS. THE AESTHETIC DOMAIN OF WEARABLES AND NEURAL NETWORKS
Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it’s commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular, actions that are involuntary. This notion has circulated heavily into various interdisciplinary computing research fields, from which multiple studies have arisen, correlating non-verbal activity to socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur from inside the body.
This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles regarding this topic, which is then applied to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Atypically for prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain.
Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies in the past have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee a vast potential here. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data.
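The core mapping idea, a network that expands a low-dimensional physiological signal into many biofeedback parameters, can be sketched as follows. The architecture, sizes, and EMG-envelope inputs are illustrative assumptions, not the systems described in the abstract:

```python
# Hedged sketch: a tiny decoder network mapping a low-dimensional signal
# (e.g. two EMG envelope values) to a higher-dimensional control vector
# suitable for driving sonification parameters.
import numpy as np

rng = np.random.default_rng(42)

class Decoder:
    """2 inputs -> 16 hidden units (tanh) -> 64 synthesis parameters in (0, 1)."""
    def __init__(self):
        self.w1 = rng.normal(0, 0.5, (2, 16))
        self.w2 = rng.normal(0, 0.5, (16, 64))

    def __call__(self, x):
        h = np.tanh(x @ self.w1)
        return 1 / (1 + np.exp(-(h @ self.w2)))  # sigmoid keeps outputs in (0, 1)

decoder = Decoder()
emg = np.array([0.2, 0.7])   # normalised muscle activation levels (assumed input)
params = decoder(emg)        # 64 granular values, e.g. oscillator amplitudes
```

In a trained system (such as an autoencoder's decoder half), the weights would be learned from sensor data rather than randomly initialised; the sketch only shows the low-to-high dimensional expansion.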
We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by somaesthetic design principles; Latent Steps, a real-time autoencoder that represents bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public space interventions that analyses physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace some alternative perspectives already established within Affective Computing research. From here, these concepts evolve deeper, bridging theories from contemporary creative and technical practices with the advancement of biomedical technologies.
Modern Socio-Technical Perspectives on Privacy
This open access book provides researchers and professionals with a foundational understanding of online privacy as well as insight into the socio-technical privacy issues that are most pertinent to modern information systems, covering several modern topics (e.g., privacy in social media, IoT) and underexplored areas (e.g., privacy accessibility, privacy for vulnerable populations, cross-cultural privacy). The book is structured in four parts, which follow after an introduction to privacy on both a technical and social level: Privacy Theory and Methods covers a range of theoretical lenses through which one can view the concept of privacy. The chapters in this part relate to modern privacy phenomena, thus emphasizing its relevance to our digital, networked lives. Next, Domains covers a number of areas in which privacy concerns and implications are particularly salient, including among others social media, healthcare, smart cities, wearable IT, and trackers. The Audiences section then highlights audiences that have traditionally been ignored when creating privacy-preserving experiences: people from other (non-Western) cultures, people with accessibility needs, adolescents, and people who are underrepresented in terms of their race, class, gender or sexual identity, religion or some combination. Finally, the chapters in Moving Forward outline approaches to privacy that move beyond one-size-fits-all solutions, explore ethical considerations, and describe the regulatory landscape that governs privacy through laws and policies. Perhaps even more so than the other chapters in this book, these chapters are forward-looking by using current personalized, ethical and legal approaches as a starting point for re-conceptualizations of privacy to serve the modern technological landscape. 
The book’s primary goal is to inform IT students, researchers, and professionals about both the fundamentals of online privacy and the issues that are most pertinent to modern information systems. Lecturers or teachers can assign (parts of) the book for a “professional issues” course. IT professionals may select chapters covering domains and audiences relevant to their field of work, as well as the Moving Forward chapters that cover ethical and legal aspects. Academics who are interested in studying privacy or privacy-related topics will find a broad introduction to both the technical and social aspects.
RUNTIME AUDIT OF NEURAL SEQUENCE MODELS FOR NLP
Neural network sequence models have become a fundamental building block for natural language processing (NLP) applications. However, with the increasing performance and widespread adoption of these models, the social effects caused by errors in these models' outputs are also amplified. This thesis aims to mitigate such adverse effects by studying different methods that generate user-interpretable auxiliary signals along with model predictions, thus enabling efficient audits of the model output at runtime.
We will look at two different types of auxiliary signals, generated for the input and the output of the model respectively. The first type explains which input tokens are important for a certain prediction (Chapters 3 and 4), while the second estimates the quality of each output token (Chapters 5 and 6). For model explanations, our focus is to establish a comprehensive and quantitative evaluation framework, thus enabling a systematic comparison of different model explanation methods on a diverse set of architectures and configurations. For quality estimations, because there is already a solid evaluation framework in place, we instead focus on improving the state of the art by introducing an end-task-oriented pre-training step that is based on a non-autoregressive neural machine translation architecture. Overall, we show that it is possible to generate auxiliary signals of high quality with little to no human supervision, and we also provide some guidance for best practices regarding future applications of these methods to NLP, such as conducting comprehensive quantitative evaluations for the auxiliary signals before deployment, and selecting the appropriate evaluation metric that best suits the user's goal.
Intelligent Transportation Related Complex Systems and Sensors
Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of the transportation system. They enable users to be better informed and make safer, more coordinated, and smarter decisions on the use of transport networks. Current ITSs are complex systems, made up of several components/sub-systems characterized by time-dependent interactions among themselves. Some examples of these transportation-related complex systems include: road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others that are emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of sensors/actuators used to capture and control the physical parameters of these systems, as well as the quality of data collected from these systems; ii) tackling complexities using simulations and analytical modelling techniques; and iii) applying optimization techniques to improve the performance of these systems. This collection includes twenty-four papers, which cover scientific concepts, frameworks, architectures, and various other ideas on the analytics, trends, and applications of transportation-related data.
Tailoring Interaction. Sensing Social Signals with Textiles.
Nonverbal behaviour is an important part of conversation and can reveal much about the nature of an interaction. It includes phenomena ranging from large-scale posture shifts to small-scale nods. Capturing these often spontaneous phenomena requires unobtrusive sensing techniques that do not interfere with the interaction. We propose an underexploited sensing modality for sensing nonverbal behaviours: textiles. As a material in close contact with the body, they provide ubiquitous, large surfaces that make them a suitable soft interface. Although the literature on nonverbal communication focuses on upper body movements such as gestures, observations of multi-party, seated conversations suggest that sitting postures, leg and foot movements are also systematically related to patterns of social interaction. This thesis addresses the following questions: Can the textiles surrounding us measure social engagement? Can they tell who is speaking, and who, if anyone, is listening? Furthermore, how should wearable textile sensing systems be designed and what behavioural signals could textiles reveal? To address these questions, we have designed and manufactured bespoke chairs and trousers with integrated textile pressure sensors, which are introduced here. The designs are evaluated in three user studies that produce multi-modal datasets for the exploration of fine-grained interactional signals. Two approaches to using these bespoke textile sensors are explored. First, hand-crafted sensor patches in chair covers serve to distinguish speakers and listeners. Second, a pressure-sensitive matrix in custom-made smart trousers is developed to detect static sitting postures, dynamic bodily movement, as well as basic conversational states.
Statistical analyses, machine learning approaches, and ethnographic methods show that by monitoring patterns of pressure change alone it is possible to not only classify postures with high accuracy, but also to identify a wide range of behaviours reliably in individuals and groups. These findings establish textiles as a novel, wearable sensing system for applications in social sciences, and contribute towards a better understanding of nonverbal communication, especially the significance of posture shifts when seated. If chairs know who is speaking, if our trousers can capture our social engagement, what role can smart textiles have in the future of human interaction? How can we build new ways to map social ecologies and tailor interactions?
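The posture-classification step, mapping a pressure matrix to a sitting posture label, can be sketched generically. The grid size, the synthetic load patterns, and the choice of a random forest are assumptions for illustration, not the thesis's bespoke hardware or pipeline:

```python
# Minimal sketch: classify sitting postures from a flattened textile
# pressure matrix with a standard off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
ROWS, COLS = 8, 8  # hypothetical sensor grid in the chair cover or trousers

def synthetic_frame(posture):
    """Simulate one pressure frame: leaning shifts load toward one side."""
    frame = rng.uniform(0, 0.2, (ROWS, COLS))  # baseline noise
    if posture == "lean_left":
        frame[:, : COLS // 2] += 0.8
    elif posture == "lean_right":
        frame[:, COLS // 2 :] += 0.8
    else:  # upright: load spread evenly
        frame += 0.4
    return frame.ravel()

postures = ["upright", "lean_left", "lean_right"]
X = np.array([synthetic_frame(p) for p in postures * 50])
y = postures * 50
clf = RandomForestClassifier(random_state=0).fit(X, y)
label = clf.predict([synthetic_frame("lean_left")])[0]
```

Real textile data would add calibration, drift compensation, and temporal features for the dynamic movements and conversational states the abstract mentions; the sketch covers only the static-posture case.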