Human movement modifications induced by different levels of transparency of an active upper limb exoskeleton
Active upper limb exoskeletons are a potentially powerful tool for neuromotor rehabilitation. This potential depends on several basic control modes, one of them being transparency. In this control mode, the exoskeleton must follow the human movement without altering it, which theoretically implies null interaction efforts. Reaching high, albeit imperfect, levels of transparency requires both an adequate control method and an in-depth evaluation of the exoskeleton's impact on human movement. The present paper introduces such an evaluation for three different “transparent” controllers, based respectively on an identification of the exoskeleton's dynamics, on force feedback control, or on their combination. These controllers are therefore likely to induce clearly different levels of transparency by design. The conducted investigations could allow a better understanding of how humans adapt to transparent controllers, which are necessarily imperfect. A group of fourteen participants was subjected to these three controllers while performing reaching movements in a parasagittal plane. The subsequent analyses were conducted in terms of interaction efforts, kinematics, electromyographic signals, and ergonomic feedback questionnaires. Results showed that, when subjected to the less performant transparent controllers, participants' strategies tended to induce relatively high interaction efforts and higher muscle activity, which resulted in a low sensitivity of kinematic metrics. In other words, very different residual interaction efforts do not necessarily induce very different movement kinematics. Such behavior could be explained by a natural human tendency to expend effort to preserve preferred kinematics, which should be taken into account in future evaluations of transparent controllers.
Peering into the Dark: Investigating dark matter and neutrinos with cosmology and astrophysics
The ΛCDM model of modern cosmology provides a highly accurate description of our universe. However, it relies on two mysterious components, dark matter and dark energy. The cold dark matter paradigm does not provide a satisfying description of its particle nature, nor any link to the Standard Model of particle physics.
I investigate the consequences for cosmological structure formation in models with a coupling between dark matter and Standard Model neutrinos, as well as probes of primordial black holes as dark matter.
I examine the impact that such an interaction would have through both linear perturbation theory and nonlinear N-body simulations. I present limits on the possible interaction strength from cosmic microwave background, large-scale structure, and galaxy population data, as well as forecasts on future sensitivity. I provide an analysis of what is necessary to distinguish the cosmological impact of interacting dark matter from similar effects. Intensity mapping of the 21 cm line of neutral hydrogen at high redshift using next-generation observatories, such as the SKA, would provide the strongest constraints yet on such interactions, and may be able to distinguish between different scenarios causing suppressed small-scale structure. I also present a novel type of probe of structure formation, using the cosmological gravitational wave signal of high-redshift compact binary mergers to provide information about structure formation, and thus the behaviour of dark matter. Such observations would also provide competitive constraints.
Finally, I investigate primordial black holes as an alternative dark matter candidate, presenting an analysis and framework for the evolution of extended mass populations over cosmological time and computing the present-day gamma ray signal, as well as the allowed local evaporation rate. This is used to set constraints on the allowed population of low-mass primordial black holes, and the likelihood of witnessing an evaporation.
Life in the Fells: names in a nineteenth-century Cumberland landscape
This thesis examines the field-names of Crosthwaite parish, Cumberland. A survey of the field-names and a corresponding glossary of elements and their localised usage(s) within the study area, some previously unattested, form a significant part of the thesis.
The field-name data is compiled chiefly from nineteenth-century Tithe Awards, which record the names and descriptions of Crosthwaite’s 8,626 land units, 3,351 of which are field-names (3.4.1). These 3,351 field-names, recorded in the survey (Chapter Four), contain 6,052 elements which fall into 586 element types, presented in the glossary (Chapter Five).
The work of this thesis is underpinned by the data from two key resources which were created as part of this research: a) a field-name dataset composed of all linguistic data held within the Tithe Awards for the parish (3.1); and b) an interactive digital map of all 8,626 land units, into which the field-name data is embedded (3.3). The first resource – the onomastic data – allows for the field-names to be analysed linguistically. The second – the cartographical data – allows for the field-names to be analysed spatially, enabling the evidence of the landscape to inform the interpretation and analysis of the names.
A quantitative analysis of all Crosthwaite’s field-name elements (Chapter Six) highlights the close relationship between the language of the field-names and the landscape they describe. The extent to which the field-names reflect their landscape is marked and is observable both in the use of individual elements, and in the language use of townships within the parish more broadly.
The survey (Chapter Four) and glossary (Chapter Five) constitute a substantial contribution to the available field-name data for Cumberland, and for England more generally, supplementing the English Place-Name Society survey for Cumberland.
Other key findings from this research (Chapter Seven) include the discovery of metaphorical elements unattested elsewhere, as well as other elements or element usages particular to the study area. Field-names which provide evidence for lost place-names, and instances of toponomastic overlap between England and Scotland, are observable within the data of this thesis; a lack of genitival -s in personal names within field-names is likewise notable. This thesis advocates for the development and implementation of a new field-name terminology model.
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer—a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.
This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms.
Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
Data-Driven Evaluation of In-Vehicle Information Systems
Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens.
In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: First, results from qualitative or small-scale empirical studies are often not valued in the decision-making process. Second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools to help them visualize and analyze customer usage data and computational methods to automatically evaluate IVIS designs.
In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data, collected over-the-air from customer vehicles and visualizes it on different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs concerning performance and safety-related metrics.
In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and increase the time spent looking at the center stack touchscreen. These results emphasize the importance of context-dependent distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior.
Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
Kinematic markers of skill in first-person shooter video games
Video games present a unique opportunity to study motor skill. First-person shooter (FPS) games have particular utility because they require visually guided hand movements that are similar to widely studied planar reaching tasks. However, there is a need to ensure the tasks are equivalent if FPS games are to yield their potential as a powerful scientific tool for investigating sensorimotor control. Specifically, research is needed to ensure that differences in visual feedback of a movement do not affect motor learning between the two contexts. In traditional tasks, a movement translates a cursor across a static background, whereas FPS games use movements to pan and tilt the view of the environment. To this end, we designed an online experiment in which participants used their mouse or trackpad to shoot targets in both visual contexts. Kinematic analysis showed player movements were nearly identical between contexts, with highly correlated spatial and temporal metrics. This similarity suggests a shared internal model based on comparing predicted and observed displacement vectors rather than primary sensory feedback. A second experiment, modeled on FPS-style aim-trainer games, found movements exhibited classic invariant features described within the sensorimotor literature. We found the spatial metrics tested were significant predictors of overall task performance. More broadly, these results show that FPS games offer a novel, engaging, and compelling environment to study sensorimotor skill, providing the same precise kinematic metrics as traditional planar reaching tasks.
Facilitating Extended Reality in Museums through a Web-Based Application
Master's thesis in Software Development, in collaboration with HVL (PROG399, MAMN-PRO).
Augmenting Pathologists with NaviPath: Design and Evaluation of a Human-AI Collaborative Navigation System
Artificial Intelligence (AI) brings advancements to support pathologists in navigating high-resolution tumor images to search for pathology patterns of interest. However, existing AI-assisted tools have not realized this promised potential due to a lack of insight into pathology and HCI considerations for pathologists' navigation workflows in practice. We first conducted a formative study with six medical professionals in pathology to capture their navigation strategies. By incorporating our observations along with the pathologists' domain knowledge, we designed NaviPath -- a human-AI collaborative navigation system. An evaluation study with 15 medical professionals in pathology indicated that: (i) compared to manual navigation, participants saw more than twice the number of pathological patterns in unit time with NaviPath, and (ii) participants achieved higher precision and recall on average than with AI-only and manual navigation. Further qualitative analysis revealed that navigation was more consistent with NaviPath, which can improve the overall examination quality.
Comment: Accepted at the ACM CHI Conference on Human Factors in Computing Systems (CHI '23).