53 research outputs found
Electric Wheelchair Hybrid Operating System Coordinated with Working Range of a Robotic Arm
Electric wheelchair-mounted robotic arms can help patients with disabilities perform their activities of daily living (ADL). Joysticks or keypads are commonly used as the operating interface of wheelchair-mounted robotic arms, but some patients with upper limb disabilities, such as finger contracture, cannot operate such interfaces smoothly. Recently, manual interfaces tailored to different symptoms have been developed for operating wheelchair-mounted robotic arms. However, stopping the wheelchair in a position appropriate for the robotic arm's grasping task remains difficult. To reduce the individual's burden of operating the wheelchair in narrow spaces and to ensure that the chair always stops within the working range of a robotic arm, we propose an operating system for an electric wheelchair that automatically drives itself to within the working range of a robotic arm by capturing the position of an AR marker via a chair-mounted camera. The system also includes an error correction model to compensate for the wheelchair's movement error. Finally, we demonstrate the effectiveness of the proposed system by running the wheelchair and simulating the robotic arm through several courses.
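The marker-guided docking idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, gains, and workspace radius below are all hypothetical. Given the AR marker's position in the camera frame (lateral offset and forward distance, in metres), a simple proportional controller drives the chair toward the marker and stops once it lies within the arm's assumed working radius:

```python
import math

def drive_command(marker_x, marker_z, workspace_radius=0.5,
                  k_lin=0.8, k_ang=1.5):
    """Compute (linear, angular) velocity commands toward an AR marker.

    marker_x: lateral offset of the marker in the camera frame (m)
    marker_z: forward distance to the marker (m)
    Stops (returns zeros) once the marker is inside the arm's
    assumed working radius. Gains are illustrative, not tuned.
    """
    dist = math.hypot(marker_x, marker_z)
    heading_error = math.atan2(marker_x, marker_z)  # lateral offset -> steering
    if dist <= workspace_radius:
        return 0.0, 0.0  # marker inside the arm's workspace: stop
    # Proportional control on remaining distance and heading error.
    return k_lin * (dist - workspace_radius), k_ang * heading_error
```

In a real system the marker pose would come from a fiducial detector (e.g. an ArUco-style library) and the residual stopping error would be handled by a correction model such as the one the abstract mentions; this sketch only shows the geometric core of the approach.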
Explainable shared control in assistive robotics
Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency.
There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework built on a deep generative model is developed for human intention inference.
Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent.
This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
Context-aware gestural interaction in the smart environments of the ubiquitous computing era
A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

Technology is becoming pervasive and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces.
This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organisation of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, shrink the number of gestures in taxonomies, and improve usability.
In order to validate this framework, a proof-of-concept prototype has been developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides reliable gesture recognition from very different viewpoints, whilst the usability tests have yielded high scores.
Further investigation of the context information has been performed, tackling the problem of user status, understood here as human activity. A technique based on an innovative application of electromyography is proposed, and the tests show that it achieves good activity recognition accuracy.
The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
Playful Materialities
Game culture and material culture have always been closely linked. Analog forms of rule-based play (ludus) would hardly be conceivable without dice, cards, and game boards. In the act of free play (paidia), children as well as adults transform simple objects into multifaceted toys in an almost magical way. Even digital play is suffused with material culture: Games are not only mediated by technical interfaces, which we access via hardware and tangible peripherals. They are also subject to material hybridization, paratextual framing, and processes of de- and re-materialization.
Haptic Media Scenes
The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission, the sending and receiving of haptic signals. The haptic sense may mediate not only contact confirmation and affirmation but also rich semiotic and affective messages, yet there is a strong contrast between this inherent ability of haptic perception and current support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculties when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence, and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception, and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media.
The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages, beyond notification and confirmation interactivity.
Design revolutions: IASDR 2019 Conference Proceedings. Volume 3: People
In September 2019, Manchester School of Art at Manchester Metropolitan University was honoured to host the biennial conference of the International Association of Societies of Design Research (IASDR) under the unifying theme of DESIGN REVOLUTIONS. This was the first time the conference had been held in the UK. Through key research themes across nine conference tracks (Change, Learning, Living, Making, People, Technology, Thinking, Value and Voices), the conference opened up compelling, meaningful and radical dialogue on the role of design in addressing societal and organisational challenges. This Volume 3 includes papers from the People track of the conference.
Digisprudence: the affordance of legitimacy in code-as-law
This multidisciplinary thesis is located at the intersection of legal theory and design. It synthesises the practical question of how code regulates (using theories including James Gibson's and Donald Norman's affordance, Don Ihde's postphenomenology, and Madeleine Akrich's inscription) with a legal-theoretical view of what constitutes legitimate regulation (using theories including Lon Fuller's internal morality of law, Luc Wintgens' legisprudence, and Mireille Hildebrandt's legal protection by design).

Proceeding from the notion that code is an a-legal normative order, I argue that even (and indeed especially) in the absence of suitable or sufficient legal regulation, the norms of that order ought to be legitimated. This is particularly true given the unique characteristics of code as a regulator, which include its ruleishness, opacity, immediacy, immutability, pervasiveness, and, perhaps most importantly, its production by private enterprise. Having set out how code regulates from the perspective of the design theories mentioned above, I explore these characteristics from a legal-theoretical perspective, developing the concept of computational legalism, a uniquely strong form of the undesirable ideological phenomenon of legalism. This is the first significant contribution of the thesis.

Having set up the parallel between legal and technological normativity, I explore the extent to which ex ante mechanisms for ameliorating legalism in the creation of legal norms can be translated into the "legislature" of the design environment, to be applied in the creation of code-based norms. The motivating idea is that the standards that make legal norms legitimate ought mutatis mutandis to be applicable to other normative orders that enable and constrain individual behaviour. The literature has so far tended to focus on ex post assessments of code's operation, and to that extent it fails to account for computational legalism and the standards that must be met (by definition during the production process, ex ante) in order to mitigate or avoid it.

Taking all of this into account, the second significant contribution of the thesis is a "constitutional" framework of digisprudential affordances that I argue ought to be present in all user-facing code, in order to ensure that certain foundational capabilities are provided by the design. The affordances I identify are: contestability; transparency of provenance, purpose, and operation; choice; oversight; and delay. They act as a formal mechanism for constraining what substantive code can possibly do, imposing "thin" constitutional design standards that ought to be met regardless of the "thick" purposes or functionality of the digital artefact. I discuss how these might be implemented in practice through an analysis of two contemporary technologies, the Internet of Things and blockchain applications. This practical element of the thesis connects with the last significant novel contribution, namely an exploration of Cornelia Vismann and Markus Krajewski's concept of the programmer of the programmer, and how this "constitutional actor" can be used to impose digisprudential limits, analogous to HLA Hart's secondary rules, within the code development process.
- …