
    Review of human-machine interaction towards industry 5.0: human-centric smart manufacturing

    Human-centric smart manufacturing (HCSM) is one of the essential pillars of Industry 5.0. Hence, human-machine interaction (HMI), at the centre of the research agenda for advancing smart manufacturing, has also become a focus of Industry 5.0. Because Industry 5.0 proposes three core concepts, human-centricity, sustainability and resilience, the design orientation of HMI needs to change accordingly. By understanding the state of the art of HMI research, the technology roadmap for HMI development in the smart manufacturing paradigm can be shaped. This paper reviews how HMI has been applied in smart manufacturing and anticipates future opportunities and challenges in applying HMI to HCSM. We provide an HMI framework based on the interaction process and analyse existing HMI research across four key aspects: 1) Sensor and Hardware, 2) Data Processing, 3) Transmission Mechanism, and 4) Interaction and Collaboration. For each aspect, we analyse current developments and technologies and their possible applications in HCSM. Finally, potential challenges and opportunities in future HMI research and applications are discussed and evaluated, especially as the design focus in HCSM shifts from improving productivity to worker well-being and sustainability.

    Enhanced Multi-Touch Gestures for Complex Tasks

    Recent technological advances have driven a major shift from high-performance notebook and desktop computers, devices that rely on keyboard and mouse for input, towards smaller personal devices like smartphones, tablets and smartwatches, which rely primarily on touch input. Users of these devices are typically quite skilled at interacting with them through multi-touch gestures, but the supported gesture sets are often restricted to a small set of one- and two-finger gestures, such as tap, double tap, drag, flick, pinch and spread. This is not due to technical limitations, since modern multi-touch smartphones and tablets can accept at least ten simultaneous points of contact. Likewise, human movement models suggest that people are capable of richer and more expressive forms of interaction that use multiple fingers. This suggests a gap between the technical capabilities of multi-touch devices, the physical capabilities of end-users, and the gesture sets implemented for these devices. Our work explores ways to enrich multi-touch interaction on these devices by expanding these common gesture sets. Simple gestures are fine for simple use cases, but supporting a wide range of sophisticated behaviours, the types of interactions required by expert users, demands equally sophisticated capabilities from our devices. In this thesis, we refer to these more sophisticated, complex interactions as 'enhanced gestures' to distinguish them from common but simple gestures, and to suggest the types of expert scenarios we target in their design. We do not necessarily need to replace current, familiar gestures, but it makes sense to consider augmenting them as multi-touch becomes more prevalent and is applied to more sophisticated problems. This research explores issues of approachability and user acceptance around gesture sets. Using pinch-to-zoom as an example, we establish design guidelines for enhanced gestures, and we systematically design, implement and evaluate two types of expert gestures, illustrative of the functionality that we might build into future systems.
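
    To make the gesture mechanics concrete, the following is a minimal TypeScript sketch of two-finger pinch/spread detection using the standard Web Touch Events API (touchstart, touchmove, touchend, and the TouchEvent.touches list). It is not taken from the thesis; the element id and the console logging are illustrative assumptions.

        // Minimal sketch of pinch/spread recognition via the Web Touch Events API.
        // "gesture-surface" is a hypothetical element id used for illustration.

        function distance(a: Touch, b: Touch): number {
          return Math.hypot(a.clientX - b.clientX, a.clientY - b.clientY);
        }

        let startDistance: number | null = null;

        const surface = document.getElementById("gesture-surface")!;

        surface.addEventListener("touchstart", (e: TouchEvent) => {
          // TouchEvent.touches lists every active contact point; modern devices
          // report ten or more, though common gesture sets use only one or two.
          if (e.touches.length === 2) {
            startDistance = distance(e.touches[0], e.touches[1]);
          }
        });

        surface.addEventListener("touchmove", (e: TouchEvent) => {
          if (startDistance !== null && e.touches.length === 2) {
            const scale = distance(e.touches[0], e.touches[1]) / startDistance;
            // scale > 1 indicates a spread (zoom in); scale < 1 a pinch (zoom out).
            console.log(scale > 1 ? "spread" : "pinch", scale.toFixed(2));
          }
        });

        surface.addEventListener("touchend", () => {
          startDistance = null;
        });

    The same touches list would let a recogniser track more than two contacts, which is the headroom the thesis's 'enhanced gestures' aim to exploit.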