Face and Body gesture recognition for a vision-based multimodal analyser
users, computers should be able to recognize emotions by analyzing the human's affective state, physiology and behavior. In this paper, we present a survey of research conducted on face and body gesture recognition. In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way. Accordingly, in this paper, we present a framework for a vision-based multimodal analyzer that combines face and body gestures, and we further discuss relevant issues.
Leap Motion Sensor for Natural User Interface
In the Human-Computer Interaction (HCI) research area, there is an increasing tendency to make devices as simple and as natural to use as possible. These devices aim to make input and output techniques, interaction, etc., easier. In the input domain, sensors monitor and interpret head, eye, face and even whole-body movements. When interacting with the computer, the hands are the most effective general-purpose tool, due to their functionality in both communication and manipulation. Using the hands as an input device is therefore an attractive method for enabling interaction between human and computer. This paper gives an overview of the Leap Motion device as a technology that enables natural interaction between human and computer in a natural user interface (NUI) implementation. The main idea for maximizing the naturalness of the user environment and the use of hand movements is the application of sensor fusion of two Leap Motion devices. Fusing the data of two Leap Motion devices increases the range of hand movement and the interaction within the environment, which in turn contributes to the natural user interface. The suggested method can be used offline or in real time, and can benefit a wide range of applications where hand and finger gestures are the focus of estimation.
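The fusion idea above can be sketched in code. This is a minimal, illustrative example only, assuming the two sensors' readings have already been transformed into a shared coordinate frame and that each reading carries a per-sensor confidence value; the function name, data layout, and numbers are hypothetical and not taken from the paper.

```python
# Hedged sketch: confidence-weighted fusion of palm positions reported by
# two Leap Motion sensors. All names and values here are illustrative
# assumptions, not the paper's actual method or the Leap Motion SDK API.

def fuse_palm_position(reading_a, reading_b):
    """Fuse two (x, y, z, confidence) palm readings, already expressed in a
    shared coordinate frame, into one position weighted by confidence."""
    *pos_a, conf_a = reading_a
    *pos_b, conf_b = reading_b
    total = conf_a + conf_b
    if total == 0:
        raise ValueError("both sensors report zero confidence")
    # Weighted average: the sensor with the clearer view dominates.
    return tuple((a * conf_a + b * conf_b) / total
                 for a, b in zip(pos_a, pos_b))

# Example: sensor A sees the hand clearly; sensor B is near its range limit.
fused = fuse_palm_position((100.0, 200.0, 50.0, 0.9),
                           (110.0, 190.0, 60.0, 0.1))
# fused ≈ (101.0, 199.0, 51.0)
```

A fusion like this also extends coverage: when the hand leaves one sensor's field of view, that sensor's confidence drops toward zero and the other sensor's estimate takes over smoothly.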
3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities
Moving from a set of independent virtual worlds to an integrated network of 3D virtual worlds, or Metaverse, rests on progress in four areas: immersive realism, ubiquity of access and identity, interoperability, and scalability. For each area, the current status and the developments needed to achieve a functional Metaverse are described. Factors that support the formation of a viable Metaverse, such as institutional and popular interest and ongoing improvements in hardware performance, and factors that constrain the achievement of this goal, including limits in computational methods and unrealized collaboration among virtual world stakeholders and developers, are also considered.