
    To Develop the Virtual Physics Laboratory by Integrating Kinect with Gesture Classification Algorithm

    Physics is an experimental science: experiments initiate students into physical concepts and principles. To motivate students in learning physics, this study developed a virtual physics laboratory using Kinect, Unity3D, and a gesture classification algorithm. Visual physics experiments were designed within the virtual laboratory. The experimental results show that users can accurately interact with the virtual objects in the virtual physics laboratory, and the developed system provides an engaging way to assist students in learning physics. (International conference held in Kyoto, Japan.)
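    The abstract does not specify the gesture classification algorithm; as a minimal sketch of one common approach for Kinect skeleton data, the example below classifies a joint-trajectory sample by nearest-neighbour matching under dynamic time warping. The template data, shapes, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two joint-position
    sequences of shape (frames, joints * 3)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def classify(sample: np.ndarray, templates: dict) -> str:
    """Return the label of the nearest recorded template gesture."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Hypothetical usage: one template per gesture, recorded from Kinect skeleton
# frames (20 joints * 3 coordinates = 60 values per frame).
templates = {
    "push": np.random.rand(30, 60),   # placeholder trajectories
    "swipe": np.random.rand(25, 60),
}
print(classify(np.random.rand(28, 60), templates))
```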

    Intelligent biohazard training based on real-time task recognition

    Virtual environments offer an ideal setting to develop intelligent training applications. Yet, their ability to support complex procedures depends on the appropriate integration of knowledge-based techniques and natural interaction. In this article, we describe the implementation of an intelligent rehearsal system for biohazard laboratory procedures, based on the real-time instantiation of task models from the trainee's actions. A virtual biohazard laboratory has been recreated using the Unity3D engine, in which users interact with laboratory objects using keyboard/mouse input or hand gestures through a Kinect device. Realistic behavior for objects is supported by the implementation of a relevant subset of common-sense and physics knowledge. User interaction with objects leads to the recognition of specific actions, which are used to progressively instantiate a task-based representation of biohazard procedures. The dynamics of this instantiation process supports trainee evaluation as well as real-time assistance. The system is designed primarily as a rehearsal system providing real-time advice and supporting user performance evaluation. We provide detailed examples illustrating error detection and recovery, and results from on-site testing with students from the Faculty of Medical Sciences at Kyushu University. In the study, we investigate usability by comparing interaction with mouse and Kinect devices, and the effect of real-time task recognition on recovery time after user mistakes.
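    The article's task-model formalism is not reproduced here; the sketch below only illustrates the general idea of progressively instantiating an ordered procedure model from recognized actions and flagging out-of-order steps. The data structure, step names, and messages are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TaskModel:
    """Ordered procedure steps, instantiated as actions are recognized."""
    steps: list                      # expected action names, in order
    done: list = field(default_factory=list)

    def observe(self, action: str) -> str:
        if len(self.done) == len(self.steps):
            return f"ignored: procedure already complete"
        expected = self.steps[len(self.done)]
        if action == expected:
            self.done.append(action)
            return f"ok: {action}"
        if action in self.steps[len(self.done):]:
            # Out-of-order step: the basis for real-time error feedback.
            return f"error: {action} performed before {expected}"
        return f"ignored: {action} not part of the procedure"

# Hypothetical biohazard-procedure fragment.
task = TaskModel(["don_gloves", "open_cabinet", "take_sample", "close_cabinet"])
for a in ["don_gloves", "take_sample", "open_cabinet"]:
    print(task.observe(a))
```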

    Augmented interaction for custom-fit products by means of interaction devices at low costs

    This Ph.D. thesis describes a research project that aims to develop an innovative platform for designing lower-limb prostheses (both below- and above-knee), centered on a virtual model of the amputee and based on a computer-aided, knowledge-guided approach. Attention has been focused on the modeling tool for the socket, which is the most critical component of the whole prosthesis. The main aim has been to redesign and develop a new prosthetic CAD tool, named SMA2 (Socket Modelling Assistant 2), exploiting low-cost IT technologies (e.g., hand/finger tracking devices) and making the user's interaction as natural as possible, close to hand-made manipulation. The research activities were carried out in six phases, described in the following.

    First, the limits and critical issues of the existing modeling tool (SMA) were identified. To this end, the first version of SMA was tested with Ortopedia Panini and the orthopedic research group of Salford University in Manchester on real case studies. The main critical issues concerned: (i) automatic reconstruction of the geometric model of the residuum from medical images, (ii) performance of the virtual modeling tools used to generate the socket shape, and (iii) interaction based mainly on traditional devices (e.g., mouse and keyboard).

    The second phase led to the software re-engineering of SMA according to the limits identified in the first phase. The software architecture was redesigned following an object-oriented paradigm, and its modularity makes it simple to remove or add features. The new modeling system, SMA2, has been implemented entirely with open-source Software Development Kits (e.g., the Visualization ToolKit VTK, OpenCASCADE and the Qt SDK) and is based on low-cost technology. It includes:
    • A new module to automatically reconstruct the 3D model of the residual limb from MRI images. In addition, a new procedure based on low-cost technology, such as the Microsoft Kinect v2 sensor, has been identified to acquire the 3D external shape of the residuum.
    • An open-source software library, named SimplyNURBS, for NURBS modeling, used here for the automatic reconstruction of the residuum 3D model from medical images. Although SimplyNURBS was conceived for the prosthetic domain, it can be used to develop NURBS-based modeling tools for a range of application domains, from healthcare to clothing design.
    • A mesh-editing module that emulates the hand-made operations carried out by orthopedic technicians during the traditional socket manufacturing process. Several virtual widgets have also been implemented to provide virtual counterparts of the prosthetist's real tools, such as a tape measure and a pencil.
    • A Natural User Interface (NUI) allowing interaction with the residuum and socket models using hand-tracking and haptic devices.
    • A module to generate the geometric models for additive manufacturing of the socket.

    The third phase concerned the study and design of augmented interaction, with particular attention to the NUI for using hand-tracking and haptic devices in SMA2. The NUI is based on the Leap Motion device. A set of gestures, mainly iconic and suitable for the considered domain, was identified taking into account ergonomic issues (e.g., arm posture) and ease of use. The modularity of SMA2 makes it easy to generate the software interface for each interaction device. To this end, a software module, named Tracking plug-in, has been developed to automatically generate the source code of the software interfaces that manage low-cost hand-tracking devices (e.g., Leap Motion and the Intel Gesture Camera) and replicate/emulate the manual operations usually performed to design custom-fit products, such as medical devices and garments.
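    The thesis does not publish the Tracking plug-in's generated code; the sketch below only illustrates the kind of device-abstraction layer such a module could produce, with a common interface behind which a Leap Motion or another tracker can sit so the modeling tools stay device-independent. All class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod

class TrackingDevice(ABC):
    """Common interface a generated plug-in could expose, keeping SMA2's
    modeling tools independent of the concrete tracker."""

    @abstractmethod
    def poll(self) -> dict:
        """Return the latest hand state, e.g. fingertip positions."""

class LeapMotionAdapter(TrackingDevice):
    def poll(self) -> dict:
        # A real adapter would query the Leap Motion SDK here;
        # this placeholder returns a fixed frame.
        return {"index_tip": (0.0, 120.0, -30.0), "pinch": 0.8}

def apply_deformation(device: TrackingDevice) -> None:
    """Hypothetical modeling step: map a pinch gesture to a mesh edit."""
    frame = device.poll()
    if frame["pinch"] > 0.7:
        print("deform socket mesh at", frame["index_tip"])

apply_deformation(LeapMotionAdapter())
```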
    Regarding haptic rendering, two different devices have been considered: the Novint Falcon and a haptic mouse developed in-house.

    In the fourth phase, additive manufacturing technologies were investigated, in particular Fused Deposition Modeling (FDM). 3D printing was exploited to allow trial sockets to be produced in the laboratory and the potential of SMA2 to be evaluated. Research activities also addressed new ways of designing the socket: an innovative approach was developed based on multi-material 3D printing. Taking advantage of flexible materials and multi-material printing, new 3D printers can create objects with both soft and hard parts. In this phase, issues concerning infill, materials, and comfort were addressed by considering different material compositions to redesign the socket shape.

    In the fifth phase, the implemented solution, integrated within the whole prosthesis design platform, was tested with a transfemoral amputee. The following activities were performed:
    • 3D acquisition of the residuum using MRI and commercial 3D scanning systems (low-cost and professional).
    • Creation of the residual limb and socket geometry.
    • Multi-material 3D printing of the socket using FDM technology.
    • Gait analysis of the amputee wearing the socket, using a markerless motion capture system.
    • Acquisition of the contact pressure between the residual limb and a trial socket by means of Tekscan's F-Socket system.
    The acquired data were combined in an ad hoc application that simultaneously visualizes the pressure data on the 3D model of the residual lower limb and the animation of the gait analysis, making it possible to correlate the phases of the gait cycle with the pressure data. The results were considered very promising, and several tests have been planned to try the system on real cases in orthopedic laboratories. The results obtained so far suggest that SMA2 can become an instrument with which orthopedic technicians create real sockets for patients, with the potential to grow into a commercial product able to replace the classic socket design procedure.

    The sixth phase concerned the evolution of SMA2 into a Mixed Reality environment, named Virtual Orthopedic LABoratory (VOLAB). The proposed solution is based on low-cost devices and open-source libraries (e.g., OpenCL and VTK). In particular, the hardware architecture consists of three Microsoft Kinect v2 sensors for human body tracking, the Oculus Rift DK2 head-mounted display for rendering the 3D environment, and the Leap Motion device for hand/finger tracking. The software development is based on the modular structure of SMA2, and dedicated modules have been developed to handle the communication among the devices.
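    The thesis does not detail VOLAB's communication modules; a minimal publish/subscribe sketch of how three independent device modules (Kinect body tracking, Leap Motion hands, Oculus rendering) might exchange data without direct coupling is shown below. Everything here, names and topics included, is an assumption for illustration.

```python
from collections import defaultdict

class Hub:
    """Tiny publish/subscribe broker decoupling device modules."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, data):
        for cb in self.subscribers[topic]:
            cb(data)

hub = Hub()
# Hypothetical renderer consuming both tracking streams.
hub.subscribe("body", lambda d: print("render body pose:", d))
hub.subscribe("hands", lambda d: print("render hands:", d))

# Device modules publish independently, e.g. from their own capture threads.
hub.publish("body", {"joints": 25})
hub.publish("hands", {"fingers": 10})
```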
    At present, two preliminary tests have been carried out: the first to verify the real-time performance of the virtual environment, and the second to verify augmented interaction with the hands using the SMA2 modeling tools. The results achieved are very promising, but they highlighted some limitations of this first version of VOLAB, and improvements are necessary. For example, the quality of the 3D reconstruction of the real world, especially as concerns the residual limb, could be improved by using two HD RGB cameras together with the Oculus Rift. To conclude, the results obtained have been judged very interesting and encouraging by the technical staff of an orthopedic laboratory. SMA2 will enable an important change in the socket design process for lower-limb prostheses, from a traditional hand-made manufacturing process to a fully virtual, knowledge-guided one. The proposed solutions and the results reached so far can also be exploited in other industrial sectors where the final product heavily depends on the morphology of the human body; indeed, preliminary software development has been carried out to create a virtual environment for clothing design, starting from the basic modules of SMA2.

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality in many industries. However, a number of manufacturing applications involve complex tasks and variable components that rule out fully automated solutions for the foreseeable future. Breakthroughs in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It is broadly recognised that human-robot collaborative systems are a realistic next step for advanced production systems, with a wide range of applications and high economic impact. Such a system can use the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hand. Robots in such systems operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other; this level of interface requires effective communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity; it communicates with the human via the exchange of gestures, as well as visual signals that operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during close-proximity collaboration. The system has been developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components; it should be capable of receiving input from the human user via an intuitive method and of indicating its status to the user effectively. The HRI has been developed through a combination of hardware integration and software development, with the software and control framework designed to be applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
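    The thesis's gesture vocabulary is not listed in the abstract; the sketch below shows one way a gesture-command layer could map recognized gestures to robot actions while requiring a confirmation gesture before any motion, a common safety pattern in collaborative cells. All gesture names and commands are hypothetical, and the hand-off to the robot controller is only printed.

```python
# Hypothetical gesture-to-command layer with a confirm step before motion.
COMMANDS = {"point_left": "move_left", "point_right": "move_right"}

class GestureCommander:
    def __init__(self):
        self.pending = None            # command awaiting confirmation

    def on_gesture(self, gesture: str) -> str:
        if gesture == "palm_up":
            self.pending = None
            return "execute: stop"     # stop is immediate, no confirmation
        if gesture == "thumbs_up" and self.pending:
            cmd, self.pending = self.pending, None
            return f"execute: {cmd}"   # hand off to the robot controller here
        if gesture in COMMANDS:
            self.pending = COMMANDS[gesture]
            return f"pending: {self.pending} (confirm with thumbs_up)"
        return "ignored"

c = GestureCommander()
for g in ["point_left", "thumbs_up", "palm_up"]:
    print(c.on_gesture(g))
```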

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.
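    As a concrete illustration of one of the functions listed above (fall detection), the sketch below gives a deliberately simple heuristic on pose keypoints: a rapid drop in hip height followed by a sustained low posture. The systems surveyed in the report are far more robust; the thresholds, frame rate and names here are illustrative assumptions only.

```python
def detect_fall(hip_heights, fps=30, drop=0.5, still_secs=2.0):
    """Flag a fall when hip height drops by `drop` metres within one second
    and then stays low for `still_secs`. Purely illustrative thresholds."""
    low_frames = int(still_secs * fps)
    for t in range(fps, len(hip_heights) - low_frames):
        fast_drop = hip_heights[t - fps] - hip_heights[t] > drop
        stays_down = all(h < hip_heights[t - fps] - drop
                         for h in hip_heights[t:t + low_frames])
        if fast_drop and stays_down:
            return t
    return None

# Synthetic trace: standing (~0.9 m), sudden drop, then lying still (~0.2 m).
trace = [0.9] * 60 + [0.2] * 90
print(detect_fall(trace))  # frame index of the detected fall, or None
```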

    Freeform 3D interactions in everyday environments

    PhD Thesis. Personal computing is continuously moving away from traditional input using mouse and keyboard, as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world, and that focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal of further reducing the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, they have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can therefore be seen as very complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness and mobility in various physical settings. A number of technical challenges arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.
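    As a minimal illustration of coupling freehand input to virtual output, the sketch below detects a pinch from two tracked fingertip positions and attaches a virtual object to the hand while the pinch holds. The landmark names, units (millimetres) and threshold are assumptions for illustration, not the thesis's implementation.

```python
import math

class PinchGrab:
    """Attach a virtual object to the hand while thumb and index pinch."""
    def __init__(self, grab_mm=25.0):
        self.grab_mm = grab_mm
        self.held = None

    def update(self, thumb_tip, index_tip, objects):
        pinching = math.dist(thumb_tip, index_tip) < self.grab_mm
        midpoint = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
        if pinching and self.held is None:
            # Grab the object nearest to the pinch point.
            self.held = min(objects, key=lambda o: math.dist(objects[o], midpoint))
        if not pinching:
            self.held = None
        if self.held:
            objects[self.held] = midpoint   # object follows the pinch
        return self.held

objects = {"cube": (0.0, 0.0, 0.0)}
g = PinchGrab()
print(g.update((0, 0, 10), (0, 0, -10), objects), objects)
```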

    MIROR-Musical Interaction Relying On Reflexion. Project Final Report

    The project was evaluated by the European Commission with the maximum score (15/15) and the following summary assessment: "Excellent. The proposal successfully addresses all relevant aspects of the criterion in question. Any shortcomings are minor." It passed the three annual reviews with a very positive rating (good) and an excellent final evaluation. The project was co-funded by the European Community under the 7th Framework Programme, ICT Challenge 4.2, Technology-enhanced learning, Cooperation Programme, no. 258338. The "COOPERATION" Programme is identified by the Ministerial Decree of 1/7/2011 (identification of highly qualified research programmes funded by the European Union or by the Ministry of Education, Universities and Research, under Article 29(7) of Law no. 240/2010) as one of the two highly qualified research programmes funded by the EU. In detail, the Coordinator held the following roles:
    • Preparation of the proposal and scientific coordination of the project
    • Responsible for contacts with the European Commission
    • Supervision and monitoring of the work carried out by the Consortium, through workshops, technical and scientific reports, and coordination of the deliverables
    • Leader of the following Work Packages: WP1 Project Management, WP5 Psychological Experiments, WP8 Dissemination and Exploitation
    • Coordinator of the ALB
    • Scientific and organisational coordination of the University of Bologna research group, composed of: 2 post-doctoral research fellowships, 1 research fellowship, 2 research assistance contracts, 1 contract for the website, 15 student collaborators, and 3 collaborating teachers
    • Responsible for the collaborations and agreements with the Istituto Comprensivo di Casalecchio di Reno, the Nuova Scuola di Musica Baroncini in Imola, and the Centro Danza Musikè in Bologna
    • Supervision of project management (European Research Department of the University of Bologna, ARIC).

    The MIROR (Musical Interaction Relying On Reflexion) project is co-funded by the European Commission under the 7th Framework Programme, Theme ICT-2009.4.2, Technology-enhanced learning. MIROR is a three-year project and started on September 1st, 2010. All information regarding MIROR is available through the MIROR Portal at http://www.mirorproject.eu. The MIROR Project Final Report describes the development of an adaptive system for music learning and teaching based on the "reflexive interaction" paradigm. The system is developed in the context of early childhood music education. It acts as an advanced cognitive tutor, designed to promote specific cognitive abilities in the field of music improvisation, both in formal learning contexts (kindergartens, primary schools, music schools) and informal ones (at home, children's centres, etc.). The reflexive interaction paradigm is based on the idea of letting users manipulate virtual copies of themselves, through specifically designed machine-learning software referred to as "Interactive Reflexive Musical Systems" (IRMS). By definition, IRMS are able to learn and configure themselves according to their understanding of the learner's behaviour. In MIROR, the IRMS paradigm is extended with the analysis and synthesis of multisensory expressive gesture to increase its impact on the musical pedagogy of young children, through the development of new multimodal interfaces. The project is based on a spiral design approach involving coupled interactions between technical and psycho-pedagogical partners.
MIROR integrates both psychological case-study experiments, aiming to investigate cognitive hypotheses concerning the mirroring behaviour and the learning efficacy of the platform, and validation studies aimed at developing the software in concrete educational settings. The project contributes to promoting the reflexive interaction paradigm not only in the field of music learning, but more generally as a new paradigm for establishing a synergy between learning and cognition in the context of child-machine interaction.

A. R. Addessi; C. Anagnostopoulou; S. Newman; B. Olsson; F. Pachet; G. Volpe; S. Young
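    The IRMS software itself is not reproduced in the report; as a toy illustration of the reflexive idea (the system answers with material learned from the child's own input), the sketch below continues a note sequence with a first-order Markov model trained on that same sequence. It is a sketch of the general paradigm under that assumption, not the MIROR platform's algorithm.

```python
import random
from collections import defaultdict

def learn(notes):
    """First-order Markov model of the user's own phrase."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def reply(model, start, length=8, seed=0):
    """Generate a 'mirror' continuation in the user's own style."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return out

phrase = [60, 62, 64, 62, 60, 64, 65, 64]   # MIDI notes played by the child
model = learn(phrase)
print(reply(model, phrase[-1]))
```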