124 research outputs found

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the wealth of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
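The gaze-selection step can be illustrated with a minimal dwell-time sketch. The dwell threshold, sample format and instrument names below are hypothetical; the thesis does not specify this particular implementation:

```python
from dataclasses import dataclass

DWELL_TIME_S = 1.0  # hypothetical dwell threshold for confirming a selection


@dataclass
class GazeSample:
    t: float     # timestamp in seconds
    target: str  # instrument region the gaze currently falls on, or ""


def select_instrument(samples):
    """Return the first target fixated continuously for DWELL_TIME_S, else None."""
    current, start_t = None, 0.0
    for s in samples:
        if s.target and s.target == current:
            if s.t - start_t >= DWELL_TIME_S:
                return current
        else:
            # Gaze moved to a new target (or off all targets): restart the dwell timer.
            current, start_t = (s.target or None), s.t
    return None


# Gaze rests on the "scalpel" region for ~1.1 s of samples at 10 Hz:
samples = [GazeSample(0.1 * i, "scalpel") for i in range(12)]
print(select_instrument(samples))  # scalpel
```

Dwell-based confirmation is one common way to avoid the "Midas touch" problem in gaze interfaces, where every glance would otherwise trigger an action.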

    E-Learning

    Technology development, mainly in telecommunications and computer systems, was a key factor for interactivity and, thus, for the expansion of e-learning. This book is divided into two parts, presenting proposals for dealing with e-learning challenges, opening up ways of learning about and discussing new methodologies to increase the interaction level of classes, and implementing technical tools to help students make better use of e-learning resources. In the first part, the reader will find chapters on the infrastructure required for e-learning models and processes, organizational practices, suggestions, the implementation of methods for assessing results, and case studies focused on pedagogical aspects that can be applied generically in different environments. The second part is related to tools that can be adopted by users, such as graphical tools for engineering, mobile phone networks, and techniques to build robots, among others. Moreover, part two includes some chapters dedicated specifically to e-learning areas like engineering and architecture.

    User-based gesture vocabulary for form creation during a product design process

    There are inconsistencies between the nature of conceptual design and the functionalities of the computational systems supporting it, which disrupt the designers' process by focusing on technology rather than designers' needs. A need was identified for the elicitation of hand gestures appropriate to the requirements of conceptual design, rather than gestures chosen arbitrarily or for ease of implementation. The aim of this thesis is to identify natural and intuitive hand gestures for conceptual design, performed by designers (3rd- and 4th-year product design engineering students and recent graduates) working on their own, without instruction and without limitations imposed by the facilitating technology. This was done via a user-centred study including 44 participants, in which 1785 gestures were collected. Gestures were explored as the sole means for shape creation and manipulation in virtual 3D space. Gestures were identified, described in writing, sketched, coded based on the taxonomy used, categorised based on hand form and the path travelled, and variants were identified. They were then statistically analysed to ascertain agreement rates between the participants, the significance of the agreement, and the likelihood of the number of repetitions in each category occurring by chance. The most frequently used and statistically significant gestures formed the consensus vocabulary for conceptual design. The effect of the shape of the manipulated object on the gesture performed was also observed, as was whether the sequence of gestures participants proposed differed from established CAD solid modelling practices. The vocabulary was evaluated by non-designer participants, both theoretically and in a VR environment, and the outcomes showed that the majority of gestures were appropriate and easy to perform. Participants selected their preferred gestures for each activity, and a variant of the vocabulary for conceptual design was created as an outcome that aims to ensure extensive training is not required, extending the ability to design beyond trained designers only.
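Per-referent agreement in gesture-elicitation studies is commonly computed with the Vatavu–Wobbrock agreement rate. The sketch below (with invented proposal data; the thesis may use a different statistic) shows the calculation:

```python
from collections import Counter


def agreement_rate(proposals):
    """Vatavu-Wobbrock agreement rate AR for one referent:
    AR = sum(|P_i| * (|P_i| - 1)) / (|P| * (|P| - 1)),
    where the P_i are groups of identical gesture proposals within
    the full set P of proposals for that referent."""
    n = len(proposals)
    if n < 2:
        return 0.0  # agreement is undefined for fewer than two proposals
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))


# Hypothetical data: 8 participants propose gestures for the referent "extrude".
props = ["pull", "pull", "pull", "pull", "pull", "pinch", "pinch", "grab"]
print(round(agreement_rate(props), 3))  # 0.393
```

AR ranges from 0 (every participant proposed a different gesture) to 1 (all participants agreed), which makes it a convenient basis for selecting the consensus gesture per referent.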

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge virtual and real environments, creates new possibilities for developing more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless nature has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research has addressed these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis has explored different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques and integrated machine learning agents using reinforcement learning are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences.
Further, this research has investigated ethics, privacy, and security issues in XR and covered the future of immersive learning in the Metaverse.


    Review of three-dimensional human-computer interaction with focus on the Leap Motion Controller

    Modern hardware and software development has led to an evolution of user interfaces, from command-line interfaces to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques, with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and lower in cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    Tactile and Touchless Sensors Printed on Flexible Textile Substrates for Gesture Recognition

    The main objective of this thesis is the development of new sensors and actuators using printed electronics technology. For this, conductive, semiconductor and dielectric polymeric materials are used on flexible and/or elastic substrates. By means of suitable designs and application processes, it is possible to manufacture sensors capable of interacting with the environment. In this way, specific sensing functionalities can be incorporated into substrates such as textile fabrics. Additionally, it is necessary to include electronic systems capable of processing and recording the data obtained. In the development of these sensors and actuators, the physical properties of the different materials are precisely combined. Multilayer structures are designed in which the properties of some materials interact with those of others. The result is a sensor capable of capturing physical variations in the environment and converting them into signals that can be processed and finally transformed into data. On the one hand, a tactile sensor printed on a textile substrate was developed for 2D gesture recognition. This sensor consists of a matrix of small capacitive sensors based on a capacitor-type structure. These sensors were designed in such a way that, if a finger or another object with capacitive properties comes close enough, their behaviour varies and can be measured. The small sensors are arranged in the matrix as in a grid, each with a position determined by a row and a column. The capacitance of each small sensor is measured periodically in order to assess whether significant variations have occurred; for this, the sensor capacitance must be converted into a value that is subsequently processed digitally. On the other hand, to improve the effectiveness of the developed 2D touch sensors, the way of incorporating an actuator system was studied.
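The periodic row/column scan described above can be sketched as follows. The baseline value, threshold and reading function are hypothetical placeholders for illustration; the thesis does not publish this code:

```python
# Hypothetical scan of an R x C printed capacitive matrix: each cell's
# capacitance is read periodically and compared with a calibrated idle
# baseline; a deviation beyond a threshold marks a touch at (row, col).
BASELINE = 100.0   # assumed idle capacitance reading (arbitrary units)
THRESHOLD = 15.0   # assumed deviation considered "significant"


def scan(read_cell, rows, cols):
    """Return the (row, col) cells whose reading deviates from the baseline."""
    touched = []
    for r in range(rows):
        for c in range(cols):
            if abs(read_cell(r, c) - BASELINE) > THRESHOLD:
                touched.append((r, c))
    return touched


# Simulated readings: a finger near cell (1, 2) raises its capacitance.
readings = {(1, 2): 130.0}
print(scan(lambda r, c: readings.get((r, c), BASELINE), 4, 4))  # [(1, 2)]
```

In a real device the baseline would be recalibrated over time, since temperature and humidity drift the idle capacitance of printed electrodes.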
Thereby, the user receives feedback that the order or action was recognized. To achieve this, the capacitive sensor grid was complemented with an electroluminescent display, also printed. The final prototype offers a solution that combines a 2D tactile sensor with an electroluminescent actuator on a printed textile substrate. Next, a 3D gesture sensor was developed using a combination of sensors, also printed on a textile substrate. In this type of 3D sensor, a signal is sent that generates an electric field over the sensors, using a transmission electrode located very close to them. The generated field is received by the reception sensors, which are based on electrodes acting as receivers, and converted into electrical signals. If a person places their hand within the emission area, a disturbance of the electric field lines is created, because the intrinsic conductivity of the human body deviates the field lines to ground. This disturbance affects the signals received by the electrodes. The variations captured by all the electrodes are processed together to determine the position and movement of the hand over the sensor surface. Finally, an improved 3D gesture sensor was developed. As in the previous development, the sensor allows contactless gesture detection, but with an increased detection range. In addition to printed electronics technology, two other textile manufacturing technologies were evaluated.
Ferri Pascual, J. (2020). Tactile and Touchless Sensors Printed on Flexible Textile Substrates for Gesture Recognition [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/153075
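The electric-field sensing principle, where a hand perturbs the field seen by each receiver electrode and the per-electrode changes are combined to estimate position, can be illustrated with a signal-weighted centroid along one axis. The electrode positions and perturbation values here are invented for illustration; the thesis does not specify this estimator:

```python
# Hypothetical 1-D position estimate for an electric-field gesture sensor:
# each receiver electrode reports how much the field it sees is attenuated
# when a hand deviates field lines to ground; a signal-weighted centroid
# of the electrode positions approximates the hand position.
ELECTRODE_POS = [0.0, 2.0, 4.0, 6.0]  # assumed electrode x-positions (cm)


def hand_position(perturbations):
    """Weighted centroid of electrode positions; perturbations are the
    per-electrode signal changes relative to the undisturbed field."""
    total = sum(perturbations)
    if total == 0:
        return None  # no hand in the emission area
    return sum(p * x for p, x in zip(perturbations, ELECTRODE_POS)) / total


# A hand near x = 4 cm perturbs the nearby electrodes most:
print(hand_position([0.1, 0.5, 1.0, 0.4]))  # 3.7
```

Tracking this estimate over successive readings yields the hand's movement across the sensor surface, which is what gesture classification would consume.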