
    Deep Learning-Based Robotic Perception for Adaptive Facility Disinfection

    Hospitals, schools, airports, and other environments built for mass gatherings can become hot spots for microbial pathogen colonization, transmission, and exposure, greatly accelerating the spread of infectious diseases across communities, cities, nations, and the world. Outbreaks of infectious diseases impose huge burdens on our society. Mitigating the spread of infectious pathogens within mass-gathering facilities requires routine cleaning and disinfection, which under current practice are performed primarily by cleaning staff. However, manual disinfection is limited in both effectiveness and efficiency: it is labor-intensive, time-consuming, and harmful to workers' health. While existing studies have developed a variety of robotic systems for disinfecting contaminated surfaces, those systems are not adequate for intelligent, precise, and environmentally adaptive disinfection. They are also difficult to deploy in mass-gathering infrastructure facilities, given the high volume of occupants. Therefore, there is a critical need for an adaptive robotic system capable of complete and efficient indoor disinfection. The overarching goal of this research is to develop an artificial intelligence (AI)-enabled robotic system that adapts to ambient environments and social contexts for precise and efficient disinfection. Such a system would maintain environmental hygiene and health, reduce unnecessary labor costs for cleaning, and mitigate the opportunity costs incurred from infections. To these ends, this dissertation first develops a multi-classifier decision fusion method, which integrates scene-graph and visual information, to recognize patterns of human activity in infrastructure facilities. Next, a deep-learning-based method is proposed for detecting and classifying indoor objects, and a new mechanism is developed to map the detected objects into 3D maps. A novel framework is then developed to detect and segment object affordances and to project them into a 3D semantic map for precise disinfection. Subsequently, a novel deep-learning network that integrates multi-scale and multi-level features, together with an encoder network, is developed to recognize the materials of surfaces requiring disinfection. Finally, a novel computational method is developed to link the recognized object surface information to robot disinfection actions with optimal disinfection parameters.
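    The decision-fusion step can be pictured as a weighted late fusion of the class probabilities produced by the scene-graph classifier and the visual classifier. The sketch below is only a minimal illustration of that idea, not the dissertation's actual method; the activity classes, weights, and function names are hypothetical.

```python
# Hypothetical sketch of late decision fusion between a scene-graph-based classifier
# and a visual classifier: combine their per-class probabilities with fixed weights.
# Class names, weights, and example scores are illustrative assumptions.
import numpy as np

ACTIVITIES = ["waiting", "queuing", "dining", "walking"]  # hypothetical classes

def fuse_decisions(p_scene_graph: np.ndarray,
                   p_visual: np.ndarray,
                   w_scene_graph: float = 0.4,
                   w_visual: float = 0.6) -> tuple[str, np.ndarray]:
    """Combine two classifiers' probability vectors and return the fused label."""
    assert p_scene_graph.shape == p_visual.shape
    fused = w_scene_graph * p_scene_graph + w_visual * p_visual
    fused /= fused.sum()  # renormalize to a valid probability distribution
    return ACTIVITIES[int(np.argmax(fused))], fused

# Example: the scene-graph classifier is unsure, the visual classifier favors "dining".
label, probs = fuse_decisions(np.array([0.3, 0.3, 0.2, 0.2]),
                              np.array([0.1, 0.1, 0.7, 0.1]))
print(label, probs.round(3))
```

    A late fusion of this kind keeps the two classifiers independent, so either one can be retrained or replaced without touching the other.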

    Research at the University of Nebraska-Lincoln: 2013-2014 Report

    The 2013-2014 UNL Research Report, published by the University of Nebraska-Lincoln Office of Research and Economic Development, highlights some of the diverse research, scholarship, and creative activity at the heart of UNL's research enterprise during the fiscal year July 1, 2013, through June 30, 2014. A companion website includes more photos, videos, and additional resources.

    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site, real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging, and much of the rich temporal information present in them is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas: first, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery; and second, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools with integrated ultrasound imaging were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view of the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add anatomical context that aids navigation and interpretation of the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue for finding a clear trajectory to the epidural space. By processing the video with extended Kalman filtering, subtle pulsations were automatically detected and visualized in real time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and to generate ventilation maps from free-breathing magnetic resonance imaging. A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed that processes full-resolution stereoscopic video on the da Vinci Surgical System.
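    The Eulerian video magnification step mentioned above amounts to band-pass filtering each pixel's intensity over time around the cardiac frequency and amplifying the filtered signal. The following is a minimal sketch under assumed parameters (frame rate, pass band, gain); it is not the real-time stereoscopic implementation described in the thesis.

```python
# Minimal sketch of Eulerian-style temporal magnification: band-pass filter each
# pixel's intensity over time and add the amplified band back to the video.
# Frame rate, pass band, and gain below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_pulsation(frames: np.ndarray, fps: float,
                      f_lo: float = 0.8, f_hi: float = 2.0,
                      alpha: float = 20.0) -> np.ndarray:
    """frames: (T, H, W) grayscale video. Returns video with amplified pulsatile signal."""
    b, a = butter(2, [f_lo / (fps / 2), f_hi / (fps / 2)], btype="band")
    # Filter along the time axis for every pixel, then amplify that temporal band.
    pulsatile = filtfilt(b, a, frames.astype(np.float64), axis=0)
    return np.clip(frames + alpha * pulsatile, 0, 255)

# Example on synthetic video: a faint 1.2 Hz flicker becomes clearly visible.
t = np.arange(300) / 30.0                                   # 10 s at 30 fps
video = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((300, 32, 32))
enhanced = magnify_pulsation(video, fps=30.0)
print(enhanced.std(axis=0).mean())  # temporal variation is much larger after magnification
```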

    Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods

    Riedenklau E. Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods. Bielefeld: Universität Bielefeld; 2016. Making information understandable and literally graspable is the main goal of tangible interaction research. By giving digital data physical representations (Tangible User Interface Objects, or TUIOs), they can be used and manipulated like everyday objects with the users' natural manipulation skills. Such physical interaction is basically uni-directional, directed from the user to the system, which limits the possible interaction patterns. In other words, the system has no means to actively support the physical interaction. Within the frame of tabletop tangible user interfaces, this problem was addressed by the introduction of actuated TUIOs, which are controllable by the system. Within the frame of this thesis, we present the development of our own actuated TUIOs and address multiple interaction concepts we identified as research gaps in the literature on actuated Tangible User Interfaces (TUIs). Gestural interaction is a natural means for humans to communicate non-verbally using their hands. TUIs should be able to support gestural interaction, since our hands are already heavily involved in the interaction, yet this has rarely been investigated in the literature. For a tangible social network client application, we investigate two methods for collecting user-defined gestures that our system should be able to interpret for triggering actions. Versatile systems often understand a wide palette of commands. Another approach to triggering actions is the use of menus. We explore the design space of menu metaphors used in TUIs and present our own actuated dial-based approach. Rich interaction modalities may support the understandability of the represented data and make the interaction with it more appealing, but they also place high demands on real-time processing. We highlight new research directions for integrated, feature-rich, and multi-modal interaction, such as graphical display, sound output, tactile feedback, our actuated menu, and automatically maintained relations between actuated TUIOs within a remote collaboration application. We also tackle the introduction of further sophisticated measures for the evaluation of TUIs to provide further evidence for the theories on tangible interaction. We tested our enhanced measures within a comparative study. Since one of the key factors in effective manual interaction is speed, we benchmarked the human hand's manipulation speed and compared it with the capabilities of our own implementation of actuated TUIOs and the systems described in the literature. After briefly discussing applications that lie beyond the scope of this thesis, we conclude with a collection of design guidelines gathered in the course of this work and integrate them, together with our findings, into a larger frame.
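    The speed benchmark described above can be reduced to a simple measurement: given timestamped positions of a TUIO on the tabletop, estimate its average translation speed. The sketch below is a hypothetical illustration of that calculation only; the sampling rate, units, and data are assumptions, not the thesis's benchmark setup.

```python
# Hypothetical manipulation-speed measurement: average translation speed of a
# tracked tabletop object from timestamped 2D positions. Data and units (metres,
# seconds) are illustrative assumptions.
import numpy as np

def average_speed(timestamps: np.ndarray, positions: np.ndarray) -> float:
    """timestamps: (N,) seconds; positions: (N, 2) metres. Returns mean speed in m/s."""
    distances = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # per-step path length
    durations = np.diff(timestamps)                                 # per-step elapsed time
    return distances.sum() / durations.sum()

# Example: an object moved 0.3 m in a straight line over one second.
t = np.linspace(0.0, 1.0, 61)
path = np.column_stack([np.linspace(0.0, 0.3, 61), np.zeros(61)])
print(f"{average_speed(t, path):.2f} m/s")
```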

    Creepy Technology: What Is It and How Do You Measure It?

    Interactive technologies are getting closer to our bodies and permeate the infrastructure of our homes. While such technologies offer many benefits, they can also cause an initial feeling of unease in users. It is important for Human-Computer Interaction to manage first impressions and avoid designing technologies that appear creepy. To that end, we developed the Perceived Creepiness of Technology Scale (PCTS), which measures how creepy a technology appears to a user in an initial encounter with a new artefact. The scale was developed based on past work on creepiness and a set of ten focus groups conducted with users from diverse backgrounds. We followed a structured process of analytically developing and validating the scale. The PCTS is designed to enable designers and researchers to quickly compare interactive technologies and ensure that they do not design technologies that produce initial feelings of creepiness in users.
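    In practice, a Likert-type instrument of this kind is typically scored by averaging item responses per participant, with negatively worded items reverse-coded first. The sketch below illustrates only that generic scoring pattern; the item count, response range, and coding are assumptions, not the published PCTS items.

```python
# Hypothetical scoring of a Likert-type scale: reverse-code negatively worded items,
# then average per participant. Item count, scale range, and data are assumptions.
import numpy as np

def score_scale(responses: np.ndarray,
                reverse_items: tuple[int, ...] = (),
                scale_max: int = 7) -> np.ndarray:
    """responses: (participants, items) on a 1..scale_max Likert scale."""
    r = responses.astype(float)
    for i in reverse_items:
        r[:, i] = (scale_max + 1) - r[:, i]   # reverse-code negatively worded items
    return r.mean(axis=1)                     # one score per participant

# Example: two participants rating a hypothetical 5-item version of the scale.
ratings = np.array([[6, 5, 7, 2, 6],
                    [2, 3, 1, 6, 2]])
print(score_scale(ratings, reverse_items=(3,)))
```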

    Creating robotic characters for long-term interaction

    Thesis (S.M.) by Adam Setapen--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 177-181). Researchers studying the ways in which humans and robots interact in social settings have a problem: they don't have a robot to use. There is a need for a socially expressive robot that can be deployed outside of a laboratory and support remote operation and data collection. This work aims to fill that need with DragonBot, a platform for social robotics specifically designed for long-term interactions. This thesis is divided into two parts. The first part describes the design and implementation of the hardware, software, and aesthetics of the DragonBot-based characters. Through the use of a mobile phone as the robot's primary computational device, we aim to drive down the hardware cost and increase the availability of robots "in the wild". The second part of this work takes an initial step towards evaluating DragonBot's effectiveness through interactions with children. We describe two different teleoperation interfaces that allow a human to control DragonBot's behavior with differing amounts of autonomy given to the robot. A human subject study was conducted in which these interfaces were compared through a sticker-sharing task between the robot and children aged four to seven. Our results show that when a human operator is able to focus on the social portions of an interaction and the robot is given more autonomy, children treat the character more like a peer. This is indicated by the fact that more children re-engaged the robot with the higher level of autonomy when they were asked to split up stickers between the two participants.

    Information Olfactation: Theory, Design, and Evaluation

    Olfactory feedback for analytical tasks is a virtually unexplored area, in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of ‘Information Olfactation’ as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and the olfactory channels that are available to designers. To exemplify this idea, we present ‘viScent(1.0)’: a six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. We also conduct a comprehensive perceptual experiment on Information Olfactation: the use of olfactory marks and channels to convey data. More specifically, following the example of graphical perception studies, we design an experiment that studies the perceptual accuracy of four olfactory channels (scent type, scent intensity, airflow, and temperature) for conveying three different types of data (nominal, ordinal, and quantitative). We also present details of an advanced 24-scent olfactory display, ‘viScent(2.0)’, and the software framework that we designed in order to run this experiment. Our results yield a ranking of olfactory channels for each data type that follows similar principles as rankings for visual channels, such as those derived by Mackinlay, Cleveland & McGill, and Bertin.
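    Graphical perception studies of the Cleveland & McGill tradition typically rank channels by a log-absolute-error measure of participants' magnitude judgments. The sketch below shows that style of analysis on simulated data; the channel names, noise levels, and error formula choice are illustrative assumptions, not the paper's results.

```python
# Hypothetical channel-ranking analysis in the spirit of Cleveland & McGill:
# compute a log-absolute error per judgment and rank channels by mean error.
# Simulated judgments only; not the experiment's actual data.
import numpy as np

def judgment_error(true_pct: np.ndarray, judged_pct: np.ndarray) -> np.ndarray:
    """Cleveland-McGill style error: log2(|judged - true| + 1/8), values in percent."""
    return np.log2(np.abs(judged_pct - true_pct) + 1 / 8)

rng = np.random.default_rng(0)
true_vals = rng.uniform(10, 90, size=50)
channels = {
    "scent intensity": true_vals + rng.normal(0, 5, 50),    # assumed: fairly accurate channel
    "temperature":     true_vals + rng.normal(0, 15, 50),   # assumed: noisier channel
}
ranking = sorted(channels, key=lambda c: judgment_error(true_vals, channels[c]).mean())
print("hypothetical accuracy ranking (best first):", ranking)
```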

    Hierarchical control of complex manufacturing processes

    The need to change the control objective during the process has been reported in many systems in manufacturing, robotics, and other domains. However, few works have systematically investigated proper strategies for these types of problems. In this dissertation, two approaches to such problems are proposed for fast-varying systems. The first approach addresses problems where some of the objectives are statically related to the states of the system. Hierarchical Optimal Control was proposed to simplify the nonlinearity caused by adding the statically related objectives to the control problem. The proposed method was implemented for contour-position control of motion systems as well as force-position control of end milling processes. It was shown that, for a motion control system in which contour tracking is important, the controller can reduce the contour error even when the axial control signals are saturating. For end milling processes, it was shown that when machining sharp edges, where excessive cutting forces can cause tool breakage, the proposed controller can bound the force without sacrificing position tracking performance. The second approach, Hierarchical Model Predictive Control, addresses problems where all the objectives are dynamically related. In this method, neural network approximation is used to convert a nonlinear optimization problem into an explicit form that is feasible for real-time implementation. This method was implemented for force-velocity control of ram-based freeform extrusion fabrication of ceramics, where the proposed controller achieved excellent extrusion results under different changes in the control objective during the process.
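    The core idea of approximating a predictive controller with a neural network is to solve the expensive optimization offline over sampled states, then fit a regressor so that the online controller is a cheap forward pass. The sketch below illustrates only that general pattern under assumed plant and cost choices; it is not the dissertation's force-velocity controller for freeform extrusion.

```python
# Minimal sketch of an explicit, neural-network-approximated control law:
# generate (state, optimal input) pairs offline, fit a regressor, and evaluate it
# online in constant time. The plant, "solver", and network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_mpc(state: np.ndarray) -> float:
    """Stand-in for an online nonlinear optimization returning the optimal input."""
    x, v = state
    return -1.5 * x - 0.8 * v          # pretend this came from a solver

# Offline: sample states, record the "optimal" inputs, fit an explicit approximation.
rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(2000, 2))
inputs = np.array([expensive_mpc(s) for s in states])
explicit_law = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(states, inputs)

# Online: real-time control reduces to one network evaluation per sample period.
u = explicit_law.predict(np.array([[0.2, -0.1]]))[0]
print(f"approximated control input: {u:.3f}")
```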

    Exploring the potential of physical visualizations

    The goal of an external representation of abstract data is to provide insights and convey information about the structure of the underlying data, thereby helping people execute tasks and solve problems more effectively. Apart from the popular and well-studied digital visualization of abstract data, there are other, scarcely studied perceptual channels for representing data, such as taste, sound, or touch. My thesis focuses on the latter and explores the ways in which human knowledge and the ability to sense and interact with the physical, non-digital world can be used to enhance how people analyze and explore abstract data. Emerging technological progress in digital fabrication allows easy, fast, and inexpensive production of physical objects. Machines such as laser cutters and 3D printers enable the accurate fabrication of physical visualizations with different form factors and materials. This creates, for the first time, the opportunity to study the potential of physical visualizations on a broad scale. The thesis starts with the description of six prototypes of physical visualizations, ranging from static examples to digitally augmented variations and interactive artifacts. Based on these explorations, three promising areas of potential for physical visualizations were identified and investigated in more detail: perception & memorability, communication & collaboration, and motivation & self-reflection. The results of two studies in the area of information recall showed that participants who used a physical bar chart retained more information than those who used its digital counterpart. In particular, facts about maximum and minimum values were remembered more reliably when they were perceived from a physical visualization. Two explorative studies dealt with the potential of physical visualizations for communication and collaboration. The observations revealed the importance of the design and aesthetics of physical visualizations and indicated great potential for their use with audiences that have less interest in technology. The results also exposed the current limitations of physical visualizations, especially in contrast to their well-researched digital counterparts. In the area of motivation, we present the design and evaluation of the Activity Sculptures project. We conducted a three-week field study in which we investigated physical visualizations of personal running activity. These sculptures generated curiosity and experimentation regarding personal running behavior and evoked social dynamics such as discussions and competition. Based on the findings of the aforementioned studies, this thesis concludes with two theoretical contributions on the design and potential of physical visualizations. On the one hand, it proposes a conceptual framework for material representations of personal data, described through a production and a consumption lens. The goal is to encourage artists and designers working in the field of personal informatics to harness the interactive capabilities afforded by digital fabrication and the potential of material representations. On the other hand, we give a first classification and performance rating of physical variables, comprising 14 dimensions grouped into four categories. This complements the undertaking of providing researchers and designers with guidance and inspiration to uncover alternative strategies for representing data physically and to build effective physical visualizations.