
    Nanoscale integration of single cell biologics discovery processes using optofluidic manipulation and monitoring.

    The rapid advancement in the complexity of biologics drug discovery has been driven by a deeper understanding of biological systems combined with innovative new therapeutic modalities, paving the way to breakthrough therapies for previously intractable diseases. These exciting times in biomedical innovation require the development of novel technologies to facilitate the sophisticated, multifaceted, high-paced workflows necessary to support modern large molecule drug discovery. A high-level aspiration is a true integration of "lab-on-a-chip" methods: vastly miniaturized cellular and biochemical experiments could transform the speed, cost, and success of multiple workstreams in biologics development. Several microscale bioprocess technologies have been established that incrementally address these needs, yet each is inflexibly designed for a very specific process, thus limiting integrated, holistic application. A more fully integrated nanoscale approach that incorporates manipulation, culture, analytics, and traceable digital record keeping of thousands of single cells in a relevant nanoenvironment would be a transformative technology capable of keeping pace with today's rapid and complex drug discovery demands. The recent advent of optical manipulation of cells using light-induced electrokinetics with micro- and nanoscale cell culture is poised to revolutionize both fundamental and applied biological research. In this review, we summarize the current state of the art for optical manipulation techniques and discuss emerging biological applications of this technology. In particular, we focus on promising prospects for drug discovery workflows, including antibody discovery, bioassay development, antibody engineering, and cell line development, which are enabled by the automation and industrialization of an integrated optoelectronic single-cell manipulation and culture platform.
Continued development of such platforms will be well positioned to overcome many of the challenges currently associated with fragmented, low-throughput bioprocess workflows in biopharma and life science research.

    EagleSense:tracking people and devices in interactive spaces using real-time top-view depth-sensing

    Real-time tracking of people's location, orientation, and activities is increasingly important for designing novel ubiquitous computing applications. Top-view camera-based tracking avoids occlusion when tracking people while they collaborate, but often requires complex tracking systems and advanced computer vision algorithms. To facilitate the prototyping of ubiquitous computing applications for interactive spaces, we developed EagleSense, a real-time human posture and activity recognition system with a single top-view depth-sensing camera. We contribute our novel algorithm and processing pipeline, including details for calculating silhouette extremity features and applying gradient tree boosting classifiers for activity recognition optimised for top-view depth sensing. EagleSense provides easy access to the real-time tracking data and includes tools for facilitating integration into custom applications. We report the results of a technical evaluation with 12 participants and demonstrate the capabilities of EagleSense with application case studies.
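    The abstract's mention of "silhouette extremity features" can be illustrated with a toy sketch. This is not EagleSense's actual pipeline (which operates on depth images and feeds gradient tree boosting classifiers); it only shows, under our own simplifying assumptions, how extremity-style features might be derived from a binary top-view silhouette. The function name, neighbourhood rule, and feature definition are all illustrative guesses.

    ```python
    import math

    def extremity_features(mask, k=3):
        """Toy extremity features from a binary top-view silhouette.

        mask: 2D list of 0/1 values (1 = person pixel).
        Returns the k largest centroid-to-contour distances, sorted
        descending -- a crude stand-in for real extremity features.
        """
        pts = [(r, c) for r, row in enumerate(mask)
               for c, v in enumerate(row) if v]
        if not pts:
            return [0.0] * k
        cr = sum(r for r, _ in pts) / len(pts)
        cc = sum(c for _, c in pts) / len(pts)

        def on_contour(r, c):
            # a silhouette cell is on the contour if any 4-neighbour
            # is empty or lies outside the grid
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < len(mask) and 0 <= nc < len(mask[0])):
                    return True
                if not mask[nr][nc]:
                    return True
            return False

        dists = sorted((math.hypot(r - cr, c - cc)
                        for r, c in pts if on_contour(r, c)), reverse=True)
        dists += [0.0] * k          # pad when there are fewer than k points
        return dists[:k]
    ```

    In the real system such per-frame features would be stacked into vectors and passed to a trained gradient tree boosting classifier; the sketch stops at the feature step.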

    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where the users can bring multiple devices together in-air - with 2-5 armatures poseable in 7DoF within the same workspace - to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.
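    The idea that relative orientation and proximity can drive context-sensitive interactions can be sketched with a minimal, hypothetical classifier for a two-device formation. The thresholds, category names, and the reduction of pose to 2D position plus yaw are our own assumptions for illustration, not values or logic taken from the AirConstellations system.

    ```python
    import math

    def classify_formation(p1, yaw1, p2, yaw2,
                           near=0.3, facing_tol=math.radians(30)):
        """Toy classifier for a two-device formation.

        p1/p2 are (x, y) positions in metres; yaw1/yaw2 are facing
        angles in radians. Thresholds and labels are illustrative.
        """
        dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
        # smallest absolute difference between the two facing directions
        dyaw = abs((yaw1 - yaw2 + math.pi) % (2 * math.pi) - math.pi)
        if dist < near and dyaw < facing_tol:
            return "tiled"      # close together, facing the same way
        if dist < near:
            return "layered"    # close together, different orientations
        return "detached"       # too far apart to form an ensemble
    ```

    A full system would track 7DoF poses of up to five armatures and add hysteresis so formations do not flicker at threshold boundaries; the sketch only shows the core distance/orientation test.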

    Sensor System for Rescue Robots

    A majority of rescue worker fatalities are a result of on-scene responses. Existing technologies help first responders in no-light scenarios, and there are even robots that can navigate radioactive areas. However, none are both quickly deployable and able to enter hard-to-reach or unsafe areas in an emergency event such as an earthquake or a storm that damages a structure. In this project we created a sensor platform system to augment existing robotic solutions so that rescue workers can search for people in danger while avoiding preventable injury or death and saving time and resources. Our results showed that we were able to build a 2D map of the room, updated as the robot moved, on a display while also showing a live thermal image of the area in front of the system. The system is also capable of taking a digital picture on a triggering event and then displaying it on the computer screen. We discovered that data transfer plays a huge role in making programs such as Arduino sketches and Processing interact with each other, and this needs to be accounted for when improving our project. In particular, our project is currently wired but should deliver data wirelessly to be of practical use. Furthermore, we dipped our feet into SLAM technologies; if our project were to become autonomous, more research into those algorithms would make this autonomy feasible.
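    The data-transfer problem the authors mention usually comes down to agreeing on a line-based serial protocol between the microcontroller and the host program. As a hedged sketch (the field names, packet layout, and the `KEY:value` format are our own invention, not the project's actual protocol), a host-side parser for such packets might look like this; in practice the bytes would arrive from a serial port rather than a test string.

    ```python
    def parse_packet(line):
        """Parse one line of a hypothetical 'KEY:value,KEY:value' serial
        protocol, e.g. b'T:23.5,D:120' for temperature and distance.

        Returns a dict of float fields; corrupted fields are skipped so
        a single bad reading never crashes the display loop.
        """
        fields = {}
        for part in line.decode("ascii", errors="replace").strip().split(","):
            if ":" in part:
                key, _, val = part.partition(":")
                try:
                    fields[key] = float(val)
                except ValueError:
                    pass  # drop the corrupted field, keep the rest
        return fields
    ```

    Tolerating partial corruption matters here because serial links to a moving robot drop and garble bytes; a parser that raises on the first bad field would stall the map and thermal display.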

    Emerging technologies for learning (volume 1)

    Collection of 5 articles on emerging technologies and trends.

    Virtual reality interfaces for seamless interaction with the physical reality

    In recent years head-mounted displays (HMDs) for virtual reality (VR) have made the transition from research to consumer product, and are increasingly used for productive purposes such as 3D modeling in the automotive industry and teleconferencing. VR allows users to create and experience realistic models of products, and enables immersive social interaction with distant colleagues. These solutions are a promising alternative to physical prototypes and meetings, as they require less investment in time and material. VR uses our visual dominance to deliver these experiences, making users believe that they are in another reality. However, while their mind is present in VR, their body is in the physical reality. From the user's perspective, this brings considerable uncertainty to the interaction. Currently, they are forced to take off their HMD in order to, for example, see who is observing them and to understand whether their physical integrity is at risk. This disrupts their interaction in VR, leading to a loss of presence - a main quality measure for the success of VR experiences. In this thesis, I address this uncertainty by developing interfaces that enable users to stay in VR while supporting their awareness of the physical reality. They maintain this awareness without having to take off the headset - which I refer to as seamless interaction with the physical reality. The overarching research vision that guides this thesis is, therefore, to reduce this disconnect between the virtual and physical reality. My research is motivated by a preliminary exploration of user uncertainty towards using VR in co-located, public places. This exploration revealed three main foci: (a) security and privacy, (b) communication with physical collaborators, and (c) managing presence in both the physical and virtual reality.
Each theme represents a section in my dissertation, in which I identify central challenges and give directions towards overcoming them as have emerged from the work presented here. First, I investigate security and privacy in co-located situations by revealing to what extent bystanders are able to observe general tasks. In this context, I explicitly investigate the security considerations of authentication mechanisms. I review how existing authentication mechanisms can be transferred to VR and present novel approaches that are more usable and secure than existing solutions from prior work. Second, to support communication between VR users and physical collaborators, I add to the field design implications for VR interactions that enable observers to choose opportune moments to interrupt HMD users. Moreover, I contribute methods for displaying interruptions in VR and discuss their effect on presence and performance. I also found that different virtual presentations of co-located collaborators have an effect on social presence, performance and trust. Third, I close my thesis by investigating methods to manage presence in both the physical and virtual realities. I propose systems and interfaces for transitioning between them that empower users to decide how much they want to be aware of the other reality. Finally, I discuss the opportunity to systematically allocate senses to these two realities: the visual one for VR and the auditory and haptic one for the physical reality. Moreover, I provide specific design guidelines on how to use these findings to alert VR users about physical borders and obstacles.