
    Advances in Human Robot Interaction for Cloud Robotics applications

    Get PDF
    This thesis analyzes different innovative techniques for human-robot interaction, with a focus on interaction with flying robots. The first part gives a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User’s Flying Organizer (UFO) project, which aims to develop a flying robot able to project information into the environment by exploiting concepts of Spatial Augmented Reality.
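    Projecting readable content onto arbitrary surfaces, as in the UFO project, typically requires pre-warping the image so that it appears undistorted after projection. The sketch below is a minimal illustration of this step using OpenCV, not code from the thesis; the file name and corner coordinates are hypothetical placeholders that a real system would obtain from projector-camera calibration.

    # Minimal keystone-correction sketch (hypothetical values, OpenCV).
    import cv2
    import numpy as np

    content = cv2.imread("info_panel.png")  # content to project (placeholder file)
    h, w = content.shape[:2]

    # Corners of the content in projector image coordinates.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

    # Where those corners must land so the projection looks rectangular on the
    # surface (placeholder values; normally derived from calibration).
    dst = np.float32([[40, 25], [w - 60, 10], [w - 30, h - 20], [20, h - 45]])

    H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography
    prewarped = cv2.warpPerspective(content, H, (w, h))
    cv2.imwrite("prewarped.png", prewarped)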

    The Next Generation BioPhotonics Workstation

    Get PDF

    Natural stimuli for mice: environment statistics and behavioral responses

    Get PDF

    Intrinsic ferroelectric switching in two-dimensional α-In₂Se₃

    Full text link
    Two-dimensional (2D) ferroelectric semiconductors present opportunities for integrating ferroelectrics into high-density ultrathin nanoelectronics. Among the few synthesized 2D ferroelectrics, α-In₂Se₃, known for its electrically addressable vertical polarization, has attracted significant interest. However, the understanding of many fundamental characteristics of this material, such as the existence of spontaneous in-plane polarization and switching mechanisms, remains controversial, marked by conflicting experimental and theoretical results. Here, our combined experimental characterizations with piezoresponse force microscopy and symmetry analysis conclusively dismiss previous claims of in-plane ferroelectricity in α-In₂Se₃. The processes of vertical polarization switching in monolayer α-In₂Se₃ are explored with deep-learning-assisted large-scale molecular dynamics simulations, revealing atomistic mechanisms fundamentally different from those of bulk ferroelectrics. Despite lacking in-plane effective polarization, 1D domain walls can be moved by both out-of-plane and in-plane fields, exhibiting unusual avalanche dynamics characterized by abrupt, intermittent moving patterns. The propagating velocity at various temperatures, field orientations, and strengths can be statistically described with a universal creep equation, featuring a dynamical exponent of 2 that is distinct from all known values for elastic interfaces moving in disordered media. This work rectifies a long-held misunderstanding regarding the in-plane ferroelectricity of α-In₂Se₃, and the quantitative characterizations of domain wall velocity will hold broad implications for both the fundamental understanding and technological applications of 2D ferroelectrics.
    Comment: 30 pages, 6 figures
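    Domain-wall creep in disordered media is conventionally described by a stretched-exponential velocity-field relation; a standard form (an assumption here, since the abstract does not reproduce the equation) is
    $$ v \simeq v_0 \exp\!\left[ -\frac{U_c}{k_B T} \left( \frac{E_c}{E} \right)^{\mu} \right], $$
    where $E$ is the driving field, $E_c$ a depinning threshold, $U_c$ an energy-barrier scale, and $\mu$ the creep exponent. A dynamical exponent of 2 would be anomalous in this framework: for 1D elastic interfaces in random-bond disorder, for example, the established value is $\mu = 1/4$.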

    Interacting "Through the Display"

    Get PDF
    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are ready to hand and have been proposed as interaction devices for external screens. However, only their input mechanism was taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality). Second, screens in the environment may be re-arranged (flexibility). And third, displays may be out of the user’s reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use on various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interacting at variable distances. In this work we propose a new interaction model called through the display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion.

    To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen. MobileVue, on the other hand, enables users to interact with an external screen at a distance. In each of these prototypes we analyzed their effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that allows the detection of screens purely based on their visual content. Users aim their personal device’s camera at the target display, which then appears in the live video shown in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user’s point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses and thus greater fields of view of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image.

    Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction). And above all, users can interact with external displays regardless of their actual size at variable distances without any loss of accuracy.
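    Shoot & Copy’s content-based screen detection can be approximated with standard feature matching: match the camera photo against a screenshot of the display’s framebuffer, estimate a homography, and map the viewfinder’s aim point into display coordinates. The sketch below is an illustration of that general technique with OpenCV, not the original implementation; the file names are placeholders.

    # Locate the targeted screen region by matching photo vs. framebuffer content.
    import cv2
    import numpy as np

    photo = cv2.imread("camera_photo.png", cv2.IMREAD_GRAYSCALE)          # placeholder
    screen = cv2.imread("display_framebuffer.png", cv2.IMREAD_GRAYSCALE)  # placeholder

    orb = cv2.ORB_create(2000)
    kp_p, des_p = orb.detectAndCompute(photo, None)
    kp_s, des_s = orb.detectAndCompute(screen, None)

    # Brute-force Hamming matching with cross-check; keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_p, des_s), key=lambda m: m.distance)[:200]

    src = np.float32([kp_p[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography from camera image to display coordinates; RANSAC rejects outliers.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # The user aims with the viewfinder center; map it to a display position.
    h, w = photo.shape
    target = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
    print("Targeted display coordinates:", target.ravel())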

    Liquid-crystal photonic applications

    Get PDF

    Human Control Law and Brain Activity of Voluntary Motion by Utilizing a Balancing Task with an Inverted Pendulum

    Get PDF
    Human characteristics of voluntary motion control are investigated, because such motion is fundamental to machine operation and human-computer systems. Using a force-feedback haptic device and a balancing task with a virtual inverted pendulum, participants were trained in the task while their hand motion and force were measured and their brain activity was monitored. First, through brain analysis by near-infrared spectroscopy (NIRS) and motion analysis of the pendulum, we identified the most expert participant. Next, the control characteristics of this expert were investigated by considering the operational force and the human delay factor. We found that predictive control based on velocity information was used predominantly, although perceptual feedback control of the pendulum posture was also at work. We also show that on-off intermittency control, a strategy for skilled balancing, can be described well by a linear model involving two types of time shifts for position and velocity. In addition, cortical activity for observation in an oculomotor control area and a visual processing area was strong, enhancing the above-mentioned control strategies.
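    One generic way to write such a linear model with two time shifts (assumed here; the abstract does not give the explicit form) is the delayed feedback law
    $$ F(t) = K_p\,\theta(t - \Delta_p) + K_v\,\dot{\theta}(t - \Delta_v), $$
    where $F$ is the operator’s force, $\theta$ the pendulum angle, $K_p$ and $K_v$ are gains, and $\Delta_p$ and $\Delta_v$ are the separate time shifts applied to the position and velocity terms; the predominance of velocity-based predictive control reported above corresponds to the velocity term dominating the response.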

    Optimization of craniosynostosis surgery: virtual planning, intraoperative 3D photography and surgical navigation

    Get PDF
    Craniosynostosis is a congenital defect defined as the premature fusion of one or more cranial sutures. This fusion leads to growth restriction and deformation of the cranium, caused by compensatory expansion parallel to the fused sutures. Surgical correction is the preferred treatment in most cases, excising the fused sutures and normalizing cranial shape. Although multiple technological advancements have arisen in the surgical management of craniosynostosis, interventional planning and surgical correction are still highly dependent on the subjective assessment and artistic judgment of craniofacial surgeons. Therefore, there is high variability in individual surgeon performance and, thus, in surgical outcomes. The main objective of this thesis was to explore different approaches to improve the surgical management of craniosynostosis by reducing subjectivity in all stages of the process, from the preoperative virtual planning phase to the intraoperative performance.

    First, we developed a novel framework for automatic planning of craniosynostosis surgery that enables calculating a patient-specific normative reference shape to target, estimating optimal bone fragments for remodeling, and computing the most appropriate configuration of fragments to achieve the desired target cranial shape. Our results showed that automatic plans were accurate and achieved adequate overcorrection with respect to normative morphology. Surgeons’ feedback indicated that integrating this technology could increase the accuracy and reduce the duration of the preoperative planning phase.

    Second, we validated the use of hand-held 3D photography for intraoperative evaluation of the surgical outcome. The accuracy of this technology for 3D modeling and morphology quantification was evaluated using computed tomography imaging as the gold standard. Our results demonstrated that 3D photography could be used to perform accurate 3D reconstructions of the anatomy during surgical interventions and to measure morphological metrics that provide feedback to the surgical team. This technology presents a valuable alternative to computed tomography imaging and can be easily integrated into the current surgical workflow to assist during the intervention.

    Also, we developed an intraoperative navigation system to provide real-time guidance during craniosynostosis surgeries. This system, based on optical tracking, enables recording the positions of remodeled bone fragments and comparing them with the target virtual surgical plan. Our navigation system uses patient-specific surgical guides, which fit onto the patient’s anatomy, to perform patient-to-image registration. In addition, our workflow does not rely on immobilization of the patient’s head or on invasive attachment of dynamic reference frames. After testing our system in five craniosynostosis surgeries, our results demonstrated high navigation accuracy and optimal surgical outcomes in all cases. Furthermore, the use of navigation did not substantially increase the operative time.

    Finally, we investigated the use of augmented reality technology as an alternative to navigation for surgical guidance in craniosynostosis surgery. We developed an augmented reality application to visualize the virtual surgical plan overlaid on the surgical field, indicating the predefined osteotomy locations and target bone fragment positions. Our results demonstrated that augmented reality provides sub-millimetric accuracy when guiding both the osteotomy and remodeling phases during open cranial vault remodeling. Surgeons’ feedback indicated that this technology could be integrated into the current surgical workflow for the treatment of craniosynostosis.

    To conclude, in this thesis we evaluated multiple technological advancements to improve the surgical management of craniosynostosis. The integration of these developments into the surgical workflow of craniosynostosis will positively impact surgical outcomes, increase the efficiency of surgical interventions, and reduce variability between surgeons and institutions.
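    Patient-to-image registration with patient-specific guides is typically solved as a paired-point rigid registration. The sketch below shows the standard SVD-based (Kabsch/Arun) solution as a general illustration, not the thesis’s specific implementation; the landmark coordinates are placeholders.

    # Least-squares rigid transform (R, t) mapping tracker-space points onto CT space.
    import numpy as np

    def rigid_register(src, dst):
        """Return rotation R and translation t minimizing ||src @ R.T + t - dst||."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Placeholder correspondences: guide landmarks measured by the optical tracker
    # vs. the same landmarks located in the preoperative CT image.
    tracker_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0],
                            [0.0, 40.0, 0.0], [0.0, 0.0, 30.0]])
    rot90z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    image_pts = tracker_pts @ rot90z.T + np.array([10.0, 5.0, -2.0])

    R, t = rigid_register(tracker_pts, image_pts)
    fre = np.linalg.norm(tracker_pts @ R.T + t - image_pts, axis=1).mean()
    print("Mean fiducial registration error (mm):", fre)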

    Active and passive reduction of high order modes in the gravitational wave detector GEO 600

    Get PDF
    [no abstract]