79 research outputs found

    Low-Cost Inventions and Patents

    Get PDF
    Inventions have driven the technological advances of mankind. There are inventions of all kinds, some of which have lasted hundreds of years or even longer. Low-cost technologies are expected to be easy to build, to consume little or no energy, and to be easy to maintain and operate. The use of sustainable technologies is essential for moving towards greater global coverage of technology, and thereby for improving human quality of life. Low-cost products always respond to a specific need, even when no in-depth analysis of the situation or of possible solutions has been carried out. There is a consensus in all industrialized countries that patents have a decisive influence on the organization of the economy, as they are a key element in promoting technological innovation. Patents must aim to promote the technological development of countries, starting from their industrial situations.

    Command and Control Systems for Search and Rescue Robots

    Get PDF
    The novel application of unmanned systems in the domain of humanitarian Search and Rescue (SAR) operations has created a need to develop specific multi-Robot Command and Control (RC2) systems. This societal application of robotics requires human-robot interfaces for controlling a large fleet of heterogeneous robots deployed in multiple domains of operation (ground, aerial and marine). This chapter provides an overview of the Command, Control and Intelligence (C2I) system developed within the scope of Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS). The life cycle of the system begins with a description of use cases and deployment scenarios developed in collaboration with SAR teams as end-users. This is followed by an illustration of the system design and architecture, the core technologies used in implementing the C2I, and the iterative integration phases with field deployments for evaluating and improving the system. The main subcomponents consist of a central Mission Planning and Coordination System (MPCS), field Robot Command and Control (RC2) subsystems with a portable force-feedback exoskeleton interface for robot arm tele-manipulation, and field mobile devices. The distribution of these C2I subsystems with their communication links for unmanned SAR operations is described in detail. Field demonstrations of the C2I system with SAR personnel assisted by unmanned systems provide an outlook for implementing such systems into mainstream SAR operations in the future.
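    The abstract describes a distributed architecture in which a central MPCS coordinates field RC2 subsystems per domain of operation. As a purely hypothetical illustration (the message names, fields, and routing rule below are assumptions, not part of the ICARUS system), a task handed from the central planner to a field subsystem might be modelled like this:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Domain(Enum):
        GROUND = "ground"
        AERIAL = "aerial"
        MARINE = "marine"

    @dataclass
    class SearchTask:
        """Hypothetical task message from a central planner to a field subsystem."""
        task_id: int
        domain: Domain       # which robot fleet should handle the task
        area: tuple          # (lat, lon, radius_m) of the search zone
        priority: int = 0

    def dispatch(task: SearchTask) -> str:
        # A central planner would route each task to the command-and-control
        # subsystem responsible for robots in the matching domain of operation.
        return f"RC2-{task.domain.value}"

    task = SearchTask(task_id=1, domain=Domain.AERIAL, area=(48.1, 11.5, 500.0))
    print(dispatch(task))  # -> RC2-aerial
    ```

    The point of the sketch is only the separation of concerns: mission planning stays central, while per-domain command and control is delegated to field subsystems.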

    Advanced Mobile Robotics: Volume 3

    Get PDF
    Mobile robotics is a challenging field with great potential. It covers disciplines including electrical engineering, mechanical engineering, computer science, cognitive science, and social science. It is essential to the design of automated robots, in combination with artificial intelligence, vision, and sensor technologies. Mobile robots are widely used for surveillance, guidance, transportation and entertainment tasks, as well as medical applications. This Special Issue concentrates on recent developments concerning mobile robots and the research surrounding them, to advance work on the fundamental problems observed in such robots. Various multidisciplinary approaches and integrative contributions, including navigation, learning and adaptation, networked systems, biologically inspired robots, and cognitive methods, are welcome in this Special Issue, from both a research and an application perspective.

    Development of an Underground Mine Scout Robot

    No full text
    Despite increased safety and improved technology in the mining industry, fatal disasters still occur. Robots have the potential to be an invaluable resource for search and rescue teams to scout dangerous or difficult situations. Existing underground mine search and rescue robots have demonstrated limited success. The two primary concerns identified in the literature are unreliable locomotion systems and a lack of consideration of the underground mine environment. HADES, an underground mine disaster scout, addresses these issues with a unique chassis and novel locomotion. A system-level design is carried out, addressing the difficulties of underground mine environments. To operate in an explosive atmosphere, a purge and pressurisation system is applied to a fibreglass chassis, with intrinsic safety incorporated into the sensor design. To prevent dust, dirt and water from damaging the electronics, ingress protection is applied through sealing. The chassis is invertible, with a low centre of gravity and a roll-axis pivot. This chassis design, in combination with spoked wheels, allows traversal of the debris and rubble of a disaster site. Electrochemical gas sensors are incorporated, along with RGB-D cameras, two-way audio and various other environment sensors. A communication system combining a tether and a mesh network is designed, with wireless nodes to increase wireless range and reliability. Electronic hardware and software control are implemented to produce an operational scout robot. HADES is 0.7 × 0.6 × 0.4 m, with a sealed IP65 chassis. The locomotion system is robust and effective, able to traverse most debris and rubble, as tested on the university grounds and at a clean landfill. Bottoming out is the only problem encountered, but it can be avoided by approaching obstacles correctly. The motor drive system is able to drive HADES at walking speed (1.4 m/s) and provides more torque than traction allows. Six lithium-polymer batteries enable 2 hours 28 minutes of continuous operation. At 20 kg and ~$7000, HADES is a portable, inexpensive scout robot for underground mine disasters.
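    From the figures quoted in the abstract, one can sketch a rough upper bound on the driving range per battery charge. This is a back-of-envelope estimate only: it assumes continuous driving at the stated walking speed for the full 2 h 28 min and ignores obstacles, terrain, and sensing overhead:

    ```python
    # Rough upper bound on HADES's driving range per battery charge,
    # using only the figures quoted in the abstract.
    speed_m_per_s = 1.4                    # stated walking-speed drive, m/s
    runtime_s = 2 * 3600 + 28 * 60         # 2 h 28 min of continuous operation
    max_range_m = speed_m_per_s * runtime_s
    print(f"{max_range_m / 1000:.1f} km")  # -> 12.4 km
    ```

    In practice a scout robot spends much of its runtime stationary while sensing, so the achievable range would be well below this ceiling.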

    Control System Development and Technological Investigation for a Climbing Robot in Offshore platforms

    Get PDF
    This thesis reviews different types of technologies for developing a mobile climbing robot for offshore platforms.

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    Get PDF
    The application of laser technologies in surgical interventions has been accepted in the clinical domain due to their atraumatic properties. In addition to the manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has prevailed in ENT surgery. However, TLM requires many years of surgical training for tumour resection in order to preserve the function of adjacent organs and thus the patient’s quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability and limit the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for endoscopic delivery of focused laser radiation is available to date. Likewise, there are no known devices that incorporate scene information from endoscopic imaging into ablation planning and execution. To focus the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform is realised for manipulation of the focusing optics. It is based on a variable-length continuum manipulator which, together with a mechatronic actuation unit, enables movements of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally.
    The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic and visual-haptic assistance functions are presented. These support the operator during teleoperation to set an optimal working distance. Advantages of visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot. The mean positioning accuracy of the spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of proposed automation concepts on non-contact laser surgery.
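    The abstract mentions kinematic modelling of the variable-length continuum manipulator without giving details. A common textbook approach for such robots is the constant-curvature model; the sketch below is that generic model, not the dissertation's actual framework, and the segment parameters are assumed values:

    ```python
    import math

    def tip_position(kappa, phi, length):
        """Forward kinematics of a single constant-curvature segment.

        kappa:  curvature in 1/m (0 means a straight segment)
        phi:    bending-plane angle in rad
        length: arc length in m (variable for an extensible segment)
        Returns the (x, y, z) tip position in the segment base frame.
        """
        if abs(kappa) < 1e-9:            # straight segment: tip lies on the z-axis
            return (0.0, 0.0, length)
        r = 1.0 / kappa                  # bending radius
        x_plane = r * (1.0 - math.cos(kappa * length))
        z = r * math.sin(kappa * length)
        return (x_plane * math.cos(phi), x_plane * math.sin(phi), z)

    # Straight configuration of a 0.1 m segment reaches (0, 0, 0.1):
    print(tip_position(0.0, 0.0, 0.1))
    ```

    Because arc length appears as a free parameter, extending or retracting the segment adds a degree of freedom beyond bending, which is one way a variable-length design can reach five degrees of freedom at the end effector.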

    Understanding and designing for control in camera operation

    Get PDF
    Cinematographers often use supportive tools to craft desired camera moves. Recent technological advances added new tools to the palette such as gimbals, drones or robots. The combination of motor-driven actuation, computer vision and machine learning in such systems also rendered new interaction techniques possible. In particular, a content-based interaction style was introduced in addition to the established axis-based style. On the one hand, content-based cocreation between humans and automated systems made it easier to reach high level goals. On the other hand however, the increased use of automation also introduced negative side effects.
    Creatives usually want to feel in control while executing the camera motion and, in the end, as the authors of the recorded shots. While automation can assist experts or enable novices, it unfortunately also takes away desired control from operators. Thus, if we want to support cinematographers with new tools and interaction techniques, the following question arises: How should we design interfaces for camera motion control that, despite being increasingly automated, provide cinematographers with an experience of control? Camera control has been studied for decades, especially in virtual environments. Applying content-based interaction to physical environments opens up new design opportunities but also faces less-researched, domain-specific challenges. To suit the needs of cinematographers, designs need to be crafted with care. In particular, they must adapt to the constraints of recording on location, which makes an interplay with established practices essential. Previous work has mainly focused on a technology-centered understanding of camera travel, which consequently influenced the design of camera control systems. In contrast, this thesis contributes to the understanding of the motives of cinematographers and how they operate on set, and provides a user-centered foundation informing cinematography-specific research and design. The contribution of this thesis is threefold: First, we present ethnographic studies of expert users and their shooting practices on location. These studies highlight the challenges of introducing automation to a creative task (assistance vs. feeling in control). Second, we report on a domain-specific prototyping toolkit for in-situ deployment. The toolkit provides open-source software for low-cost replication, enabling the exploration of design alternatives. To better inform design decisions, we further introduce an evaluation framework for estimating the resulting quality and sense of control. By extending established methodologies with a recent neuroscientific technique, it provides data on explicit as well as implicit levels and is designed to be applicable to other domains of HCI. Third, we present evaluations of designs based on our toolkit and framework. We explored a dynamic interplay of manual control with various degrees of automation. Further, we examined different content-based interaction styles. Here, occlusion due to graphical elements was found and addressed by exploring visual-reduction strategies and mid-air gestures. Our studies demonstrate that high degrees of quality and sense of control are achievable with our tools, which also support creativity and established practices.

    Robotics 2010

    Get PDF
    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement for intelligent and autonomous operation. This book tries to give an insight into the evolutionary process that takes place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks optimistically ahead and looks forward to future challenges and new developments.

    A Biologically Inspired Jumping and Rolling Robot

    Get PDF
    Mobile robots for rough terrain are of interest to researchers as their range of possible uses is large, including exploration activities for inhospitable areas on Earth and on other planets and bodies in the solar system, searching in disaster sites for survivors, and performing surveillance for military applications. Nature generally achieves land movement by walking using legs, but additional modes such as climbing, jumping and rolling are all produced from legs as well. Robotics tends not to use this integrated approach and adds additional mechanisms to achieve additional movements. The spherical device described within this thesis, called Jollbot, integrated a rolling motion for faster movement over smoother terrain, with a jumping movement for rougher environments. Jollbot was developed over three prototypes. The first achieved pause-and-leap style jumps by slowly storing strain energy within the metal elements of a spherical structure using an internal mechanism to deform the sphere. A jump was produced when this stored energy was rapidly released. The second prototype achieved greater jump heights using a similar structure, and added direction control to each jump by moving its centre of gravity around the polar axis of the sphere. The final prototype successfully combined rolling (at a speed of 0.7 m/s, up 4° slopes, and over 44 mm obstacles) and jumping (0.5 m cleared height), both with direction control, using a 0.6 m spherical spring steel structure. Rolling was achieved by moving the centre of gravity outside of the sphere’s contact area with the ground. Jumping was achieved by deflecting the sphere in a similar method to the first and second prototypes, but through a larger percentage deflection. An evaluation of existing rough terrain robots is made possible through the development of a five-step scoring system that produces a single numerical performance score. 
The system is used to evaluate the performance of Jollbot.
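    The stored-energy jumping principle above follows basic projectile energetics. The sketch below estimates the strain energy the spherical structure must store to clear the quoted 0.5 m; the robot mass and release efficiency are assumed values for illustration, not figures from the thesis:

    ```python
    # Strain energy a pause-and-leap jumper must store to clear a given height.
    # E_stored * efficiency = m * g * h  (all losses lumped into `efficiency`).
    g = 9.81            # gravitational acceleration, m/s^2
    height = 0.5        # cleared jump height from the abstract, m
    mass = 2.0          # ASSUMED robot mass in kg (not stated in the abstract)
    efficiency = 0.5    # ASSUMED fraction of stored energy converted to lift

    required_energy = mass * g * height / efficiency
    print(f"{required_energy:.2f} J")   # -> 19.62 J for these assumed values
    ```

    The slow-store, fast-release strategy exists precisely because a small motor can accumulate this energy over seconds, while the jump itself demands it in milliseconds.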

    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    Get PDF
    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which is also required for human space exploration missions. Individual sessions addressed the nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.