28 research outputs found

    Motion Control of the Hybrid Wheeled-Legged Quadruped Robot Centauro

    Emerging applications will demand robots that can deal with complex environments, which lack the structure and predictability of the industrial workspace. Complex scenarios will require robot complexity to increase as well, compared to classical topologies such as fixed-base manipulators, wheeled mobile platforms, tracked vehicles, and their combinations. Legged robots, such as humanoids and quadrupeds, promise to provide platforms flexible enough to handle real-world scenarios; however, the improved flexibility comes at the cost of significantly higher control complexity. As a trade-off, hybrid wheeled-legged robots have been proposed, mitigating control complexity whenever the ground surface is suitable for driving. Following this idea, a new hybrid robot called Centauro has been developed within the Humanoid and Human Centered Mechatronics lab at Istituto Italiano di Tecnologia (IIT). Centauro is a wheeled-legged quadruped with a humanoid bi-manual upper body. Unlike other platforms of similar concept, Centauro employs customized actuation units, which provide high torque output, moderately fast motions, and the ability to control the exerted torque. Moreover, with more than forty motors moving its limbs, Centauro is a highly redundant platform, with the potential to execute many different tasks at the same time. This thesis deals with the design and development of a software architecture, and a control system, tailored to such a robot; both wheeled and legged locomotion strategies have been studied, as well as prioritized, whole-body and interaction controllers that exploit the robot's torque control capabilities and handle the system redundancy. A novel software architecture, made of (i) a real-time robotic middleware, and (ii) a framework for online, prioritized Cartesian control, forms the basis of the entire work.
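
    The abstract does not spell out the control law, but prioritized, redundancy-resolving Cartesian control of the kind described is commonly realised with null-space projections: each lower-priority task is executed only in the null space of the tasks above it. The following is a minimal Python/NumPy sketch of that general technique, not the thesis implementation; the Jacobians and task velocities (J1, dx1, J2, dx2) are illustrative placeholders.

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Two-level prioritized differential kinematics via null-space projection.

    J1, J2   -- task Jacobians (m1 x n, m2 x n)
    dx1, dx2 -- desired task-space velocities (m1,), (m2,)
    Returns joint velocities (n,) realising task 1 exactly (when feasible)
    and task 2 as well as possible inside the null space of task 1.
    """
    J1_pinv = np.linalg.pinv(J1)
    n = J1.shape[1]
    N1 = np.eye(n) - J1_pinv @ J1        # projector onto the null space of task 1
    dq = J1_pinv @ dx1                   # primary task contribution
    # Secondary task, corrected for the motion already induced by the primary
    # task and projected so it cannot disturb it.
    dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)
    return dq
```

    On a platform with more than forty joints, the same recursion extends to a whole stack of tasks, each projected into the null space of all higher-priority ones.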

    Integrated Data, Message, and Process Recovery for Failure Masking in Web Services

    Modern Web Services applications encompass multiple distributed interacting components, possibly including millions of lines of code written in different programming languages. With this complexity, some bugs often remain undetected despite extensive testing procedures, and occasionally cause transient system failures. Incorrect failure handling in applications often leads to incomplete or unintentionally repeated request executions. A family of recovery protocols called interaction contracts provides a generic solution to this problem by means of system-integrated data, process, and message recovery for multi-tier applications. It is able to mask failures, and allows programmers to concentrate on the application logic, thus speeding up the development process. This thesis consists of two major parts. The first part formally specifies the interaction contracts using the state-and-activity chart language. Moreover, it presents a formal specification of a concrete Web Service that makes use of interaction contracts and contains no other error-handling actions. The formal specifications undergo verification, where crucial safety and liveness properties expressed in temporal logics are mathematically proved by means of model checking. In particular, it is shown that each end-user request is executed exactly once. The second part of the thesis demonstrates the viability of the interaction framework in a real-world system. More specifically, a cascadable Web Service platform, EOS, is built from widely used components, the Microsoft Internet Explorer browser and the PHP application server, with interaction contracts integrated into them.
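
    The exactly-once guarantee is the essence of an interaction contract: messages and state are logged so that a replayed request is recognised and answered from the log instead of being re-executed. As a rough illustration of the deduplication idea only (the wrapper and log file below are invented for this sketch, not the EOS design):

```python
import shelve

def make_exactly_once(handler, log_path="msg_log.db"):
    """Wrap a request handler so each request id is executed at most once.

    After a crash, a client retry with the same request id receives the
    logged reply rather than triggering a second execution (failure masking).
    """
    def wrapped(request_id, payload):
        with shelve.open(log_path) as log:   # persistent message/reply log
            if request_id in log:
                return log[request_id]       # duplicate: replay the stored reply
            reply = handler(payload)         # first delivery: execute once
            log[request_id] = reply          # log the reply before acknowledging
            return reply
    return wrapped
```

    A real interaction contract must also make the execute-and-log step atomic; this sketch ignores that crash window, which is precisely what the combined data, message, and process recovery of the thesis addresses.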

    Proactive measurement techniques for network monitoring in heterogeneous environments

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones, 201

    Implementation of a robot platform to study bipedal walking

    In this project, a modification of an open-source, 3D-printed robot was implemented, with the purpose of creating a more affordable bipedal platform suitable for studying bipedal walking algorithms. The original robot is part of an open-source platform, called Poppy, that is formed by an interdisciplinary community of beginners and experts. One of the robots of this platform is the Poppy Humanoid. The rigid parts of the Poppy Humanoid (as well as of the rest of the Poppy platform robots) are 3D printed, a key factor in lowering the cost of a robot. The actuators used, though, are expensive commercial DC motors that increase the total cost of the robot drastically. This high cost of Poppy's actuators led this project to modify cheaper actuators while maintaining the same performance as their predecessors. Taking apart the components of the cheaper actuator, only the motor, the gears, and the case that hosts them were kept, and a new design was made to control the motor and to meet the requirements set by the commercial motors. This new design of the actuator includes a 12-bit-resolution magnetic encoder to read the position of the motor shaft, a driver to run the motor, and an embedded Arduino micro-controller. Having an Arduino as part of the actuator gives an advantage over the commercial motor, as users have the freedom to upload their own code and implement their own motor controllers. The result is a fully programmable actuator hosted in the same motor case. The size of this actuator, though, differs from the commercial one. In order to mount the new actuators on the platform, Joan Guasch designed suitable 3D-printed parts. Apart from these parts, Joan also modified the leg design in order to add another joint at the ankle (roll), as this degree of freedom (DoF) is important for bipedal walking algorithms and was missing from the original Poppy Humanoid leg design. The modified robot is called Poppy-UPC and is a 12-DoF biped platform. For the communication between the motors and the main computer unit, a serial communication protocol was implemented based on the RS-485 standard. Multiple receivers (motors and sensors) can be connected to such a network in a linear, multi-drop configuration. The main computer unit of Poppy-UPC is an Odroid-C1 board. Essentially, this board is a quad-core Linux computer fully capable of running ROS. The Odroid acts as the master of the network and gathers all the information from the connected nodes in order to publish it on ROS topics. In this way, Poppy-UPC is connected to the ROS environment, and ROS packages can be used for any further implementation with this platform. Finally, following the open-source spirit of the Poppy platform, all the code and information are available at https://github.com/dimitris-zervas
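
    As a rough illustration of the master's polling loop, the Python sketch below reads joint positions over a serial RS-485 link and republishes them on ROS topics. The port, baud rate, one-byte query packet, and topic names are assumptions for illustration only; the actual protocol and code live in the repository linked above.

```python
import serial                    # pyserial
import rospy
from std_msgs.msg import Float64

PORT, BAUD = "/dev/ttyUSB0", 1000000   # assumed RS-485 adapter settings
MOTOR_IDS = range(1, 13)               # 12-DoF platform

def main():
    rospy.init_node("poppy_upc_master")
    pubs = {i: rospy.Publisher("/poppy_upc/joint_%d/position" % i,
                               Float64, queue_size=10) for i in MOTOR_IDS}
    bus = serial.Serial(PORT, BAUD, timeout=0.01)
    rate = rospy.Rate(100)             # poll the bus at 100 Hz
    while not rospy.is_shutdown():
        for i in MOTOR_IDS:
            bus.write(bytes([i]))      # hypothetical one-byte position query
            raw = bus.read(2)          # 12-bit encoder reading in two bytes
            if len(raw) == 2:
                ticks = int.from_bytes(raw, "big") & 0x0FFF
                pubs[i].publish(ticks * 360.0 / 4096.0)   # convert to degrees
        rate.sleep()

if __name__ == "__main__":
    main()
```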

    Nuclear Structure Relevant to Double-beta Decay: Studies of ⁷⁶Ge and ⁷⁶Se using Inelastic Neutron Scattering

    While neutrino oscillations indicate that neutrino flavors mix and that neutrinos have mass, they do not supply information on the absolute mass scale of the three flavors of neutrinos. Currently, the only viable way to determine this mass scale is through the observation of the theoretically predicted process of neutrinoless double-beta decay (0νββ). This yet-to-be-observed decay process is expected to occur in a handful of nuclei and has predicted half-lives greater than 10²⁵ years. Observation of 0νββ is the goal of several large-scale, multinational efforts and consists of detecting a sharp peak in the summed β energies at the Q-value of the reaction. An exceptional candidate for the observation of 0νββ is ⁷⁶Ge, which offers an excellent combination of capabilities and sensitivities, and two such collaborations, MAJORANA and GERDA, propose tonne-scale experiments that have already begun initial phases using a fraction of the material. The absolute scale of the neutrino masses hinges on a matrix element, which depends on the ground-state wave functions of both the parent (⁷⁶Ge) and daughter (⁷⁶Se) nuclei in the 0νββ decay and can only be calculated from nuclear structure models. Efforts to provide information on the applicability of these models have been undertaken at the University of Kentucky Accelerator Laboratory using gamma-ray spectroscopy following inelastic scattering reactions with monoenergetic, accelerator-produced fast neutrons. Information on new energy levels and transitions, spin and parity assignments, lifetimes, multipole mixing ratios, and transition probabilities has been determined for ⁷⁶Se, the daughter of ⁷⁶Ge 0νββ decay, up to 3.0 MeV. Additionally, inaccuracies in the accepted level schemes have been addressed. Observation of 0νββ requires precise knowledge of potential contributors to background within the region of interest, i.e., approximately 2039 keV for ⁷⁶Ge. In addition to backgrounds resulting from surrounding materials in the experimental setup, ⁷⁶Ge has a previously observed 3952-keV level with a de-exciting 2040-keV γ ray. This γ ray constitutes a potential background for 0νββ searches if this level is excited. The cross sections for this level and, subsequently, for the 2040-keV γ ray have been determined in the neutron energy range from 4 to 5 MeV.
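
    For context, the standard relation for the light-neutrino exchange mechanism (not stated in the abstract, but standard in the field) between the 0νββ half-life and the effective Majorana mass shows why the nuclear matrix element is the bottleneck:

```latex
% G^{0\nu}: calculable phase-space factor; M^{0\nu}: nuclear matrix element
% from structure models; <m_bb>: effective Majorana neutrino mass.
\[
  \left[ T_{1/2}^{0\nu} \right]^{-1}
    = G^{0\nu}\,\bigl| M^{0\nu} \bigr|^{2}
      \left( \frac{\langle m_{\beta\beta} \rangle}{m_{e}} \right)^{2}
\]
```

    A measured half-life therefore yields ⟨m_ββ⟩ only as accurately as M^{0ν} is known, which is what the ⁷⁶Se structure data above help constrain.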

    Recent developments in GEANT4

    GEANT4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of GEANT4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.

    The development of X-ray backscatter imaging systems through simulation

    X-ray backscatter has applications in defence and security, medical imaging, astrophysics and industry. The development and testing of X-ray backscatter imaging systems can be achieved not only by experiment, but also by Monte Carlo modelling. The PENELOPE simulation package was chosen for its versatility and transparency. However, PENELOPE is a radiation transport package that is not user-friendly, is not inherently compatible with parallel processing, and is not equipped with the facility to process output data in a way that replicates the output from imaging plates or energy dispersive detectors. Tools called PENMAT and PAXI were written in MATLAB to extend the capability of PENELOPE and so enable the efficient exploration of X-ray backscatter imaging, which is the focus of this study. The enhanced PENELOPE suite was used to model a real thermionic source, to validate the process by comparison with experiment, and to model virtual sources suitable for exploring fundamental principles of backscatter. Virtual sources were conceived and designed to efficiently characterise various imaging system features. These include mono-directional and mono-energetic sources (to isolate energy-dependent scattering cross sections), flat-spectrum sources (to objectively characterise transmission through mask materials) and thin ‘wire form’ sources (to simultaneously characterise the spatial resolution and field of view of X-ray optics). A process of using virtual detectors to feed the input of virtual sources was used to shortcut the repeated, computationally expensive modelling of a thermionic tube. With this efficient process and parallel computing, various combinations of pinhole and coded-aperture optics could be efficiently tested and compared. To enable systematic comparisons, the image quality metrics of signal, noise, contrast, resolution, field of view, etc. were identified, and procedures were developed to extract them from images. For the experimental energy range of likely practical use, it was found that pure tungsten masks were superior to the other alloys studied and that a 2 mm pinhole gave the most generally suitable resolution/signal compromise. The results were consistent with physical experiment. A range of coded apertures were also modelled and compared favourably with experiment. The pinhole work on field of view informs the envelope within which coded apertures can avoid partial coding. The HEXITEC energy dispersive image plate was used to collect experimental images of a multi-material quadrant. The image was simulated accurately using PAXI. Further, modelling with PAXI allowed the distinct interaction processes giving rise to image characteristics to be isolated. This concept was extended with a unique and innovative 2π hemispherical detector, which efficiently captured backscattered X-rays from carbon, copper, manganese dioxide, and lead, both shielded and unshielded. This process allowed the brightness of materials to be studied, as governed by the complex combination of attenuation and cross section with angle. Further, the relative contributions from Compton, elastic and fluorescent processes to image brightness and spectral features could be isolated and compared with angle, with and without shielding. This cannot be achieved by experiment, and it pilots how modelling can inform the best beam energies and detector angles at which the backscatter X-rays contain the right information to characterise materials and structures.
    This work includes significant use of simulation and also a strong supporting element of physical experimentation. The development of modelling techniques and their exploitation can give information that physical experiment cannot, whilst experimentation has been shown to validate the use of simulation and to identify some limitations.
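
    The metric-extraction procedures mentioned above reduce, at their core, to region-of-interest statistics. A minimal sketch of that idea in Python/NumPy (the thesis tools PENMAT and PAXI are MATLAB; the function and masks here are illustrative only):

```python
import numpy as np

def image_metrics(img, signal_mask, background_mask):
    """Basic image-quality metrics from a detector image.

    img             -- 2D array of pixel intensities
    signal_mask     -- boolean mask selecting the feature region
    background_mask -- boolean mask selecting a background region
    """
    s_mean = img[signal_mask].mean()
    b_mean = img[background_mask].mean()
    b_std = img[background_mask].std()
    return {
        "signal": s_mean,
        "noise": b_std,
        "snr": (s_mean - b_mean) / b_std,                  # signal-to-noise ratio
        "contrast": (s_mean - b_mean) / (s_mean + b_mean), # Michelson-style contrast
    }
```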

    Investigating call control using MGCP in conjunction with SIP and H.323

    Telephony used to mean using a telephone to call another telephone on the Public Switched Telephone Network (PSTN), and data networks were used purely to allow computers to communicate. However, with the advent of the Internet, telephony services have been extended to run on data networks. Telephone calls within the IP network are known as Voice over IP (VoIP). These calls are carried by a number of protocols, the most popular currently being the Session Initiation Protocol (SIP) and H.323. Calls can be made from the IP network to the PSTN and vice versa through the use of a gateway. The gateway translates the packets from the IP network into circuits on the PSTN, and vice versa, to facilitate calls between the two networks. Gateways have evolved and are now split into two entities using a master/slave architecture. The master is an intelligent Media Gateway Controller (MGC) that handles the call control and signalling. The slave is a "dumb" Media Gateway (MG) that handles the translation of the media. The gateway control protocols currently in use are Megaco/H.248, MGCP and Skinny. These protocols have proved themselves at the edge of the network. Furthermore, since they communicate with the call signalling VoIP protocols as well as with the PSTN, they have to be the lingua franca between the two networks. Within the VoIP network, the number of call signalling protocols makes it difficult for endpoints to communicate with each other and for developers to create services. This research investigates the use of gateway control protocols as the lowest common denominator between the call signalling protocols SIP and H.323. More specifically, it uses MGCP to investigate service creation. It also considers the use of MGCP as a protocol translator between SIP and H.323. A service was created using MGCP to allow H.323 endpoints to send Short Message Service (SMS) messages. This service was then extended with minimal effort to SIP endpoints. This service tested MGCP's ability to handle call control from the H.323 and SIP endpoints. An MGC was then successfully used as a protocol translator between SIP and H.323.
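
    For a flavour of what MGC-to-MG call control looks like on the wire, the sketch below sends an MGCP CreateConnection (CRCX) command over UDP as defined in RFC 3435; gateways conventionally listen on UDP port 2427. The gateway address, endpoint name, and call ID are invented for illustration, and none of the thesis's SMS service logic is reproduced here.

```python
import socket

GATEWAY = ("192.0.2.10", 2427)   # hypothetical media gateway address

# Minimal CRCX command: verb, transaction id, endpoint, protocol version,
# then parameter lines -- C: call id, L: local connection options
# (packetization period, codec), M: connection mode.
crcx = (
    "CRCX 1000 aaln/1@mg.example.net MGCP 1.0\r\n"
    "C: A3C47F21456789F0\r\n"
    "L: p:20, a:PCMU\r\n"
    "M: recvonly\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(crcx.encode("ascii"), GATEWAY)
try:
    reply, _ = sock.recvfrom(4096)   # a success reply starts with "200 1000 ..."
    print(reply.decode("ascii", "replace"))
except socket.timeout:
    print("no response from gateway")
```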