20 research outputs found

    Effect of combination therapy of hydroxychloroquine and azithromycin on mortality in patients with COVID-19

    Conflicting evidence exists regarding the use of hydroxychloroquine (HCQ) and azithromycin for the treatment of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. We performed a retrospective single-center cohort study including 377 consecutive patients admitted for pneumonia related to coronavirus disease 2019 (COVID-19). Of these, 297 received combination treatment, 17 received HCQ alone, and 63 received neither of these two drugs because of contraindications. The primary end point was in-hospital death. Mean age was 71.8 ± 13.4 years and 34.2% were women. We recorded 146 deaths: 35 in the no-treatment group, 7 in the HCQ group, and 102 in the HCQ + azithromycin group (log-rank test for Kaplan–Meier curves P < 0.001). In multivariable Cox proportional hazards regression analysis, age (hazard ratio (HR) 1.057, 95% confidence interval (CI) 1.035–1.079, P < 0.001), mechanical ventilation/continuous positive airway pressure (HR 2.726, 95% CI 1.823–4.074, P < 0.001), and C-reactive protein above the median (HR 2.191, 95% CI 1.479–3.246, P < 0.001) were directly associated with death, whereas use of HCQ + azithromycin (vs. no treatment; HR 0.265, 95% CI 0.171–0.412, P < 0.001) was inversely associated. In this study, we found reduced in-hospital mortality in patients treated with a combination of HCQ and azithromycin after adjustment for comorbidities. A large randomized trial is necessary to confirm these findings.
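
    The adjusted hazard ratios above come from a multivariable Cox proportional hazards model. As a hedged illustration only (not the study's actual analysis code), the sketch below shows how such a model might be fit with the lifelines library; the input file and column names (time_to_event, died, age, mech_vent, crp_high, hcq_azithro) are hypothetical placeholders for the covariates named in the abstract.

```python
# Illustrative sketch, not the study's analysis: multivariable Cox model.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table with follow-up time, outcome, and covariates.
df = pd.read_csv("cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "died", "age", "mech_vent", "crp_high", "hcq_azithro"]],
    duration_col="time_to_event",  # days from admission to death or discharge
    event_col="died",              # 1 = in-hospital death, 0 = censored
)
cph.print_summary()  # hazard ratios with 95% confidence intervals
```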

    Large clones of pre-existing T cells drive early immunity against SARS-COV-2 and LCMV infection

    T cell responses precede antibody responses and may provide early control of infection. We analyzed the clonal basis of this rapid response following SARS-CoV-2 infection. We applied T cell receptor (TCR) sequencing to define the trajectories of individual T cell clones immediately after infection. In SARS-CoV-2 PCR+ individuals, a wave of TCRs expands strongly but transiently, frequently peaking in the same week as the first positive PCR test. These expanding TCR CDR3s were enriched for sequences functionally annotated as SARS-CoV-2 specific. Epitopes recognized by the expanding TCRs were highly conserved between SARS-CoV-2 strains but not shared with circulating human coronaviruses. Many expanding CDR3s were present at high frequency in pre-pandemic repertoires. Early-response TCRs specific for lymphocytic choriomeningitis virus epitopes were also found at high frequency in the preinfection naive repertoire. High-frequency naive precursors may allow T cells to respond rapidly during the crucial early phases of acute viral infection.
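
    To make the trajectory analysis concrete, here is a minimal, purely illustrative sketch of how per-clone frequency trajectories could be tracked and transiently expanding clones flagged from longitudinal repertoire samples. The input file, column names (subject, week, cdr3, count), and thresholds are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch of clone-trajectory tracking from TCR repertoire samples.
import pandas as pd

# Hypothetical long-format table: one row per (subject, week, CDR3) with a read count.
reps = pd.read_csv("tcr_repertoires.csv")

# Convert counts to within-sample frequencies.
totals = reps.groupby(["subject", "week"])["count"].transform("sum")
reps["freq"] = reps["count"] / totals

# One trajectory per clone: rows = (subject, cdr3), columns = weeks relative to first positive PCR.
traj = reps.pivot_table(index=["subject", "cdr3"], columns="week",
                        values="freq", fill_value=0.0)

# Flag clones that expand transiently: a large peak relative to their baseline
# (weeks <= 0, assumed to exist) that has decayed by the last sampled week.
# Thresholds are arbitrary illustration values.
baseline = traj.loc[:, [w for w in traj.columns if w <= 0]].mean(axis=1)
peak = traj.max(axis=1)
final = traj[traj.columns.max()]
transient = traj[(peak > 10 * (baseline + 1e-6)) & (final < 0.3 * peak)]
print(len(transient), "transiently expanding clones")
```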

    Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Background: Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones but still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user and, as a result, the user can concentrate on what he/she does, not on how he/she should do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training.
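
    The grasp-selection step lends itself to a small illustration. The sketch below shows one way a rule-based selector might map vision-derived object properties to a grasp type and size; the ObjectEstimate fields, grasp names, and thresholds are hypothetical and are not taken from the paper.

```python
# Hypothetical rule-based grasp selector, loosely in the spirit of the CVS.
from dataclasses import dataclass


@dataclass
class ObjectEstimate:
    width_mm: float    # estimated object width from the vision system
    height_mm: float   # estimated object height
    elongated: bool    # True if the object is long and thin


def select_grasp(obj: ObjectEstimate) -> tuple[str, str]:
    """Return (grasp_type, grasp_size) to pass to the embedded hand controller."""
    if obj.elongated and obj.width_mm < 30:
        grasp = "lateral"
    elif obj.width_mm < 40:
        grasp = "pinch"
    else:
        grasp = "palmar"
    size = "small" if obj.width_mm < 50 else "large"
    return grasp, size


print(select_grasp(ObjectEstimate(width_mm=25, height_mm=120, elongated=True)))
```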

    Multi-sensor controlled skills for humanoid robots

    The emerging new generation of robots employed in public as well as private environments is expected to interact with humans and should therefore be able to manage cooperative tasks and deal with a variety of different control problems. This requires intelligent coordination of the information coming from all the available sensors supervising the system and the environment. In this paper a supervisory control concept is briefly described, and based on this proposed scheme some human-like skills involving audio, visual and force sensors are presented.
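
    As a rough, assumption-laden sketch of what "supervisory coordination of sensor information" could look like at the software level, the snippet below routes events from audio, visual, and force sensors to registered skill handlers. The event and skill names are invented for illustration and do not come from the paper.

```python
# Illustrative sketch of a supervisory dispatcher for multi-sensor events.
from typing import Callable, Dict, List


class Supervisor:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def register(self, event_type: str, handler: Callable[[dict], None]) -> None:
        """Attach a skill handler to a sensor event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type: str, data: dict) -> None:
        """Forward a sensor event to every handler registered for it."""
        for handler in self._handlers.get(event_type, []):
            handler(data)


sup = Supervisor()
sup.register("speech_command", lambda d: print("start skill:", d["command"]))
sup.register("contact_force", lambda d: print("reduce speed, force =", d["newton"]))
sup.dispatch("speech_command", {"command": "hand over cup"})
sup.dispatch("contact_force", {"newton": 12.5})
```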

    Vision controlled grasping by means of an intelligent robot hand

    In this paper a new visual servoing concept for dynamic grasping with humanoid robots is presented. It relies on a stereo camera in the robot head for wide-range observation, in combination with a miniaturized close-range camera integrated into a five-finger hand. By optimally fusing the information from both cameras with a fuzzy decision-making algorithm, robust visually controlled grasping of objects is achieved even in the case of disturbed signals or dynamic obstacles.
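
    A minimal sketch of this kind of fusion, under simplifying assumptions: two position estimates (head stereo camera vs. in-hand close-range camera) are blended with a fuzzy weight that favors the in-hand camera as the object gets closer. The membership function, distances, and coordinates are made-up illustration values, not the paper's algorithm.

```python
# Illustrative fuzzy-weighted fusion of two camera-based position estimates.
import numpy as np


def nearness(distance_m: float) -> float:
    """Fuzzy membership 'object is near the hand', clipped to [0, 1]."""
    return float(np.clip((0.4 - distance_m) / 0.3, 0.0, 1.0))


def fuse(p_stereo: np.ndarray, p_hand: np.ndarray, distance_m: float) -> np.ndarray:
    """Weight the in-hand camera more strongly the closer the object is."""
    w_hand = nearness(distance_m)
    return w_hand * p_hand + (1.0 - w_hand) * p_stereo


p_stereo = np.array([0.52, 0.10, 0.30])  # object position from head stereo (m)
p_hand = np.array([0.50, 0.12, 0.31])    # object position from finger camera (m)
print(fuse(p_stereo, p_hand, distance_m=0.15))
```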

    Adaptive predictive gaze control of a redundant humanoid robot head

    A general concept for the gaze control of a redundant humanoid robot head is presented. It is based on an adaptive Kalman filter that predicts the next state of the moving target, processing the position information provided by a head-mounted stereo camera. The trajectory tracking control at the task level combines a proportional feedback and a feedforward term. The gains of both control actions are adapted in order to provide optimal dynamic response for unknown arbitrary target trajectories. Inverse differential kinematics is evaluated so that human-like joint motions are achieved. To exploit kinematic redundancy, a weighted pseudoinverse is realized that takes into account different optimization criteria. Additional self-motions of the head are also considered. Experimental results on the head of the humanoid robot ARMAR-III are presented. © 2011 IEEE
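
    Two of the ingredients above, one-step-ahead target prediction and redundancy resolution with a weighted pseudoinverse, can be sketched compactly. The code below is a simplified illustration with made-up dimensions, gains, and a placeholder Jacobian; the noise covariances are fixed here, whereas the paper's filter adapts them, and the feedforward/feedback trajectory-tracking terms are omitted.

```python
# Simplified sketch: constant-velocity Kalman prediction + weighted pseudoinverse IK.
import numpy as np

dt = 0.02                                      # control period (s), assumed
F = np.block([[np.eye(3), dt * np.eye(3)],     # state = [position, velocity]
              [np.zeros((3, 3)), np.eye(3)]])
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # stereo camera measures position only
Q = 1e-3 * np.eye(6)                           # process noise (tuning parameter)
R = 1e-2 * np.eye(3)                           # measurement noise (tuning parameter)

x = np.zeros(6)                                # state estimate
P = np.eye(6)                                  # estimate covariance


def kalman_step(z: np.ndarray) -> np.ndarray:
    """One predict/update cycle; returns the predicted next target position."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update with measurement z
    P = (np.eye(6) - K @ H) @ P
    return (F @ x)[:3]                               # one-step-ahead position


def weighted_pinv(J: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Weighted pseudoinverse J# = W^-1 J^T (J W^-1 J^T)^-1."""
    Winv = np.linalg.inv(W)
    return Winv @ J.T @ np.linalg.inv(J @ Winv @ J.T)


pred = kalman_step(np.array([0.5, 0.1, 1.2]))        # predicted gaze target
J = np.random.rand(3, 7)                             # stand-in for a 3x7 head Jacobian
W = np.diag([1, 1, 1, 2, 2, 5, 5])                   # penalize some joints more
qdot = weighted_pinv(J, W) @ np.array([0.1, 0.0, 0.05])  # joint velocities
```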

    Human-like reflexes for robotic manipulation using leaky integrate-and-fire neurons

    In this paper we present an approach for transferring human-like reflex behavior to robots by utilizing leaky integrate-and-fire neurons. A key aspect of the acceptance of robots in general, and of humanoid robots in particular, which are even closer to people's daily lives, is their appearance and how they act and move in human-centered environments. Safety strategies in particular are crucial for a widespread acceptance of these machines. In our work we target this safety aspect by approaching the issue from the perspective of how humans respond to external stimuli. To achieve such human-like reflexes, a general reflex unit based on special variants of the leaky integrate-and-fire neuron model has been built. Instances of this reflex unit are adapted to specific reflex types and connected to form dependent reflex behaviors. The concept of these neural structures and its evaluation by means of several experiments are presented in this paper. The results are discussed in detail, and future aspects of our ongoing work are addressed.
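
    As a point of reference for the neuron model mentioned above, here is a minimal, self-contained leaky integrate-and-fire simulation. The parameters, the step stimulus, and the reflex-triggering comment are illustrative assumptions, not values or variants from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation.
import numpy as np

tau = 0.02        # membrane time constant (s), assumed
v_rest = 0.0      # resting potential
v_thresh = 1.0    # spike threshold
v_reset = 0.0     # reset potential after a spike
dt = 0.001        # integration step (s)


def simulate(input_current: np.ndarray) -> np.ndarray:
    """Integrate the membrane potential and return a 0/1 spike train."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for i, current in enumerate(input_current):
        dv = (-(v - v_rest) + current) / tau   # leaky integration
        v += dv * dt
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes[i] = 1.0
            v = v_reset
            # a reflex behavior could be triggered here, e.g. above a spike rate
    return spikes


stimulus = np.concatenate([np.zeros(100), 30.0 * np.ones(200)])  # step input
print(int(simulate(stimulus).sum()), "spikes")
```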

    Mobile experimental platform for the development of environmentally interactive control algorithms towards the implementation on a walking humanoid robot

    In the framework of the long-term Collaborative Research Center SFB 588 "Humanoid Robots", the robot ARMAR III has to manage different basic skills in a highly dynamic environment. These skills mainly take place in the area close to the robot and primarily involve its upper body, including head, arms and hands. During the last project phase, however, development has focused mainly on basic skills that have to be accomplished over a wider range and also require walking capability (e.g. carrying a tray). In order to investigate such new environmentally interactive control algorithms in advance, an adequate development platform is required. The mobile experimental platform developed with this goal at Fraunhofer IOSB consists of an upper body comprising various existing robot components such as torso, head, arms and hands, as well as a lower body represented by a mobile robot. A Stewart platform then provides the functionality of a human hip connecting upper and lower body. This hexapod structure is the central element of the proposed platform because it is responsible for a motion of the upper body that emulates the oscillations of a human gait. Therefore, the most important steps of its design process are discussed in this paper. In order to demonstrate the feasibility of the platform, a case study optimizing the gaze control of a humanoid robot head is presented.
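
    To give a sense of what emulating gait-induced upper-body oscillations could look like at the command level, the sketch below generates a simple sinusoidal pose offset for the hexapod hip. The amplitudes, step frequency, and pose representation are assumptions for illustration only, not the platform's actual trajectory generator.

```python
# Illustrative sinusoidal "hip" pose pattern imitating gait-induced oscillations.
import numpy as np

step_freq_hz = 1.8        # assumed step frequency
bob_amp_m = 0.02          # vertical oscillation amplitude (one bob per step)
sway_amp_m = 0.03         # lateral oscillation amplitude (one sway per stride)
roll_amp_rad = 0.03       # small roll oscillation


def hip_pose(t: float) -> dict:
    """Upper-body pose offset (x, y, z, roll, pitch, yaw) at time t."""
    w = 2.0 * np.pi * step_freq_hz
    return {
        "x": 0.0,
        "y": sway_amp_m * np.sin(0.5 * w * t),   # sway at half the step rate
        "z": bob_amp_m * np.sin(w * t),          # vertical bob at the step rate
        "roll": roll_amp_rad * np.sin(0.5 * w * t),
        "pitch": 0.0,
        "yaw": 0.0,
    }


for t in np.arange(0.0, 1.0, 0.25):
    print(round(t, 2), hip_pose(t))
```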