
    Interaction modes between nanosized graphene flakes and liposomes: adsorption, insertion and membrane fusion

    Background: Understanding the effects of graphene-based nanomaterials on lipid membranes is critical to determining their environmental impact and their efficiency in the biomedical context. Graphene has been reported to interact favourably with biological and model lipid membranes. Methods: We report a systematic coarse-grained molecular dynamics study of the interaction modes of nanometric graphene flakes with POPC/cholesterol liposome membranes. We simulated graphene layers with a variety of sizes and oxidation degrees, and analyzed the trajectories, the interaction modes, and the energetics of the observed phenomena. Results: Three interaction modes are reported. Graphene can be transiently adsorbed onto the liposome membrane and/or inserted into its hydrophobic region. Inserted nanosheets prefer a perpendicular orientation, and tilt in order to maximize contact with phospholipid tails while avoiding contact with cholesterol molecules. When placed between two liposomes, graphene facilitates their fusion into a single vesicle. Conclusions: Graphene can be temporarily adsorbed on the liposome before insertion. Bilayer curvature influences the orientation of inserted graphene particles. Cholesterol molecules are depleted from the surroundings of graphene particles. Graphene layers may catalyse membrane fusion by bypassing the energy barrier required for stalk formation. General significance: Nanometric graphene layers can be adsorbed onto or inserted into lipid-based membranes in different manners and can affect the cholesterol distribution in the membrane, with important consequences for the structure and functionality of biological cell membranes and for the bioaccumulation of graphene in living organisms. The graphene-mediated mechanism opens new possibilities for vesicle fusion in the experimental context.

    AATR an ionospheric activity indicator specifically based on GNSS measurements

    This work reviews an ionospheric activity indicator useful for identifying disturbed periods that affect the performance of Global Navigation Satellite Systems (GNSS). This index is based on the Along-Arc TEC Rate (AATR) and can be easily computed from dual-frequency GNSS measurements. The AATR indicator has been assessed over more than one solar cycle (2002–2017), involving about 140 receivers distributed world-wide. Results show that it correlates well with ionospheric activity and, unlike other global indicators linked to geomagnetic activity (e.g. Dst or Ap), it is sensitive to the regional behaviour of the ionosphere and identifies specific effects on GNSS users. Moreover, a dedicated analysis of Satellite-Based Augmentation System (SBAS) performance under different ionospheric conditions shows that the AATR indicator is a very suitable means of revealing whether SBAS service availability anomalies are linked to the ionosphere. On this account, the AATR indicator has been selected as the metric to characterise ionospheric operational conditions in the frame of the European Space Agency activities on the European Geostationary Navigation Overlay System (EGNOS). The AATR index has also been adopted as a standard tool by the International Civil Aviation Organization (ICAO) for joint ionospheric studies in SBAS. In this work we explain how the AATR is computed, paying special attention to cycle-slip detection, which is one of the key issues in the AATR computation and is not fully addressed in other indicators such as the Rate Of change of the TEC Index (ROTI). We then present some of the main conclusions about ionospheric activity that can be extracted from the AATR values during the above-mentioned long-term study. These conclusions are: (a) the different spatial correlations related to the MOdified DIP (MODIP), which allow high-, mid- and low-latitude regions to be clearly separated; (b) the large spatial correlation in mid-latitude regions, which allows a planetary index, similar to the geomagnetic ones, to be defined; (c) the seasonal dependency, which is related to longitude; (d) the variation of the AATR value at different time scales (hourly, daily, seasonal, among others), which confirms most of the well-known time dependences of ionospheric events; and finally (e) the relationship with space weather events.
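    The along-arc rate computation described above can be sketched in a few lines. This is a minimal illustration only, not the operational ESA/ICAO implementation: the single-layer mapping-function normalisation, the ionospheric shell height, and the arc handling are assumptions, and the sketch presumes cycle-slip-free input (the paper stresses that slip detection is the hard part).

```python
import numpy as np

def aatr_index(stec, elev, dt=30.0, re=6371.0, h=450.0):
    """Sketch of an Along-Arc TEC Rate (AATR) style index.

    stec : slant TEC samples along one continuous, cycle-slip-free
           phase arc, in TECU
    elev : satellite elevation per sample, in radians
    dt   : sampling interval in seconds
    The rate is normalised by the squared single-layer mapping
    function (shell height h, in km) so slant and vertical
    geometries are comparable; the operational normalisation
    may differ.
    """
    stec = np.asarray(stec, dtype=float)
    elev = np.asarray(elev, dtype=float)
    # single-layer ionospheric mapping function
    m = 1.0 / np.sqrt(1.0 - (re / (re + h) * np.cos(elev)) ** 2)
    rate = np.diff(stec) / dt          # TECU per second along the arc
    norm = rate / m[1:] ** 2           # map toward the vertical
    # RMS over the arc gives the activity indicator
    return float(np.sqrt(np.mean(norm ** 2)))
```

For a zenith-looking arc with a constant TEC drift, the index simply returns the absolute drift rate, which makes the units (TECU per second here) easy to check.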

    Remote Programming of Multirobot Systems within the UPC-UJI Telelaboratories: System Architecture and Agent-Based Multirobot Control

    One of the areas that needs further improvement within Internet-based E-Learning environments is allowing students to access and practice real experiments in a real laboratory instead of using simulations [1]; a big effort is required here if progress is to be made. Real laboratories allow students to acquire methods, skills and experience related to real equipment, in a manner that is very close to the way such equipment is used in industry. The purpose of the project is the study, development and implementation of an E-Learning environment that allows undergraduate students to practice subjects related to Robotics and Artificial Intelligence. The system, which is now at a preliminary stage, will allow remote experimentation with real robotic devices (i.e. robots, cameras, etc.). It will enable students to learn collaboratively (through remote participation with other students), combining on-site activities (performed in-situ within the real lab during the normal practical sessions) with on-line ones (performed remotely from home via the Internet). Moreover, the remote experiments within the E-Laboratory to control the real robots can be performed by both students and scientists. This project is under development and is carried out jointly by two universities (UPC and UJI). In this article we present the system architecture and the way students and researchers have been able to perform remote programming of multirobot systems via the web.

    Multi-Sensor Localization and Navigation for Remote Manipulation in Smoky Areas

    When localizing mobile sensors and actuators in indoor environments, laser meters, ultrasonic meters or even image processing techniques are usually used. On the other hand, in smoky conditions, due to a fire or building collapse, once the smoke or dust density grows, optical methods are no longer efficient. In these scenarios other types of sensors must be used, such as sonar, radar or radiofrequency signals. Indoor localization in low-visibility conditions due to smoke is one of the goals of the EU GUARDIANS project [1]. The developed method aims to position a robot in front of doors, fire extinguishers and other points of interest with enough accuracy to allow a human operator to manipulate the robot's arm in order to actuate on the element. For coarse-grained localization, a fingerprinting technique based on ZigBee and WiFi signals is used, allowing the robot to navigate inside the building in order to get near the point of interest that requires manipulation. For fine-grained localization a remotely controlled, programmable high-intensity LED panel is used, which acts as a reference for the system in smoky conditions. Then, smoke detection and visual fine-grained localization are used to position the robot precisely at the manipulation point (e.g., doors, valves, etc.).
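    The coarse-grained fingerprinting step can be illustrated with a minimal nearest-neighbour sketch. The location names, access-point identifiers and RSSI values below are hypothetical, and the real GUARDIANS system is certainly more elaborate (filtering, probabilistic matching), but the core idea is matching an observed signal-strength vector against a calibration database.

```python
import math

def locate(fingerprints, observed):
    """Return the calibration point whose stored RSSI vector is
    closest (Euclidean distance over shared access points) to the
    currently observed WiFi/ZigBee signal strengths.

    fingerprints : {location_name: {ap_id: rssi_dbm}}
    observed     : {ap_id: rssi_dbm}
    """
    def dist(stored):
        shared = set(stored) & set(observed)
        if not shared:
            return float("inf")  # no common AP: cannot compare
        return math.sqrt(sum((stored[a] - observed[a]) ** 2 for a in shared))

    return min(fingerprints, key=lambda loc: dist(fingerprints[loc]))

# hypothetical calibration database and live reading
db = {
    "door_A": {"ap1": -40, "ap2": -70},
    "extinguisher_3": {"ap1": -75, "ap2": -45},
}
print(locate(db, {"ap1": -42, "ap2": -68}))  # → door_A
```

In practice each fingerprint would average many RSSI samples per location, since instantaneous readings fluctuate strongly indoors.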

    A New Virtual Reality Interface for Underwater Intervention Missions

    Paper presented at IFAC-PapersOnLine, Volume 53, Issue 2, 2020, Pages 14600-14607. Nowadays, most underwater intervention missions are carried out with the well-known work-class ROVs (Remotely Operated Vehicles), equipped with teleoperated arms under human supervision. Thus, despite the appearance on the market of the first prototypes of the so-called I-AUVs (Autonomous Underwater Vehicles for Intervention), the more mature technology associated with ROVs continues to be trusted. In order to fill the gap between ROVs and incipient I-AUV technology, new research is under way in our laboratory. In particular, new HRI (Human-Robot Interaction) capabilities are being tested within a three-year Spanish coordinated project focused on cooperative underwater intervention missions. In this work new results are presented concerning a new user interface that includes immersion capabilities through Virtual Reality (VR) technology. It is worth noting that the new HRI module has been demonstrated through a pilot study in which the users had to solve specific tasks, with minimum guidance and instructions, following a simple Problem-Based Learning (PBL) scheme. Finally, although this is only a work in progress, the results obtained are promising regarding the friendly and intuitive characteristics of the developed HRI module. Thus, some critical aspects, like the reduction in complexity, training time and cognitive fatigue of the ROV pilot, now seem more manageable.

    Preliminary Work on a Virtual Reality Interface for the Guidance of Underwater Robots

    The need for intervention in underwater environments has increased in recent years, but there is still a long way to go before AUVs (Autonomous Underwater Vehicles) will be able to cope with really challenging missions. Nowadays, the solution adopted is mainly based on remotely operated vehicle (ROV) technology. These ROVs are controlled from support vessels using unnecessarily complex human-robot interfaces (HRI). It is therefore necessary to reduce the complexity of these systems to make them easier to use and to reduce the stress on the operator. In this paper, as part of the TWIN roBOTs for cooperative underwater intervention missions (TWINBOT) project, we present an HRI (Human-Robot Interface) module which includes virtual reality (VR) technology. In fact, this contribution is an improvement on a preliminary study in this field also carried out by our laboratory. Having made a concerted effort to improve usability, the HRI system for robot control tasks presented in this paper is substantially easier to use. In summary, the reliability and feasibility of this HRI module have been demonstrated through usability tests, including a very complete pilot study, which guarantee much friendlier and more intuitive properties in the final HRI module presented here.

    CompaRob: the shopping cart assistance robot

    Technology has recently been developed which offers an excellent opportunity to design systems able to help people in their own houses. In particular, assisting elderly people in their environments is something that can significantly improve their quality of life. However, helping elderly people outside their usual environment is also necessary, for daily tasks like shopping. In this paper we present a person-following shopping cart assistance robot, capable of helping elderly people carry products in a supermarket. First, the paper presents a survey of related systems that perform this task using different approaches, such as attachable modules and computer vision. After that, the paper describes the proposed system and its main features in detail. The cart uses ultrasonic sensors and radio signals to provide a simple and effective person localization and following method. Moreover, the cart can be connected to a portable device like a smartphone or tablet, thus providing ease of use to the end user. The prototype has been tested in a grocery store, while simulations have been done to analyse its scalability in larger spaces where multiple robots could coexist. This work was partly supported by the Spanish Ministry under Grant DPI2014-57746-C3 (MERBOTS Project) and by Universitat Jaume I Grants P1-1B2015-68 and PID2010-12
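    The abstract does not detail the ranging scheme, but a common way to combine ultrasonic and radio signals for person localization is to treat the (effectively instantaneous) radio pulse as a time reference for the much slower ultrasonic pulse, then drive the cart with a simple proportional controller toward a target following distance. The functions and gains below are hypothetical, not CompaRob's actual controller:

```python
def ultrasonic_distance(tof_s, speed_of_sound=343.0):
    """Range to the person's tag, assuming the tag emits a radio
    pulse and an ultrasonic pulse simultaneously: the arrival
    delay of the ultrasound (tof_s, in seconds) gives one-way
    travel time, hence distance in metres."""
    return tof_s * speed_of_sound

def follow_speed(distance_m, target_m=1.5, gain=0.8, v_max=1.0):
    """Proportional following controller (hypothetical gains):
    speed grows with the gap to the target distance and is
    saturated at the cart's maximum speed; negative values mean
    backing away when the person gets too close."""
    v = gain * (distance_m - target_m)
    return max(-v_max, min(v_max, v))
```

This RF-plus-ultrasound synchronization trick avoids any clock sharing between cart and tag, which is one reason such schemes stay simple and cheap.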

    Feasibility of precise navigation in high and low latitude regions under scintillation conditions

    Scintillation is one of the most challenging problems in Global Navigation Satellite System (GNSS) navigation. This phenomenon appears when the radio signal passes through ionospheric irregularities. These irregularities represent rapid changes in the refraction index and, depending on their size, they can also produce diffractive effects affecting the signal amplitude and eventually producing cycle slips. In this work, we show that the scintillation effects on the GNSS signal are quite different at low and high latitudes. For low-latitude receivers, the main effects, from the point of view of precise navigation, are the increase of the carrier phase noise (measured by σφ) and the fade in the signal intensity (measured by S4), which can produce cycle slips in the GNSS signal. With several examples, we show that the detection of these cycle slips is the most challenging problem for precise navigation: if these cycle slips are detected, precise navigation can be achieved in these regions under scintillation conditions. For high-latitude receivers the situation differs. In this region the size of the irregularities is typically larger than the Fresnel length, so the main effects are related to the fast change in the refractive index associated with the fast movement of the irregularities (which can reach velocities of up to several km/s). Consequently, the main effect on the GNSS signals is a fast fluctuation of the carrier phase (large σφ) but with a moderate fade in the amplitude (moderate S4). Therefore, as shown through several examples, fluctuations at high latitude usually do not produce cycle slips, the effect on the ionosphere-free combination is quite limited and, in general, precise navigation can also be achieved during strong scintillation conditions.

    Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets

    Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal Human-Robot Interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, by using, for example, object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on overcoming lack of visibility and partial occlusions. This solution has been validated during real interventions at the European Organization for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in supervised mode. The system increases the safety and efficiency of robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the workload and accumulated stress of the operator, enhancing the performance and safety of the mission.