
    Multi Agent Micromanipulation System

    In biotechnology, micromanipulation is widely used for purposes such as operating on genes and transferring biological materials into cells. In some experiments, such as biochemical assays, a large number of cells must be manipulated in a short time. We have developed an automatic micromanipulation system that operates under a stereoscopic microscope. The system carries out several processes, such as detection of the target, detection of the needle head, and motor control. By distributing these processes across several computers, the micromanipulation can be performed at high speed; as a result, cooperation among the computers becomes very important. In this paper, we propose a multi-agent micromanipulation system. First, we developed a multi-agent system that performs image processing, motor control, and management of the micromanipulation processes. Second, we made the computers operate cooperatively: each computer acts as a single agent, and several computers are connected over a local area network. The multi-agent micromanipulation system performed the micromanipulation at a practical rate through cooperation among the agents.
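    The abstract gives no implementation details; the sketch below merely illustrates the kind of LAN-based cooperation described, with a manager dispatching micromanipulation tasks (target detection, needle-head detection, motor control) to worker agents over TCP sockets. All names, addresses and the JSON message format are illustrative assumptions, not taken from the paper.

```python
import json
import socket

# Placeholder handler standing in for one of the paper's processes
# (target detection, needle-head detection, motor control).
def detect_target(payload):
    return {"target_xy": [120, 84]}  # dummy result for illustration

HANDLERS = {"detect_target": detect_target}

def worker_agent(host="0.0.0.0", port=5001):
    """One computer = one agent: serve tasks sent by the manager."""
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                request = json.loads(conn.recv(4096).decode())
                result = HANDLERS[request["task"]](request.get("payload"))
                conn.sendall(json.dumps(result).encode())

def dispatch(agent_addr, task, payload=None):
    """Manager side: send one task to an agent on the LAN, wait for the result."""
    with socket.create_connection(agent_addr) as conn:
        conn.sendall(json.dumps({"task": task, "payload": payload}).encode())
        return json.loads(conn.recv(4096).decode())

# e.g. dispatch(("192.168.0.11", 5001), "detect_target")
```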

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Depth Image Processing for Obstacle Avoidance of an Autonomous VTOL UAV

    We describe a new approach to stereo-based obstacle avoidance. The method analyzes the images of a stereo camera in real time and searches for a safe target point that can be reached without collision. The obstacle avoidance system is used by our unmanned helicopter ARTIS (Autonomous Rotorcraft Testbed for Intelligent Systems) and its simulation environment. It is optimized for this UAV, but not limited to aircraft systems.
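    The abstract does not spell out the search procedure, so the following is only a minimal sketch of one plausible reading: compute a dense disparity map from a rectified stereo pair with OpenCV, then choose the image-column band whose nearest obstacle is farthest away as a candidate safe target direction. The matcher settings, band width and safety threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def safe_heading(left_gray, right_gray, max_safe_disparity=24.0):
    """Pick the column band with the most free space in a rectified pair.

    Large disparity = close obstacle, so the band whose nearest obstacle
    has the smallest disparity offers the deepest free corridor.
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    nearest = disp.max(axis=0)  # closest valid match per image column

    # Average over bands roughly one vehicle-width wide (32 px here),
    # then take the band whose closest obstacle is farthest away.
    band = np.convolve(nearest, np.ones(32) / 32.0, mode="same")
    col = int(np.argmin(band))
    return col if band[col] < max_safe_disparity else None  # None: no safe gap
```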

    Autonomous vehicle guidance in unknown environments

    Benefiting from significant advances in performance granted by technological evolution, autonomous vehicles are rapidly extending the range of fields in which they can be effectively applied. From operations in hostile, dangerous environments (military use in removing unexploded ordnance, surveying nuclear power and chemical industrial plants after accidents) to repetitive 24-hour tasks (border surveillance), from force multipliers helping in production to less exotic commercial applications in household activities (cleaning robots as consumer electronics products), the combination of autonomy and motion nowadays offers impressive options. In fact, an autonomous vehicle can be equipped with a number of sensors, actuators and devices that make it able to carry out a quite large variety of tasks. However, in order to successfully attain these results, the vehicle must be able to navigate its path through different, sometimes unknown environments. This is the goal of this dissertation: to analyze and, mainly, to propose a suitable solution for the guidance of autonomous vehicles.

    The frame in which this research takes its steps is the activity carried on at the Guidance and Navigation Lab of Sapienza – Università di Roma, hosted at the School of Aerospace Engineering. Indeed, the proposed solution has an intrinsic, though not limiting, bias towards possible space applications, which will become obvious in some of the following content. A second bias dictated by the Lab's activities is the choice of a sample platform: it would be difficult to perform a meaningful study at a very general level, independent of the characteristics of the targeted kind of vehicle, since the rough list of applications cited above shows that these characteristics are extremely varied. The Lab hosted – even before the beginning of this thesis activity – a simple, home-designed and home-built model of a small yet sufficiently capable autonomous vehicle called RAGNO (Rover for Autonomous Guidance Navigation and Observation): it was an obvious choice to select that rover as the reference platform for identifying guidance solutions, and to use it, while contributing to its improvement, for the test activities that must be considered mandatory in this kind of thesis work to validate the suggested approaches.

    The thesis includes four main chapters, plus the introduction, final remarks with future perspectives, and the list of references. The first chapter (“Autonomous Guidance Exploiting Stereoscopic Vision”) investigates in detail the technique deemed most interesting for small vehicles. The current availability of low-cost, high-performance cameras suggests the adoption of stereoscopic vision as a quite effective technique, also capable of making available to a remote crew a view of the scenario quite similar to the one humans would have. Several advanced image-analysis techniques have been investigated for the extraction of features from the left- and right-eye images, with the SURF and BRISK algorithms selected as the most promising ones. In short, SURF is a blob detector with an associated descriptor of 64 elements; the generic feature is extracted by applying sequential box filters to the surrounding area, and features are localized at the points of the image where the determinant of the Hessian matrix H(x,y) is maximum. The descriptor vector is then determined by calculating the Haar wavelet response over a sampling pattern centred on the feature. BRISK is instead a corner detector with an associated binary descriptor of 512 bits; the generic feature is identified as the brightest point in a circular sampling area of N pixels, while the descriptor vector is calculated by computing the brightness gradient of each of the N(N-1)/2 pairs of sampling points. Once the left and right features have been extracted, their descriptors are compared in order to determine the corresponding pairs; the matching criterion consists in seeking the two descriptors whose relative distance (Euclidean norm for SURF, Hamming distance for BRISK) is minimum. The matching process is computationally expensive: to reduce the required time, the thesis successfully exploited epipolar geometry, based on the geometric constraint existing between the left and right projections of a scene point P, which limits the space to be searched. Overall, the selected techniques require between 200 and 300 ms on a 2.4 GHz CPU for feature extraction and matching in a single (left + right) capture, making this a feasible solution for slow-moving vehicles. Once the matching phase has been finalized, a disparity map can be prepared highlighting the positions of the identified objects, and by means of triangulation (the baseline between the two cameras is known, and the size of the targeted object is measured in pixels in both images) the position and distance of the obstacles can be obtained; a sketch of this pipeline is given below.
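    As an illustration of the pipeline just described, here is a minimal sketch using OpenCV's BRISK detector, brute-force Hamming matching, the rectified-pair epipolar constraint (corresponding features lie on the same row), and the standard stereo relation Z = f·B/d. The focal length and baseline values are illustrative assumptions; the thesis implementation may differ in detail.

```python
import cv2

def stereo_depths(img_left, img_right, focal_px=700.0, baseline_m=0.12):
    """BRISK features + Hamming matching on a rectified stereo pair,
    then depth from the standard relation Z = f * B / disparity."""
    brisk = cv2.BRISK_create()
    kp_l, des_l = brisk.detectAndCompute(img_left, None)
    kp_r, des_r = brisk.detectAndCompute(img_right, None)

    # Binary descriptors -> Hamming distance; crossCheck keeps only
    # mutual best matches, a cheap form of outlier rejection.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    depths = []
    for m in matches:
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        if abs(yl - yr) > 2.0:   # epipolar constraint: same row when rectified
            continue
        disparity = xl - xr
        if disparity > 1.0:      # in front of the cameras, not at infinity
            depths.append((xl, yl, focal_px * baseline_m / disparity))
    return depths  # (pixel x, pixel y, depth in metres) per matched feature
```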
    The second chapter (“A Vehicle Prototype and its Guidance System”) is devoted to the implementation of stereoscopic vision onboard a small test vehicle, the previously cited RAGNO rover. A description of the vehicle is included first – the chassis, the propulsion system with four electric motors driving the wheels, the good road performance attainable, and the commanding options, either fully autonomous, partly autonomous with remote monitoring, or fully remotely controlled via TCP/IP over mobile networks – with a focus on the different sensors that, depending on the scenario, can complement the stereoscopic vision system. The intelligence side of the guidance subsystem, exploiting the navigation information provided by the cameras, is then detailed. Two guidance techniques have been studied and implemented to identify the optimal trajectory in a field with scattered obstacles: artificial potential guidance, based on the Lyapunov approach, and the A-star algorithm, which seeks the minimum of a cost function built on graphs joining the cells of a mesh superimposed on the scenario. The performance of the two techniques is assessed for two specific test cases, and the possibility of unstable behaviour of the artificial potential guidance, bouncing among local minima, has been highlighted. Overall, A-star guidance is the suggested solution in terms of time, cost and reliability; a minimal sketch of grid-based A-star follows. Notice that, to cope with the noise affecting the information coming from the sensors, an estimation process based on Kalman filtering has also been included to improve the smoothness of the targeted trajectory.
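    Below is a minimal sketch of A-star guidance of the kind described, on a 4-connected occupancy grid with unit step costs and a Manhattan-distance heuristic; the grid representation is an illustrative assumption, not the thesis's actual data structure.

```python
import heapq

def a_star(grid, start, goal):
    """A-star on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:            # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:                 # rebuild the path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None

# e.g. a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
# -> [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```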
    The third chapter (“Examples of Possible Missions and Applications”) reports two experimental campaigns in which RAGNO was used for the detection of dangerous gases. In the first one, the rover accommodates a specific sensor and autonomously moves in open fields, avoiding possible obstacles, to take measurements at given time intervals. The same configuration of RAGNO is also used in the second campaign: this time, however, the path of the rover is autonomously computed on the basis of waypoints communicated by a drone flying above the measurement area and identifying possible targets of interest.

    The fourth chapter (“Guidance of Fleets of Autonomous Vehicles”) builds on this successful idea of a fleet of vehicles and numerically investigates, by means of algorithms purposely written in Matlab, the performance of a simple swarm of two rovers exploring an unknown scenario, taken – as an example – to represent a case of planetary surface exploration. The awareness of the surrounding environment is dictated by the characteristics of the sensors accommodated onboard, which have been assumed on the basis of the experience gained in the previous chapters. Moreover, the communication issues that would likely affect real-world cases are included in the scheme by modelling the communication link and by running the simulation in a multi-task configuration, where the two rovers are assigned to two different computer processes, each with a different TCP/IP address and a behaviour that actually depends on the flow of information received from the other explorer. Even if at simulation level only, this final step is deemed to bring together the different aspects investigated during the PhD period: feasible sensor characteristics (obviously focusing on stereoscopic vision), guidance techniques, coordination among autonomous agents and possible interesting application cases.

    Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    Background: The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomens upwards in a split second, producing Mexican-wave-like patterns.

    Results: Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and the orientation of the body-length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical, dz: towards and away from the comb) of individual bees over time.

    Conclusions: The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied at the individual-bee level to distinguish active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters.
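    As a minimal illustration of the final steps of such a pipeline, the sketch below triangulates one tracked bee's thorax from a calibrated stereo pair with OpenCV and differences the consecutive 3D positions into the (dx, dy, dz) motion components. The projection matrices and pixel tracks are assumed to be given; all names are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def motion_components(P_left, P_right, track_left, track_right):
    """Triangulate one tracked bee's thorax over T frames and return
    per-frame displacements (dx, dy, dz) in world units.

    P_left, P_right   : 3x4 camera projection matrices (from calibration)
    track_left/right  : 2xT pixel coordinates of the same bee in each view
    """
    pts4 = cv2.triangulatePoints(P_left, P_right, track_left, track_right)
    xyz = (pts4[:3] / pts4[3]).T    # homogeneous -> (T, 3) world coordinates
    return np.diff(xyz, axis=0)     # rows: (dx, dy, dz) per frame step
```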

    Vision-based interaction within a multimodal framework

    Our contribution is to the field of video-based interaction techniques and is integrated into the home environment of the EMBASSI project. This project addresses innovative methods of man-machine interaction achieved through the development of intelligent assistance and anthropomorphic user interfaces. Within this project, multimodal techniques are a basic requirement, especially those related to the integration of modalities. We use a stereoscopic approach to allow the natural selection of devices via pointing gestures. The pointing hand is segmented from the video images, and the 3D position and orientation of the forefinger are calculated. This modality is subsequently integrated with speech in the context of a multimodal interaction infrastructure. In a first phase, we use semantic fusion with amodal input, considering the modalities in a so-called late-fusion state.
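    The abstract does not detail the fusion logic; the sketch below only illustrates late semantic fusion under assumed inputs: a 3D forefinger position and pointing direction from the vision channel, a parsed command from the speech channel, and a lookup of which known device the pointing ray passes closest to. Device names, positions and the threshold are hypothetical.

```python
import numpy as np

DEVICES = {"tv": np.array([2.0, 1.0, 3.0]),      # known 3D device positions
           "lamp": np.array([-1.0, 1.5, 2.0])}   # (illustrative coordinates)

def pointed_device(finger_pos, finger_dir, max_offset=0.3):
    """Pick the device whose centre lies closest to the pointing ray."""
    d = finger_dir / np.linalg.norm(finger_dir)
    best, best_dist = None, max_offset
    for name, pos in DEVICES.items():
        v = pos - finger_pos
        t = float(v @ d)
        if t <= 0:                    # device is behind the pointing hand
            continue
        dist = float(np.linalg.norm(v - t * d))  # ray-to-point distance
        if dist < best_dist:
            best, best_dist = name, dist
    return best

def fuse(finger_pos, finger_dir, spoken_command):
    """Late fusion: the gesture supplies the referent, speech the action."""
    device = pointed_device(finger_pos, finger_dir)
    return {"device": device, "action": spoken_command} if device else None

# e.g. fuse(np.array([0., 1., 0.]), np.array([1., 0., 1.5]), "switch on")
# -> {"device": "tv", "action": "switch on"}
```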

    Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation

    Creating photo-realistic versions of sketched portraits of people is useful for various entertainment purposes. Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid. In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating stereoscopic, 3D-aware portraits from simple contour sketches by involving 3D generative models. Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model. Specifically, our region-aware volume rendering strategy and global consistency constraint further enhance detail correspondences during sketch encoding. Moreover, in order to facilitate use by layman users, we propose a Contour-to-Sketch module with vector-quantized representations, so that easily drawn contours can directly guide the generation of 3D portraits. Extensive comparisons show that our method generates high-quality results that match the sketch. Our usability study verifies that our system is greatly preferred by users. Comment: Project page: https://hangz-nju-cuhk.github.io

    Obstacle detection for autonomous systems using stereoscopic images and bacterial behaviour

    This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy is intended for autonomous service robots that navigate in unknown and dynamic indoor environments. In addition to human interaction, these environments are characterized by a design created for human beings, which is why our developments seek morphological and functional similarity to the human model. We use a pair of cameras on our robot to achieve stereoscopic vision of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behaviour. The algorithm was evaluated on our robotic platform, demonstrating high performance in locating obstacles and real-time operation.
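    The abstract does not specify the algorithm; one common reading of "bacterial behaviour" is chemotaxis-style run-and-tumble search, sketched below for block matching along a scanline of a rectified stereo pair: the search keeps "running" through neighbouring disparities while the matching cost improves and "tumbles" to a random disparity when it does not. This is an illustrative stand-in, not the authors' method.

```python
import random
import numpy as np

def match_cost(left, right, x, y, d, w=5):
    """Sum of absolute differences between a patch around (x, y) in the
    left image and the patch shifted by disparity d in the right image.
    Assumes x - d - w >= 0, i.e. the pixel is far enough from the border."""
    lp = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.float32)
    rp = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(np.float32)
    return float(np.abs(lp - rp).sum())

def run_and_tumble_disparity(left, right, x, y, d_max=64, steps=200):
    """Chemotaxis-like search for the lowest-cost disparity at pixel (x, y):
    keep stepping in the current direction while the cost improves ("run"),
    jump to a random disparity when it does not ("tumble")."""
    d = random.randint(1, d_max)
    step = random.choice((-1, 1))
    best_d, best_cost = d, match_cost(left, right, x, y, d)
    for _ in range(steps):
        nd = min(max(d + step, 1), d_max)
        cost = match_cost(left, right, x, y, nd)
        if cost < best_cost:
            best_d, best_cost, d = nd, cost, nd  # run: keep the direction
        else:
            d = random.randint(1, d_max)         # tumble: random restart
            step = random.choice((-1, 1))
    return best_d  # obstacle distance then follows from Z = f * B / best_d
```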