
    Design of autonomous robotic system for removal of porcupine crab spines

    Among the various types of crabs, the porcupine crab is recognized as a promising crab meat resource in the offshore northwest Atlantic Ocean. However, its long, sharp spines make it difficult to handle manually. Although automation technology is widely employed in the commercial seafood processing industry, manual processing methods still dominate crab processing today, causing low production rates and high manufacturing costs. This thesis proposes a novel robot-based porcupine crab spine removal method. Based on the 2D image and 3D point cloud data captured by a Microsoft Azure Kinect RGB-D camera, the crab's 3D point cloud model is reconstructed using the proposed point cloud processing method. A novel point cloud slicing method and a combined 2D image and 3D point cloud method are then proposed to generate the robot's spine removal trajectory. A 3D model of the crab with actual dimensions, the robot working cell, and the end-effector are built in SolidWorks [1] and imported into the Robot Operating System (ROS) [2] simulation environment for methodology validation and design optimization. The simulation results show that both the point cloud slicing method and the 2D/3D combination method can generate a smooth and feasible trajectory. Moreover, the 2D/3D combination method is more precise and efficient than the point cloud slicing method, as validated in a real experimental environment. An automated experiment platform, featuring a 3D-printed end-effector and crab model, was successfully set up. The experiments indicate that the crab model can be accurately reconstructed, and the centerline equation of each spine was calculated to generate a spine removal trajectory. Upon execution with a real robot arm, all spines were removed successfully. This thesis demonstrates the proposed method's capability to achieve the expected results and its potential for application in manufacturing processes such as painting, polishing, and deburring of parts with different shapes and materials.
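    The slicing idea lends itself to a compact illustration. The following is a minimal Python/NumPy sketch of the general point-cloud-slicing concept, cutting the cloud into thin layers along one axis and using per-layer centroids as trajectory waypoints; the thesis's actual slicing and trajectory-generation algorithms are not reproduced, and the layer thickness, axis choice, and synthetic cone data are illustrative assumptions.

```python
# Minimal sketch of point-cloud slicing: cut the cloud into thin layers
# along one axis and use each layer's centroid as a trajectory waypoint.
# Layer thickness and axis are illustrative, not the thesis's parameters.
import numpy as np

def slice_trajectory(points: np.ndarray, axis: int = 2, thickness: float = 2.0):
    """points: (N, 3) array in mm; returns an (M, 3) array of waypoints."""
    lo, hi = points[:, axis].min(), points[:, axis].max()
    edges = np.arange(lo, hi + thickness, thickness)
    waypoints = []
    for z0, z1 in zip(edges[:-1], edges[1:]):
        layer = points[(points[:, axis] >= z0) & (points[:, axis] < z1)]
        if len(layer) > 0:
            waypoints.append(layer.mean(axis=0))  # centroid of this slice
    return np.asarray(waypoints)

# Example: a synthetic conical "spine" sampled as a point cloud.
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 30.0, 5000)
r = (30.0 - z) / 10.0  # radius tapers toward the tip
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])
print(slice_trajectory(cloud).shape)  # roughly (15, 3) waypoints along the spine
```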

    A robotic platform for precision agriculture and applications

    Agricultural techniques have been improved over the centuries to meet the growing demand of an increasing global population. Farming applications face new challenges in satisfying global needs, and recent technological advancements in robotic platforms can be exploited. Since orchard management is one of the most challenging applications, owing to the tree structure and the required interaction with the environment, it was targeted by the University of Bologna research group to provide a customized solution addressing a new concept for agricultural vehicles. This research has resulted in a new lightweight tracked vehicle capable of autonomous navigation both in open-field scenarios and while travelling inside orchards, so-called in-row navigation. The mechanical design concept, together with the customized software implementation, is detailed to highlight the strengths of the platform, along with further improvements envisioned to raise overall performance. Static stability testing has shown that the vehicle can withstand steep slopes. Improvements have also been investigated to refine the estimation of the slippage that occurs during turning maneuvers and is typical of skid-steering tracked vehicles. The software architecture is implemented using the Robot Operating System (ROS) framework, exploiting community-available packages for common basic functions, such as sensor interfaces, while allowing dedicated custom implementations of the navigation algorithms developed. Real-world testing inside the university's experimental orchards has proven the robustness and stability of the solution over more than 800 hours of fieldwork. The vehicle has also enabled a wide range of autonomous tasks such as spraying, mowing, and in-field data collection. The collected data can be exploited to automatically estimate relevant orchard properties, such as fruit counting and sizing and canopy properties, and to support autonomous fruit harvesting with post-harvesting estimations.
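    For readers unfamiliar with skid-steering kinematics, the slippage issue mentioned above can be sketched as follows. This is a minimal Python illustration of a common slip-correction model that scales the track gauge by an empirically identified factor; the gauge and factor values are illustrative assumptions, not the platform's identified parameters.

```python
# Minimal sketch of skid-steering odometry with a slip correction factor.
# A tracked vehicle turns by track speed difference, but track slippage
# makes the nominal kinematic model overestimate the yaw rate. A common
# correction scales the track gauge by a factor alpha >= 1 identified
# experimentally. Values below are illustrative assumptions.
import math

TRACK_GAUGE = 0.60   # distance between track centers [m] (assumed)
ALPHA = 1.35         # slip-expanded effective gauge factor (assumed)

def body_twist(v_left: float, v_right: float):
    """Track speeds [m/s] -> (linear velocity [m/s], yaw rate [rad/s])."""
    v = 0.5 * (v_right + v_left)
    omega = (v_right - v_left) / (ALPHA * TRACK_GAUGE)  # slip-corrected
    return v, omega

# Example: a gentle right turn at 0.5 m/s forward speed.
v, omega = body_twist(0.55, 0.45)
print(f"v = {v:.2f} m/s, omega = {math.degrees(omega):.1f} deg/s")
```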

    Visual Guidance for Unmanned Aerial Vehicles with Deep Learning

    Unmanned Aerial Vehicles (UAVs) have been widely applied in both military and civilian domains. In recent years, the operation mode of UAVs has been evolving from teleoperation to autonomous flight. To fulfill the goal of autonomous flight, a reliable guidance system is essential. Since the combination of the Global Positioning System (GPS) and an Inertial Navigation System (INS) cannot sustain autonomous flight in situations where GPS is degraded or unavailable, computer vision has been widely explored as a primary method for UAV guidance. Moreover, GPS does not provide the robot with any information about the presence of obstacles. Stereo cameras have a complex architecture and need a minimum baseline to generate a disparity map. By contrast, monocular cameras are simple and require fewer hardware resources. Benefiting from state-of-the-art Deep Learning (DL) techniques, especially Convolutional Neural Networks (CNNs), a monocular camera is sufficient to extract mid-level visual representations, such as depth maps and optical flow (OF) maps, from the environment. Therefore, the objective of this thesis is to develop a real-time visual guidance method for UAVs in cluttered environments using a monocular camera and DL. The three major tasks performed in this thesis are: investigating the development of DL techniques and monocular depth estimation (MDE), developing real-time CNNs for MDE, and developing visual guidance methods on the basis of the developed MDE system. A comprehensive survey is conducted, covering Structure from Motion (SfM)-based methods, traditional handcrafted-feature-based methods, and state-of-the-art DL-based methods; it also investigates the application of MDE in robotics. Based on the survey, two CNNs for MDE are developed. In addition to promising accuracy, these two CNNs run at high frame rates (126 fps and 90 fps, respectively) on a single modest-power Graphics Processing Unit (GPU). For the third task, visual guidance for UAVs is first developed on top of the designed MDE networks. To improve the robustness of UAV guidance, OF maps are integrated into the developed visual guidance method. A cross-attention module is applied to fuse the features learned from the depth maps and OF maps. The fused features are then passed through a deep reinforcement learning (DRL) network to generate the policy for guiding the flight of the UAV. Additionally, a simulation framework is developed that integrates AirSim, Unreal Engine, and PyTorch. The effectiveness of the developed visual guidance method is validated through extensive experiments in the simulation framework.
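    The fusion step described above can be sketched compactly. The following is a minimal PyTorch illustration of cross-attention fusion between depth and optical-flow features; the thesis's actual architecture (feature dimensions, which stream provides the queries, the DRL policy head) is not specified here, so all names and sizes are illustrative assumptions.

```python
# Minimal sketch of cross-attention fusion of depth and optical-flow
# features. Which stream provides queries, the dimensions, and the
# residual/norm layout are illustrative assumptions.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, depth_feat, flow_feat):
        # depth features attend to optical-flow features (queries = depth)
        fused, _ = self.attn(query=depth_feat, key=flow_feat, value=flow_feat)
        return self.norm(depth_feat + fused)  # residual connection

# Example: 8x8 feature maps from each stream, flattened to 64 tokens.
depth = torch.randn(1, 64, 256)
flow = torch.randn(1, 64, 256)
fused = CrossAttentionFusion()(depth, flow)
print(fused.shape)  # torch.Size([1, 64, 256]), fed to the DRL policy head
```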

    Autonomous Camera-Based Navigation of Drones in Forests

    In recent years, autonomous drone flight has been an actively researched topic in both commercial and academic organizations. Most autopilots can fly autonomously in open areas where Global Navigation Satellite Systems (GNSS) are available. Inside dense forest environments, however, drone localization cannot rely on GNSS, and the drone also has to avoid obstacles in its path. The objective of this thesis was to design and implement a prototype of an autonomous drone flying under the canopy for boreal forest research purposes. To establish a starting point, a literature survey of available open-source solutions was performed. Based on the survey, EGO-Planner-v2 with VINS-Fusion localization and stereo depth-camera-based mapping was chosen as the base of the implemented prototype. The system was tested both in a simulator and in real forest environments with custom drone hardware. The performance of the system, and its suitability for boreal forest environments, was evaluated based on mission success, reliability of the obstacle avoidance, and accuracy of the localization. The results were promising in sparse forests: in a sparse mixed forest, eight of nine flights were successful, with flight distances between approximately 13 m and 18 m. In dense forests, however, the sensing of small needleless branches needs to be improved to increase reliability: in a dense spruce forest, nine of 19 test flights were successful, with flight distances between approximately 35 m and 80 m. In the longest test flight, approximately 80 m long, the error of the VINS-Fusion trajectory-length estimate was approximately 1 m.
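    As a side note on the trajectory-length evaluation, the reported error can be computed by accumulating segment lengths of the estimated position track and comparing against a measured reference. The minimal NumPy sketch below uses synthetic positions; parsing of the actual VINS-Fusion output is omitted, and the reference length is an assumed measured value.

```python
# Minimal sketch of the trajectory-length evaluation: sum the lengths of
# consecutive position segments and compare against a measured reference.
import numpy as np

def trajectory_length(positions: np.ndarray) -> float:
    """positions: (N, 3) estimated positions [m] -> total path length [m]."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

# Example: a straight 80 m flight discretized into 1 m steps at 2 m altitude.
est = np.column_stack([np.linspace(0, 80, 81), np.zeros(81), np.full(81, 2.0)])
reference_length = 80.0  # measured flight distance [m] (assumed)
print(f"error = {abs(trajectory_length(est) - reference_length):.2f} m")
```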

    Creating and manipulating 3D paths with mixed reality spatial interfaces

    Mixed reality offers unique opportunities to situate complex tasks within spatial environments. One such task is the creation and manipulation of intricate, three-dimensional paths, which remains a crucial challenge in many fields, including animation, architecture, and robotics. This paper presents an investigation into the possibilities of spatially situated path creation using new virtual and augmented reality technologies and examines how these technologies can be leveraged to afford more intuitive and natural path creation. We present a formative study (n = 20) evaluating an initial path planning interface situated in the context of augmented reality and human-robot interaction. Based on the findings of this study, we detail the development of two novel techniques for spatially situated path planning and manipulation that afford intuitive, expressive path creation at varying scales. We describe a comprehensive user study (n = 36) investigating the effectiveness, learnability, and efficiency of both techniques when paired with a range of canonical placement strategies. The results of this study confirm the usability of these interaction metaphors and provide further insight into how spatial interaction can be discreetly leveraged to enable interaction at scale. Overall, this work contributes to the development of three-dimensional user interfaces (3DUIs) that expand the possibilities for situating path-driven tasks in spatial environments.

    Digital Inclusion of the Farming Sector Using Drone Technology

    Agriculture continues to be the primary source of income for most rural people in developing economies. The world's economy also relies strongly on agricultural products, which account for a large share of its exports. Despite its growing importance, agriculture still lags behind demand due to crop failures caused by bad weather and unmanaged insect problems. As a result, the quality and quantity of agricultural products are occasionally affected, reducing farm income. Crop failure could be predicted ahead of time, and preventative measures taken, by combining conventional farming practices with contemporary technologies such as agri-drones to address the difficulties plaguing the agricultural sector. Drones are unmanned aerial vehicles used for imaging, soil and crop surveillance, and a variety of other purposes in agriculture. Drone technology is now emerging for large-scale applications in agriculture. Although the technology is still in its infancy in developing nations, numerous researchers and businesses are working to make it easily accessible to the farming community to boost agricultural productivity.

    Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021

    This Open Access proceedings volume presents a good overview of the current research landscape of assembly, handling, and industrial robotics. The objective of the MHI Colloquium is successful networking at both the academic and management levels. The colloquium therefore focuses on academic exchange at a high level in order to disseminate the research results obtained, identify synergy effects and trends, connect the actors in person, and, in conclusion, strengthen the research field as well as the MHI community. In addition, there is the possibility to become acquainted with the organizing institute. The primary audience is formed by members of the scientific society for assembly, handling and industrial robotics (WGMHI).

    A UAV-Enabled Calibration Method for Remote Cameras Robust to Localization Uncertainty

    Several video applications rely on camera calibration, a key enabler for the measurement of metric parameters from images. For instance, monitoring environmental changes through remote cameras, such as changes in glacier size, or measuring vehicle speed from security cameras, requires the cameras to be calibrated. Calibrating a camera is necessary to implement accurate computer vision techniques for the automated analysis of video footage. This automated analysis saves cost and time in a variety of fields, such as manufacturing, civil engineering, architecture, and safety. The number of cameras installed and operated continues to increase, and a vast portion of them are "hard-to-reach" cameras: installed cameras, including remote sensing and security cameras, that cannot be removed from their location without affecting the camera parameters or the camera's operational use. Many of these cameras are not calibrated, and being able to calibrate them is a key need as applications for automated measurements from their video continue to grow. Existing calibration methods can be divided into two groups: object-based calibration, which relies on a calibration target of known dimensions, and self-calibration, which relies on camera motion or scene geometry constraints. However, these methods have not been adapted for use with remote cameras that are hard to reach and have large fields of view. Object-based calibration requires a tedious, manual process that is not suited to a large field of view, while self-calibration requires restrictive conditions to work correctly and thus does not scale to the many types of hard-to-reach cameras, with their widely varying parameters and viewing scenes. Based on this need, the research objective of this thesis is to develop a camera calibration method for hard-to-reach cameras. The method must satisfy a series of requirements arising from the remote status of the cameras being calibrated:
    • Be adapted to large fields of view, since these cameras cannot be accessed easily (which prevents the use of object-based calibration techniques)
    • Be scalable to various environments (which is not feasible using self-calibration techniques that require strict assumptions about the scene)
    • Be automated, to enable the calibration of the large number of already installed cameras
    • Be able to correct for the large non-linear distortion frequently present with these cameras
    In response to this need, the thesis proposes a solution that relies on a drone or a robot as a moving target to collect the matched 3D and 2D points required for calibration. The target's localization in 3D space and in the image is subject to errors, so the approach must be tested to evaluate its ability to calibrate cameras despite measurement uncertainties. This work demonstrates the success of the calibration approach using realistic simulations and real-world testing. The approach is robust against localization uncertainties; it is also environment-independent and highly automated, in contrast to existing calibration techniques. First, this work defines a drone trajectory that covers the entire field of view and enables a robust correspondence between 3D and 2D key points. The corresponding experiment evaluates the calibration quality while the 2D localization is subject to uncertainties. It demonstrates, using simulations for several cameras, that a moving target following this trajectory yields a complete training set and an accurate calibration, with an RMS reprojection error of 3.2 pixels on average. This error is smaller than the 3.6-pixel threshold derived in this thesis, which corresponds to an accurate calibration. Then, the drone design is modified to add a marker that improves target detection accuracy. Experiment 2 demonstrates the robustness of this solution in challenging conditions, such as complex environments for target detection. The modified drone design improves calibration accuracy, with an RMS reprojection error of 2.4 pixels on average, and remains detectable despite backgrounds or flight conditions that complicate target detection. This research also develops a strategy to evaluate the impact of camera parameters, drone path parameters, and 3D and 2D localization uncertainties on calibration accuracy. Applying this strategy to 5000 simulated camera models yields recommendations for the path parameters of the drone-based calibration approach and highlights the impact of camera parameters on calibration accuracy. It shows that specific sampling step lengths lead to better calibration and characterizes the relationship between the drone-camera distance and the accuracy. The RMS reprojection error was also evaluated for the 5000 cameras, with an average of 4 pixels. Linking this result to the speed-measurement application, a 4-pixel error corresponds to a speed measurement error smaller than 0.5 km/h when measuring the speed of a vehicle 15 meters away with a pinhole camera of focal length 900 pixels. The knowledge gained from these experiments is applied in a real-world test, which completes the demonstration of the drone-based camera calibration approach. The real test uses a commercial drone and GPS, in an urban environment with a challenging background, and shows the steps to follow to reproduce the drone-based remote camera calibration technique. The calibration error equals 7.7 pixels and can be reduced if an RTK GPS is used as the 3D localization sensor. Finally, this work demonstrates, using an optimization process for several simulated cameras, that the sampling size can be reduced by more than half for a faster calibration while maintaining good calibration accuracy.
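    The core of the approach, treating the drone's 3D positions and matched 2D detections as a single non-coplanar calibration target, can be sketched with OpenCV. The snippet below is a minimal illustration on synthetic, noise-free correspondences, not the thesis's pipeline; note that OpenCV's calibrateCamera requires an initial intrinsic guess for non-planar point sets, and all numeric values are illustrative assumptions (the 900-pixel focal length echoes the speed-measurement example above).

```python
# Minimal sketch of drone-as-moving-target calibration: 3D target positions
# (e.g. from GPS) and 2D image detections form one non-coplanar point set
# from which intrinsics and distortion are recovered. Synthetic, noise-free
# correspondences are generated here by projecting through a known camera.
import numpy as np
import cv2

rng = np.random.default_rng(1)
# "Drone" positions 10-30 m in front of the camera (world = camera frame here).
pts3d = np.column_stack([rng.uniform(-6.0, 6.0, 80),
                         rng.uniform(-3.5, 3.5, 80),
                         rng.uniform(10.0, 30.0, 80)]).astype(np.float32)
K_true = np.array([[900.0, 0.0, 640.0],
                   [0.0, 900.0, 360.0],
                   [0.0, 0.0, 1.0]])
dist_true = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # mild radial distortion
pts2d, _ = cv2.projectPoints(pts3d, np.zeros(3), np.zeros(3), K_true, dist_true)
pts2d = pts2d.astype(np.float32)

# Non-coplanar targets require CALIB_USE_INTRINSIC_GUESS and a rough K.
K_guess = np.array([[1000.0, 0.0, 640.0],
                    [0.0, 1000.0, 360.0],
                    [0.0, 0.0, 1.0]])
rms, K_est, dist_est, _, _ = cv2.calibrateCamera(
    [pts3d], [pts2d], (1280, 720), K_guess, None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print(f"RMS reprojection error: {rms:.3f} px")  # near zero on noise-free data
print(np.round(K_est, 1))                       # recovers the 900 px focal length
```

    With real data, the 3D positions carry GPS noise and the 2D detections carry detector noise, which is what drives the reported 2.4 to 7.7 pixel RMS errors.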

    Evaluation of Laban Effort Features based on the Social Attributes and Personality of Domestic Service Robots

    Today, it is not uncommon to see robots adopted in various domains and environments. From manufacturing facilities to households, robots take over numerous roles and tasks; the adoption of robotic vacuum cleaners, for instance, has increased drastically in recent decades. During their interaction with these embodied autonomous agents, humans tend to ascribe certain personality traits to them, even when the robot has a mechanoid appearance and very few degrees of freedom. As the social capabilities and persuasiveness of robots increase, designing robots with certain personality traits will become a significant design problem. Current advancements in AI and robotics will lead to the development of more realistic and persuasive robots in the foreseeable future. It is therefore crucial to understand people's judgments of robots' social attributes, since the findings can shape the future of personality and behavior design for social robots. Using only a simple, mono-functional robotic vacuum cleaner, this study investigates the impact of expressive motions on how people perceive the social attributes and personality of the robot. To this end, the framework of Laban Effort Features was modified to fit the needs and constraints of a robotic vacuum cleaner. Expressive motions were designed for a simple cleaning task performed by iRobot's Create 2. The four movement features controlled for the robot were path-planning behavior, radius of curvature at rotational turns, velocity, and vacuum power. Participants, recruited through the crowd-sourcing platform Amazon Mechanical Turk, were then asked to rate the personality and social attributes of the robot under several treatment conditions using a video-based online survey. The results indicated that people's ratings of the robot's personality and social attributes were influenced by its movement features. For social attributes, there were two main findings: first, velocity influenced the robot's ratings of warmth and competence; second, path-planning behavior influenced its ratings of competence and discomfort. For robot personality, there were three main findings: first, random path-planning behavior was associated with higher Neuroticism ratings; second, high velocity yielded higher Agreeableness ratings; third, vacuum power with a higher duty cycle yielded higher Agreeableness and Conscientiousness ratings. In conclusion, this study showed that the framework of Laban Effort Features can be adapted to the cleaning task of a domestic service robot, and that its application makes a difference in how humans perceive the personality and social attributes of the robot. These findings should be considered in human-robot interaction when incorporating expressive motions and affective behavior into domestic service robots.
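    As context for how the controlled motion features map onto low-level commands, the sketch below shows how a forward velocity and a radius of curvature translate into per-wheel speeds for a differential-drive robot such as the Create 2. The wheel-base value is approximate and the command values are illustrative assumptions, not parameters from the study.

```python
# Minimal sketch of differential-drive kinematics: a forward speed and a
# turn radius (two of the controlled movement features) determine the
# left/right wheel speeds. Wheel base is approximate for the Create 2.
WHEEL_BASE = 0.235  # Create 2 wheel separation [m] (approximate)

def wheel_speeds(v: float, turn_radius: float):
    """Forward speed v [m/s], signed turn radius [m] -> (left, right) [m/s]."""
    omega = v / turn_radius                 # yaw rate implied by the arc
    v_left = v - omega * WHEEL_BASE / 2.0
    v_right = v + omega * WHEEL_BASE / 2.0
    return v_left, v_right

# Example: a gentle left arc (r = 0.5 m) vs. a sharp one (r = 0.15 m).
print(wheel_speeds(0.3, 0.5))    # small speed difference -> smooth turn
print(wheel_speeds(0.3, 0.15))   # large difference -> abrupt, "hurried" turn
```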