7 research outputs found

    Robotic bin-picking: Benchmarking robotics grippers with modified YCB object and model set

    Robotic bin-picking is increasingly important in the order-picking process in intralogistics. However, many aspects of the robotic bin-picking process (object detection, grasping, manipulation) still require the research community's attention. Established methods are used to test robotic grippers, enabling comparability of results across the research community. This study presents a modified YCB Robotic Gripper Assessment Protocol that was used to evaluate the performance of four robotic grippers (two-fingered, vacuum, gecko, and soft gripper). During the testing, 45 objects from the modified YCB Object and Model Set, drawn from the packaging, tools, small objects, spherical objects, and deformable objects categories, were grasped and manipulated. The results of the gripper evaluation show that while some grippers performed substantially well, grasp success varies considerably across diverse objects. The results also indicate that selecting the object grasp point, in addition to selecting the most suitable gripper, is critical to successful grasping. We therefore propose determining the grasp point using mechanical software simulation of a two-fingered gripper model in an ADAMS/MATLAB co-simulation. Performing software simulations for this task can save time and gives results comparable to real-world experiments.
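    A minimal sketch of the grasp-point selection idea described above, written in Python rather than the ADAMS/MATLAB co-simulation used in the study: candidate grasp points are scored in simulation and the best-scoring point is chosen. The function stability_score below is a crude geometric stand-in (distance to the centre of mass) for the two-fingered gripper dynamics simulation, and all names and values are illustrative assumptions.

        import numpy as np

        def stability_score(grasp_point, centre_of_mass):
            # Crude stand-in: grasp points closer to the centre of mass score higher.
            # A real pipeline would instead run the two-fingered gripper model in simulation.
            return 1.0 / (1.0 + np.linalg.norm(np.asarray(grasp_point) - np.asarray(centre_of_mass)))

        def best_grasp_point(candidates, centre_of_mass):
            # Return the candidate grasp point with the highest simulated score.
            return max(candidates, key=lambda p: stability_score(p, centre_of_mass))

        # Usage: three candidate grasp points on an object with a known centre of mass.
        candidates = [(0.02, 0.00, 0.05), (0.00, 0.01, 0.10), (0.05, 0.05, 0.00)]
        print(best_grasp_point(candidates, centre_of_mass=(0.0, 0.0, 0.04)))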

    Application of reinforcement learning in robotic disassembly operations

    Disassembly is a key step in remanufacturing. To increase the level of automation in disassembly, it is necessary to use robots that can learn to perform new tasks by themselves rather than having to be manually reprogrammed for every new job. Reinforcement Learning (RL) is a machine learning technique that enables robots to learn by trial and error rather than being explicitly programmed. In this thesis, the application of RL to robotic disassembly operations has been studied. Firstly, a literature review on robotic disassembly and the application of RL in contact-rich tasks is presented in Chapter 2. To physically implement RL in robotic disassembly, the task of removing a bolt from a door chain lock has been selected as a case study, and a robotic training platform has been built for this implementation in Chapter 3. This task was chosen because it demonstrates the capabilities of RL in pathfinding and in dealing with reaction forces without explicitly specifying the target coordinates or building a force-feedback controller. The robustness of the learned policies against the imprecision of the robot is studied using a proposed method that actively lowers the robot's precision. It has been found that the robot can learn successfully even when the precision is lowered to ±0.5 mm. This work also investigates whether learned policies can be transferred among robots with different precisions. Experiments have been performed by training a robot with a certain precision on a task and replaying the learned skills on a robot with a different precision. It has been found that skills learned by a low-precision robot perform better on a robot with higher precision, whereas skills learned by a high-precision robot perform worse on robots with lower precision, presumably because policies trained on high-precision robots overfit to those robots. In Chapter 4, the approach of using a digital-twin-assisted simulation-to-reality transfer to accelerate the learning performance of RL has been investigated. To identify system parameters, such as the stiffness and damping of the contact models, that are difficult to measure directly but are critical for building digital twins of the environments, a system identification method based on the Bees Algorithm is used to minimise the discrepancy between the responses generated by the physical and digital environments. It is found that the proposed method effectively increases the learning performance of RL. It is also found that sim-to-real transfer can perform worse if the reality gap is not effectively addressed; however, increasing the size of the dataset and the number of optimisation cycles has been shown to reduce the reality gap and lead to successful sim-to-real transfers. Based on the training task described in Chapters 4 and 5, a full factorial study has been conducted to identify patterns in selecting appropriate hyper-parameters when applying the Deep Deterministic Policy Gradient (DDPG) algorithm to the robotic disassembly task. Four hyper-parameters that directly influence the updates of the decision-making Artificial Neural Networks (ANNs) have been chosen for the study, with three levels assigned to each. After running 241 simulations, it is found that, for this particular task, the learning rates of the actor and critic networks are the most influential hyper-parameters, while the batch size and soft update rate have relatively limited influence. Finally, the thesis concludes in Chapter 6 with a summary of findings and suggested future research directions.
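    A minimal sketch of the full factorial study described above, with illustrative hyper-parameter levels (the levels actually used in the thesis are not stated here): every combination of three levels for the four DDPG hyper-parameters that drive the actor/critic updates is enumerated, and one training run would be launched per configuration.

        from itertools import product

        # Assumed example levels; three per hyper-parameter, as in the study design.
        levels = {
            "actor_lr": [1e-4, 1e-3, 1e-2],
            "critic_lr": [1e-4, 1e-3, 1e-2],
            "batch_size": [32, 64, 128],
            "soft_update_tau": [0.001, 0.005, 0.01],
        }

        def full_factorial(levels):
            # Yield one configuration dict per combination (3**4 = 81 in total).
            names = list(levels)
            for values in product(*(levels[n] for n in names)):
                yield dict(zip(names, values))

        for config in full_factorial(levels):
            # train_ddpg(config) would run one disassembly training simulation here.
            pass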

    Learning-based robotic manipulation for dynamic object handling : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mechatronic Engineering at the School of Food and Advanced Technology, Massey University, Turitea Campus, Palmerston North, New Zealand

    Figures are re-used in this thesis with permission of their respective publishers or under a Creative Commons licence. Recent trends have shown that the lifecycles and production volumes of modern products are shortening. Consequently, many manufacturers subject to frequent change prefer flexible and reconfigurable production systems. Such schemes are often achieved by means of manual assembly, as conventional automated systems are perceived as lacking flexibility. Production lines that incorporate human workers are particularly common within consumer electronics and small appliances. Artificial intelligence (AI) is a possible avenue to achieve smart robotic automation in this context. In this research it is argued that a robust, autonomous object handling process plays a crucial role in future manufacturing systems that incorporate robotics, and is key to further closing the gap between manual and fully automated production. Novel object grasping is a difficult task, confounded by many factors including object geometry, weight distribution, friction coefficients and deformation characteristics. Sensing and actuation accuracy can also significantly impact manipulation quality. Another challenge is understanding the relationship between these factors, a specific grasping strategy, the robotic arm and the employed end-effector. Manipulation has been a central research topic within robotics for many years. Some works focus on design, i.e. specifying a gripper-object interface such that the effects of imprecise gripper placement and other confounding control-related factors are mitigated. Many universal robotic gripper designs have been considered, including 3-fingered grippers, anthropomorphic grippers, granular jamming end-effectors and underactuated mechanisms. While such approaches have maintained some interest, contemporary works predominantly utilise machine learning in conjunction with imaging technologies and generic force-closure end-effectors. Neural networks that utilise supervised and unsupervised learning schemes with an RGB or RGB-D input make up the bulk of publications within this field. Though many solutions have been studied, automatically generating a robust grasp configuration for objects not known a priori remains an open-ended problem. Part of this issue is a lack of objective performance metrics to quantify the effectiveness of a solution; such metrics have traditionally driven the direction of community focus by highlighting gaps in the state of the art. This research employs monocular vision and deep learning to generate, and select from, a set of hypothesis grasps. A significant portion of this research relates to the process by which a final grasp is selected. Grasp synthesis is achieved by sampling the workspace using convolutional neural networks trained to recognise prospective grasp areas. Each potential pose is evaluated by the proposed method in conjunction with other input modalities, such as load-cells and an alternate perspective. To overcome human bias and build upon traditional metrics, scores are established to objectively quantify the quality of an executed grasp trial. Learning frameworks that aim to maximise these scores are employed in the selection process to improve performance. The proposed methodology and associated metrics are empirically evaluated. A physical prototype system was constructed, employing a Dobot Magician robotic manipulator, vision enclosure, imaging system, conveyor, sensing unit and control system. Over 4,000 trials were conducted utilising 100 objects. Experimentation showed that robotic manipulation quality could be improved by 10.3% when selecting to optimise for the proposed metrics, as quantified by a metric related to translational error. Trials further demonstrated a grasp success rate of 99.3% for known objects and 98.9% for objects for which a priori information is unavailable. For unknown objects, this equated to an improvement of approximately 10% relative to other similar methodologies in the literature. A 5.3% reduction in grasp rate was observed when the metrics were removed as selection criteria for the prototype system. The system operated at approximately 1 Hz when contemporary hardware was employed. Experimentation demonstrated that selecting a grasp pose based on the proposed metrics improved grasp rates by up to 4.6% for known objects and 2.5% for unknown objects, compared to selecting for grasp rate alone. This project was sponsored by the Richard and Mary Earle Technology Trust, the Ken and Elizabeth Powell Bursary and the Massey University Foundation. Without the financial support provided by these entities, it would not have been possible to construct the physical robotic system used for testing and experimentation. This research adds to the field of robotic manipulation, contributing to topics on grasp-induced error analysis, post-grasp error minimisation, grasp synthesis framework design and general grasp synthesis. Three journal publications and one IEEE Xplore paper have been published as a result of this research.
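    A minimal sketch, not the thesis implementation, of selecting one grasp from a set of hypotheses: each hypothesis carries a network-predicted quality score plus a score derived from other input modalities (for example a load-cell reading or an alternate perspective), and the pose with the highest combined score is executed. The weighting scheme, field names, and values are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class GraspHypothesis:
            pose: tuple          # (x, y, z, yaw) candidate grasp pose
            cnn_score: float     # quality predicted by the grasp-detection network
            aux_score: float     # score from additional modalities, assumed normalised to [0, 1]

        def select_grasp(hypotheses, w_cnn=0.7, w_aux=0.3):
            # Return the hypothesis that maximises a weighted combination of scores.
            return max(hypotheses, key=lambda h: w_cnn * h.cnn_score + w_aux * h.aux_score)

        # Usage: two candidate grasps; the second wins on the combined score.
        hypotheses = [
            GraspHypothesis((0.10, 0.02, 0.05, 0.0), cnn_score=0.82, aux_score=0.60),
            GraspHypothesis((0.11, 0.01, 0.05, 1.2), cnn_score=0.78, aux_score=0.90),
        ]
        print(select_grasp(hypotheses).pose)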

    Real-time object detection using monocular vision for low-cost automotive sensing systems

    This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking, and depth estimation using monocular vision, and finally on object detection by fusing visual saliency and depth information. Firstly, a novel feature detection approach is proposed for extracting stable and dense features even in images with very low signal-to-noise ratio. This methodology is based on image gradients, which are redefined to take account of noise as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature, with its strength proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time. The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real time while performing image stabilisation with minimal computational cost. This means that, despite camera vibration, the algorithm can accurately predict the real-world coordinates of each image pixel in real time by comparing each motion vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth. This novel method is based on the fact that DeGraF gradients can accurately measure local image variance with sub-pixel accuracy. It is shown that the local frequency at which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth-map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches. This work also proposes a novel technique for deriving visual saliency maps using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. This approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster. In conclusion, through a step-wise approach, computationally expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned to the requirements of the automotive domain.
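    A simplified sketch of the centroid-based gradient idea described above, based on a reading of this abstract rather than the published DeGraF implementation: within an image patch, an intensity-weighted ("positive") centroid and an inverted-intensity ("negative") centroid are computed, and the vector between them gives the gradient direction, with its length as the gradient strength.

        import numpy as np

        def centroid_gradient(patch):
            # Return (vector, magnitude) of a centroid-based gradient for a 2-D patch.
            patch = patch.astype(float)
            ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
            pos_w = patch / (patch.sum() + 1e-9)                         # bright-weighted
            neg = patch.max() - patch
            neg_w = neg / (neg.sum() + 1e-9)                             # dark-weighted
            pos_c = np.array([(xs * pos_w).sum(), (ys * pos_w).sum()])   # positive centroid
            neg_c = np.array([(xs * neg_w).sum(), (ys * neg_w).sum()])   # negative centroid
            vec = pos_c - neg_c
            return vec, np.linalg.norm(vec)

        # Usage: a patch whose intensity increases left-to-right yields a gradient along +x.
        patch = np.tile(np.arange(8), (8, 1))
        print(centroid_gradient(patch))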

    The Evolution of Smart Buildings: An Industrial Perspective of the Development of Smart Buildings in the 2010s

    Over the course of the 2010s, specialist research bodies failed to provide a holistic view of how the prominent, industry-driven reasons for creating a smart building changed. Throughout the decade, research tended to remain deeply involved with single issues or value drivers. Through an analysis of the author's peer-reviewed and published works (book chapters, articles, essays and podcasts), supplemented with additional contextual academic literature, a model is presented for how the key industry drivers for creating a smart building evolved during the 2010s. The critical research commentary within this thesis tracks the incremental advances of technology and their application to the built environment via academic movements, industrial shifts, or the author's personal contributions. This thesis has found it demonstrable, through the chronology and publication dates of the included research papers, that as the financial cost and complexity of sensors and cloud computing fell, smart buildings became increasingly prevalent. Initially, sustainability was the primary focus, with the use of HVAC analytics and advanced metering in the early 2010s. The middle of the decade saw an economic transformation of the commercial office sector, and the driver for creating a smart building became the delivery of flexible yet quantifiably used space. Driven by society's emphasis on health, wellbeing and productivity, smart buildings pivoted their focus towards the end of the 2010s, when smart building technologies were required to demonstrate the impacts of architecture on the human. This research has evidenced that smart buildings use data to improve performance in sustainability, in space usage or for human-centric outcomes.

    Library websites popularity: does Facebook really matter?

    The purpose of this paper is to determine whether the utilization of social media (Facebook) is an important factor in increasing the visibility of library website usage in Malaysian public universities. Nine top-ranked Malaysian public universities were involved in this research, and the number of Facebook followers for each library website is listed. Alexa was used to study visibility: it determines website usage by showing the percentage of visitors to the library-related subdomain(s) listed among the top subdomains of each university website (domain) over a month. It is found that the Universiti Utara Malaysia library website scored the highest percentage of visitors based on the library-related subdomain(s) listed among the top subdomains of the university website in Alexa. To check for irregularities in access, this paper uses EvalAccess 2.0, and it is found that Universiti Sains Malaysia's library website scored the most irregularities. In terms of the number of Facebook followers, the University of Malaya library has the highest score. It is shown that the utilization of social media (Facebook) is not yet an important factor in increasing the visibility of library websites. However, as expected, top-ranked universities' library websites are more visible and popular. This research is limited to the situation in Malaysia, where public universities are more prominent and, unlike private universities, seldom face financial constraints. It is highly important for those university library websites that are not highly visible to initiate the necessary measures to improve the development of their websites, as website usage is an indicator of online quality.