478 research outputs found

    Safe and accurate MAV Control, navigation and manipulation

    Get PDF
    This work focuses on the problem of precise, aggressive and safe Micro Aerial Vehicle (MAV) navigation, as well as deployment in applications which require physical interaction with the environment. To address these issues, we propose three different MAV model-based control algorithms that rely on the concept of receding horizon control. As a starting point, we present a computationally cheap algorithm which utilizes an approximate linear model of the system around hover and is thus most accurate for slow reference maneuvers. Aiming to overcome the limitations of the linear model parameterisation, we present an extension to the first controller which relies on the true nonlinear dynamics of the system. This approach, even though computationally more intensive, ensures that the control model is always valid and allows tracking of full-state aggressive trajectories. The last controller addresses the topic of aerial manipulation, in which the versatility of aerial vehicles is combined with the manipulation capabilities of robotic arms. The proposed method relies on the formulation of a hybrid nonlinear MAV-arm model which also takes into account the effects of contact with the environment. Finally, in order to enable safe operation despite the potential loss of an actuator, we propose a supervisory algorithm which estimates the health status of each motor. We further showcase how this can be used in conjunction with the nonlinear controllers described above for fault-tolerant MAV flight. While all the developed algorithms are formulated and tested using our specific MAV platforms (underactuated hexacopters for the free-flight experiments and a hexacopter-delta arm system for the manipulation experiments), we further discuss how they can be applied to other underactuated/overactuated MAVs and robotic arm platforms. The same applies to the fault-tolerant control, where we discuss different stabilisation techniques depending on the capabilities of the available hardware. Even though the primary focus of this work is on feedback control, we thoroughly describe the custom hardware platforms used for the experimental evaluation, the state estimation algorithms which provide the basis for control, and the parameter identification required for the formulation of the various control models. We showcase all the developed algorithms in experimental scenarios designed to highlight their corresponding strengths and weaknesses, and show that the proposed methods can run in real time on commercially available hardware.
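    As a rough illustration of the receding-horizon idea behind the first controller, the sketch below implements an unconstrained finite-horizon LQR around a linearized hover model, applying only the first input of each horizon. The model matrices, weights, and horizon length are illustrative assumptions, not the thesis parameters, and the actual controllers additionally handle input constraints and the full MAV dynamics.

```python
# Minimal sketch of an unconstrained receding-horizon controller (finite-horizon
# LQR) for a linearized hover model. All numbers here are illustrative.
import numpy as np

# Toy 1-axis hover model: state = [position, velocity], input = tilt command.
dt = 0.05
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt * 9.81]])   # small-angle thrust-tilt approximation

Q = np.diag([10.0, 1.0])      # state tracking weight
R = np.array([[0.1]])         # input effort weight
N = 40                        # prediction horizon (steps)

def receding_horizon_gain(A, B, Q, R, N):
    """Backward Riccati recursion over a finite horizon; return the first-step gain."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = receding_horizon_gain(A, B, Q, R, N)

x = np.array([1.0, 0.0])      # start 1 m away from the reference, at rest
for _ in range(200):
    # Apply only the first input of the horizon at every step; the gain is
    # constant here because the toy model is time-invariant and unconstrained.
    u = -(K @ x)
    x = A @ x + B @ u

print("final position error:", x[0])
```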

    Recent Advances in Social Data and Artificial Intelligence 2019

    Get PDF
    The importance and usefulness of subjects and topics involving social data and artificial intelligence are becoming widely recognized. This book contains invited review, expository, and original research articles dealing with, and presenting state-of-the-art accounts of, the recent advances in the subjects of social data and artificial intelligence, and potentially their links to Cyberspace

    Machine Learning Modeling for Image Segmentation in Manufacturing and Agriculture Applications

    Get PDF
    Doctor of Philosophy, Department of Industrial & Manufacturing Systems Engineering, Shing I Chang. This dissertation focuses on applying machine learning (ML) modeling to image segmentation tasks in various applications such as additive manufacturing monitoring, agricultural soil cover classification, and laser scribing quality control. The proposed ML framework uses various ML models, such as a gradient boosting classifier and a deep convolutional neural network, to improve and automate image segmentation tasks. In recent years, supervised ML methods have been widely adopted for image processing applications in various industries. The presence of cameras installed in production processes has generated a vast amount of image data that can potentially be used for process monitoring. Specifically, deep supervised machine learning models have been successfully implemented to build automatic tools for filtering and classifying useful information for process monitoring. However, successful implementations of deep supervised learning algorithms depend on several factors, such as the distribution and size of the training data, the selected ML models, and consistency in the target domain distribution, which may change under different environmental conditions over time. The proposed framework takes advantage of general-purpose, trained supervised learning models and applies them to process monitoring applications in manufacturing and agriculture. In Chapter 2, a layer-wise framework is proposed to monitor the quality of 3D-printed parts based on top-view images. The proposed statistical process monitoring method starts with self-starting control charts that require only two successful initial prints. Unsupervised machine learning methods can be used for problems in which high accuracy is not required, but statistical process monitoring usually demands high classification accuracy to avoid Type I and II errors. To address the challenges that lighting conditions pose for unsupervised image processing methods, a supervised Gradient Boosting Classifier (GBC) with 93 percent accuracy is adopted to classify each printed layer from the printing bed. Despite the power of GBC and other decision-tree-based ML models compared to unsupervised ML models, their capability is limited in terms of accuracy and running time for complex classification problems such as soil cover classification. In Chapter 3, a deep convolutional neural network (DCNN) for semantic segmentation is trained to quantify and monitor soil coverage in agricultural fields. The trained model is capable of accurately quantifying green canopy cover, counting plants, and classifying stubble. Due to the wide variety of scenarios in a real agricultural field, 3942 high-resolution images were collected and labeled for the training and test data sets. The difficulty of collecting, cleaning, and labeling this dataset was the motivation to find a better approach to alleviate the data-wrangling burden of ML model training. One of the most influential factors is the need for a high volume of labeled data from the exact problem domain in terms of feature space and the distributions of data across all classes. Image data preparation for deep learning model training is expensive in terms of labeling time due to tedious manual processing. Multiple human labelers can work simultaneously, but inconsistent labeling will generate a training data set that often compromises model performance.
In addition, training an ML model for a complex problem from scratch also demands vast computational power. One of the potential approaches for alleviating data-wrangling challenges is transfer learning (TL). In Chapter 4, a TL approach was adopted to address these challenges by monitoring three laser scribing characteristics: scribe width, straightness, and debris. The proposed transfer deep convolutional neural network (TDCNN) model can reduce the time-consuming and costly data preparation process. The proposed framework leverages a deep learning model already trained for a similar problem and uses only 21 images gleaned from the problem domain. The proposed TDCNN overcame the data challenge by leveraging the DCNN model called VGG16, already trained on basic geometric features using more than two million pictures. Appropriate image processing techniques were provided to measure scribe width and line straightness as well as total scribe and debris area using classified images with 96 percent accuracy. In addition to the fact that the TDCNN functions with fewer trainable parameters (i.e., 5 million versus 15 million for VGG16), increasing the training size to 154 images did not provide a significant improvement in accuracy, which shows that the TDCNN does not need a high volume of data to be successful. Finally, Chapter 5 summarizes the proposed work and lays out topics for future research
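    A minimal sketch of the transfer-learning idea described above is shown below: a frozen, pre-trained VGG16 convolutional base with a small trainable segmentation head. The input size, class labels, and head layers are illustrative assumptions rather than the dissertation's exact TDCNN architecture.

```python
# Hedged sketch: reuse a frozen VGG16 base and train only a small per-pixel
# classification head on a handful of labeled images (assumed setup).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 3  # e.g. background, scribe, debris (assumed labels)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained features fixed

x = base.output                                   # 7x7x512 feature map
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D(32)(x)                    # back to 224x224 resolution
outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)

tdcnn = models.Model(base.input, outputs)
tdcnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
tdcnn.summary()

# With only ~21 labeled image/mask pairs, tdcnn.fit(images, masks, epochs=...)
# updates just the small head (a few hundred thousand parameters) rather than
# the full network, which is the point of the transfer-learning approach.
```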

    Advanced Occupancy Measurement Using Sensor Fusion

    Get PDF
    With roughly half of the energy used in buildings attributed to Heating, Ventilation, and Air Conditioning (HVAC) systems, there is clearly great potential for energy saving through improved building operations. Accurate knowledge of localised and real-time occupancy numbers can have compelling control applications for HVAC systems. However, existing technologies applied for building occupancy measurement are limited, such that a precise and reliable occupant count is difficult to obtain. For example, passive infrared (PIR) sensors commonly used for occupancy sensing in lighting control applications cannot differentiate between occupants grouped together; video sensing is often limited by privacy concerns; and atmospheric gas sensors (such as CO2 sensors) may be affected by the presence of electromagnetic interference (EMI) and may not show clear links between occupancy and sensor values. Past studies have indicated the need for a heterogeneous multi-sensory fusion approach for occupancy detection to address the shortcomings of existing occupancy detection systems. The aim of this research is to develop an advanced instrumentation strategy to monitor occupancy levels in non-domestic buildings, whilst facilitating lower energy use and maintaining an acceptable indoor climate. Accordingly, a novel multi-sensor based approach for occupancy detection in open-plan office spaces is proposed. The approach combines information from various low-cost and non-intrusive indoor environmental sensors, with the aim of merging the advantages of the various sensors whilst minimising their weaknesses. The proposed approach offers the potential to capture explicit information indicating occupancy levels. The proposed occupancy monitoring strategy has two main components: hardware system implementation and data processing. The hardware system implementation included a custom-made sound sensor and refinement of CO2 sensors for EMI mitigation. Two test beds were designed and implemented to support the research studies, including proof-of-concept and experimental studies. Data processing was carried out in several stages, with the ultimate goal of detecting occupancy levels. Firstly, features of interest were extracted from all collected sensory data; secondly, a symmetrical uncertainty analysis was applied to determine the predictive strength of individual sensor features; thirdly, a candidate feature subset was determined using a genetic-based search; finally, a back-propagation neural network model was adopted to fuse the candidate multi-sensory features into an estimate of occupancy levels. Several test cases were implemented to demonstrate and evaluate the effectiveness and feasibility of the proposed occupancy detection approach. Results have shown the potential of the proposed heterogeneous multi-sensor fusion approach as an advanced strategy for the development of reliable occupancy detection systems in open-plan office buildings, capable of facilitating improved control of building services.
In summary, the proposed approach has the potential to: (1) detect occupancy levels with an accuracy reaching 84.59% during occupied instances; (2) maintain an average occupancy detection accuracy of 61.01% in the event of sensor failure or drop-off (such as CO2 sensor drop-off); (3) monitor occupancy levels using just sound and motion sensors in a naturally ventilated space; and (4) facilitate potential daily energy savings reaching 53% if implemented for occupancy-driven ventilation control
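    The sketch below illustrates the final fusion stage: a back-propagation neural network (here scikit-learn's MLPRegressor) mapping multi-sensor features to an occupancy-level estimate. The feature set and the synthetic data are illustrative assumptions, not the thesis dataset, and the symmetrical-uncertainty and genetic feature-selection stages are omitted.

```python
# Hedged sketch of neural-network fusion of multi-sensor features into an
# occupancy estimate, on synthetic data with assumed sensor features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000
occupancy = rng.integers(0, 15, size=n)             # ground-truth head count

# Assumed sensor-derived features: CO2, sound level, motion rate, temperature.
X = np.column_stack([
    400 + 35 * occupancy + rng.normal(0, 40, n),                 # CO2 (ppm)
    30 + 2.5 * occupancy + rng.normal(0, 5, n),                  # sound level (dB)
    np.clip(0.1 * occupancy + rng.normal(0, 0.2, n), 0, None),   # motion events/min
    21 + 0.1 * occupancy + rng.normal(0, 0.5, n),                # temperature (deg C)
])

X_train, X_test, y_train, y_test = train_test_split(X, occupancy, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```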

    Artificial cognitive architecture with self-learning and self-optimization capabilities. Case studies in micromachining processes

    Full text link
    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: 22-09-201

    Intrinsically Evolvable Artificial Neural Networks

    Get PDF
    Dedicated hardware implementations of neural networks promise to provide faster, lower-power operation compared to software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. The training is typically done using offline software simulations, and the obtained network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNN), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs has also been presented
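    As a software-only illustration of the evolutionary training principle, the sketch below uses a simple genetic algorithm to evolve the synaptic weights of a tiny fixed-structure network on the XOR problem. It is not the FPGA BbNN platform itself; the network, fitness function, population size, and mutation scheme are all illustrative assumptions.

```python
# Hedged toy example: a genetic algorithm evolving the weights of a fixed
# 2-2-1 network to solve XOR, illustrating evolutionary (non-gradient) training.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    """Tiny 2-2-1 network with tanh hidden units and a sigmoid output."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    pred = forward(w, X)
    return -np.mean((pred - y) ** 2)   # higher is better

pop = rng.normal(0, 1, size=(60, N_WEIGHTS))
for gen in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # truncation selection
    children = parents[rng.integers(0, 20, size=40)]           # clone parents
    children = children + rng.normal(0, 0.3, children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("XOR outputs:", np.round(forward(best, X), 2))
```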

    Intelligent Circuits and Systems

    Get PDF
    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University, exploring recent innovations of researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS provides innovators with a platform to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems and Emerging Technologies in Electrical Engineering

    The 2nd International Electronic Conference on Applied Sciences

    Get PDF
    This book is focused on the works presented at the 2nd International Electronic Conference on Applied Sciences, organized by Applied Sciences from 15 to 31 October 2021 on the MDPI Sciforum platform. Two decades have passed since the start of the 21st century. The development of sciences and technologies is growing ever faster today than in the previous century. The field of science is expanding, and the structure of science is becoming ever richer. Because of this expansion and growth in fine structure, researchers may lose themselves in the deep forest of the ever-increasing frontiers and sub-fields being created. This international conference on applied sciences was started to help scientists conduct their own research at these growing frontiers by breaking down barriers and connecting the many sub-fields, cutting through this vast forest. These connections will allow researchers to see these frontiers and their surrounding (or quite distant) fields and sub-fields, and give them the opportunity to incubate and develop their knowledge even further with the aid of this multi-dimensional network

    First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    Get PDF
    Several topics related to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered