
    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method originally proposed by Kennedy and Eberhart in 1995 and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO itself, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony optimization, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in nine fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. We hope this survey will be useful to researchers studying PSO algorithms.
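    As a brief illustration of the canonical PSO velocity and position updates that the surveyed variants build on (a minimal sketch in Python; the parameter values and the sphere objective are illustrative choices, not taken from the survey):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal canonical PSO: velocity/position updates with personal and global bests."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest_x = x.copy()                                 # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)     # personal best values
    gbest_x = pbest_x[np.argmin(pbest_f)]              # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # canonical update: inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest_x[improved], pbest_f[improved] = x[improved], f[improved]
        gbest_x = pbest_x[np.argmin(pbest_f)]
    return gbest_x, pbest_f.min()

# Example: minimize the sphere function in 5 dimensions
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```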

    Statistical and image processing techniques for remote sensing in agricultural monitoring and mapping

    Throughout most of history, increasing agricultural production has been largely driven by expanded land use and, especially in the 19th and 20th centuries, by technological innovation in breeding, genetics and agrochemistry as well as intensification through mechanization and industrialization. More recently, information technology, digitalization and automation have started to play a more significant role in achieving higher productivity with lower environmental impact and reduced use of resources. This includes two trends on opposite scales: precision farming, which applies detailed observations at sub-field level to support local management, and large-scale agricultural monitoring, which observes regional patterns in plant health and crop productivity to help manage macroeconomic and environmental trends. In both contexts, remote sensing imagery plays a crucial role that is growing due to decreasing costs and increasing accessibility of both data and the means of processing and analysis. The large archives of free imagery with global coverage can be expected to further increase the adoption of remote sensing techniques in the coming years. This thesis addresses multiple aspects of remote sensing in agriculture by presenting new techniques in three distinct research topics: (1) remote sensing data assimilation in dynamic crop models; (2) agricultural field boundary detection from remote sensing observations; and (3) contour extraction and field polygon creation from remote sensing imagery. These objectives are achieved by combining methods of probability analysis, uncertainty quantification, evolutionary learning and swarm intelligence, graph theory, image processing, deep learning and feature extraction. Four new techniques have been developed. Firstly, a new data assimilation technique based on statistical distance metrics and probability distribution analysis to achieve a flexible representation of model- and measurement-related uncertainties. Secondly, a method for detecting boundaries of agricultural fields from remote sensing observations, designed to rely only on image-based information in multi-temporal imagery. Thirdly, an improved boundary detection approach based on deep learning techniques and a variety of image features. Fourthly, a new active contours method called Graph-based Growing Contours (GGC) that allows automated extraction of complex boundary networks from imagery. The new approaches are tested and evaluated on multiple study areas in the states of Schleswig-Holstein, Niedersachsen and Sachsen-Anhalt, Germany, based on combine harvester measurements, cadastral data and manual mappings. All methods were designed with flexibility and applicability in mind. They proved to perform similarly to or better than existing methods and showed potential for large-scale application and synergetic use. Thanks to low data requirements and flexible use of inputs, their application is neither constrained to the specific use cases presented here nor to a specific type of sensor or imagery. This flexibility, in theory, enables their use even outside the field of remote sensing.
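    As a rough, hedged illustration of the statistical-distance idea behind the first technique (not the thesis implementation; the Wasserstein metric, the exponential weighting and all variable names are assumptions chosen for illustration):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def assimilation_weights(ensemble_predictions, observation_samples, scale=1.0):
    """Weight ensemble members by the statistical distance between each member's
    predicted distribution and the distribution of remote sensing observations.
    Smaller distance -> larger weight (soft, uncertainty-aware assimilation)."""
    distances = np.array([
        wasserstein_distance(member, observation_samples)
        for member in ensemble_predictions
    ])
    weights = np.exp(-distances / scale)   # illustrative kernel; scale is a tuning choice
    return weights / weights.sum()

# Toy usage: 3 ensemble members, each a sample of a predicted state variable, plus noisy observations
rng = np.random.default_rng(1)
ensemble = [rng.normal(mu, 0.3, 100) for mu in (2.0, 2.5, 3.2)]
obs = rng.normal(2.4, 0.4, 50)
print(assimilation_weights(ensemble, obs))
```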

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the intersection of effective visual feature technologies and the study of the human-brain cognition process. Effective visual features are made possible by rapid developments in sensor equipment, novel filter designs, and viable information processing architectures, while a better understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. This book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book present recent advances and new ideas in the techniques, technology and applications of pattern recognition.

    Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it only requires information on whether a random point falls inside or outside the object and does not require a 3D reconstruction. This paper proposes volume measurement of irregularly shaped food products using a computer vision system, without 3D reconstruction, based on a Monte Carlo method with heuristic adjustment. Five images of the food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume from the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method, and is more accurate and faster than the space carving method.
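    To make the core Monte Carlo idea concrete, the sketch below estimates volume from random points in a bounding box (a minimal illustration; the `inside_test` callback stands in for the paper's multi-view silhouette check, and the heuristic adjustment is not reproduced):

```python
import numpy as np

def monte_carlo_volume(inside_test, bounds, n_points=200_000, seed=0):
    """Estimate object volume: sample random points in a bounding box and
    count the fraction that the inside/outside test accepts."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    points = rng.uniform(lo, hi, size=(n_points, 3))
    inside = np.fromiter((inside_test(p) for p in points), dtype=bool, count=n_points)
    box_volume = np.prod(hi - lo)
    return inside.mean() * box_volume

# Toy check: a sphere of radius 3 cm in a 10x10x10 cm box (analytic volume ~113.1 cm^3)
est = monte_carlo_volume(lambda p: np.sum(p ** 2) <= 9.0, bounds=([-5, -5, -5], [5, 5, 5]))
print(round(est, 1))
```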

    Storage Capacity Estimation of Commercial Scale Injection and Storage of CO2 in the Jacksonburg-Stringtown Oil Field, West Virginia

    Geologic carbon capture, utilization and storage (CCUS) of carbon dioxide (CO2) in depleted oil and gas reservoirs is one method to reduce greenhouse gas emissions while enhancing oil recovery (EOR) and extending the life of the field. CCUS coupled with EOR is therefore considered an economical approach to demonstrating commercial-scale injection and storage of anthropogenic CO2. Several critical issues should be taken into account prior to injecting large volumes of CO2, such as storage capacity, project duration and long-term containment. Reservoir characterization and 3D geological modeling are the best way to estimate the theoretical CO2 storage capacity in mature oil fields.

    The Upper Devonian fluvial sandstone reservoirs of the Jacksonburg-Stringtown oil field, located in northwestern West Virginia, have produced over 22 million barrels of oil (MMBO) since 1895, with the sandstone of the Late Devonian Gordon Stray as the primary reservoir, and are an ideal candidate for CO2 sequestration coupled with EOR. Supercritical depth (>2,500 ft), minimum miscibility pressure (941 psi), favorable API gravity (46.5°) and good waterflood response are indicators that facilitate CO2-EOR operations. Moreover, the Jacksonburg-Stringtown oil field is adjacent to a large concentration of CO2 sources located along the Ohio River that could potentially supply enough CO2 for sequestration and EOR without constructing new pipeline facilities.

    Permeability is a critical parameter for understanding subsurface fluid flow and for reservoir management during primary and enhanced hydrocarbon recovery and efficient carbon storage. In this study, a rapid, robust and cost-effective artificial neural network (ANN) model is constructed to predict permeability, exploiting the model's ability to recognize interrelationships between input and output variables. Two commonly available conventional well logs, gamma ray and bulk density, and three derived variables, the slope of GR, the slope of bulk density and Vsh, were selected as input parameters, with permeability as the desired output, to train and test the network. The results indicate that the ANN model can be applied effectively to permeability prediction.

    Porosity is another fundamental property that characterizes the storage capability of fluid- and gas-bearing formations in a reservoir. In this study, a support vector machine (SVM) with a mixed kernel function (MKF) is utilized to construct the relationship between limited conventional well log suites and sparse core data. The input parameters for the SVM model consist of core porosity values and the same log suite used for the ANN, and porosity is the desired output. Compared with an SVM model using a single kernel function, the mixed kernel function based SVM model provides more accurate porosity predictions.

    Based on the well log analysis, four reservoir subunits within a marine-dominated estuarine depositional system are defined: barrier sand, central bay shale, tidal channel and fluvial channel subunits. A 3-D geological model, used to estimate the theoretical CO2 sequestration capacity, is constructed by integrating core data, wireline log data and geological background knowledge. Based on this model, the best regions for coupled CCUS-EOR are located in the southern portions of the field, and the estimated theoretical CO2 storage capacity of the Jacksonburg-Stringtown oil field varies between 24 and 383 million metric tons. These estimates of CO2 sequestration and EOR potential indicate that the Jacksonburg-Stringtown oil field has significant potential for CO2 storage and value-added EOR.
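    As a hedged sketch of the mixed-kernel idea used for the porosity model (the weighted RBF-plus-linear combination, the mixing weight, and the synthetic log data below are assumptions for illustration, not the study's actual kernel mix or measurements):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

def mixed_kernel(X, Y, alpha=0.6, gamma=0.5):
    """Convex combination of an RBF and a linear kernel (one simple form of a mixed kernel)."""
    return alpha * rbf_kernel(X, Y, gamma=gamma) + (1.0 - alpha) * linear_kernel(X, Y)

# Inputs mirror the logs named in the abstract: GR, bulk density, slope of GR,
# slope of bulk density, Vsh (synthetic values here, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
core_porosity = 0.12 + 0.03 * X[:, 1] - 0.02 * X[:, 4] + rng.normal(0, 0.005, 120)

model = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.005)
model.fit(X, core_porosity)
print(model.predict(X[:3]))
```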

    Applicability of semi-supervised learning assumptions for gene ontology terms prediction

    Gene Ontology (GO) is one of the most important resources in bioinformatics, aiming to provide a unified framework for the biological annotation of genes and proteins across all species. Predicting GO terms is an essential task in bioinformatics, but the number of available labelled proteins is in many cases insufficient for training reliable machine learning classifiers. Semi-supervised learning methods arise as a powerful solution that exploits the information contained in unlabelled data to improve the estimates of traditional supervised approaches. However, semi-supervised learning methods must make strong assumptions about the nature of the training data, and the performance of the predictor is therefore highly dependent on these assumptions. This paper presents an analysis of the applicability of semi-supervised learning assumptions to the specific task of GO term prediction, focused on providing criteria for choosing the most suitable tools for specific GO terms. The results show that semi-supervised approaches significantly outperform traditional supervised methods and that the highest performance is reached when applying the cluster assumption. Moreover, it is experimentally demonstrated that the cluster and manifold assumptions are complementary, and an analysis of which GO terms are more likely to be correctly predicted under each assumption is provided.
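    For a concrete sense of how the cluster assumption can be exploited, the sketch below uses scikit-learn's LabelSpreading on synthetic data (the feature representation, graph kernel and parameters are illustrative, not those of the paper):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

# Synthetic stand-in for protein feature vectors; each blob plays the role of a
# cluster of functionally related proteins sharing a GO term annotation.
X, y_true = make_blobs(n_samples=300, centers=2, cluster_std=1.0, random_state=0)

# Hide most labels (-1 marks unlabelled proteins), mimicking a sparsely annotated GO term.
rng = np.random.default_rng(0)
y_train = np.where(rng.random(len(y_true)) < 0.05, y_true, -1)

# Graph-based propagation spreads the few labels through dense regions of feature space,
# which is exactly what the cluster assumption licenses.
model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2)
model.fit(X, y_train)
print("accuracy on all points:", (model.transduction_ == y_true).mean())
```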

    Efficient Learning Machines

    Computer science

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for robot swarms with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBots) for artistic creation. In particular, we combine bio-inspired techniques (flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking controller for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, in which consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. To avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
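    As a hedged sketch of a reward of the kind described, combining flocking maintenance, a mutual term and a collision penalty (the distance thresholds, weights and 2-D toy positions are illustrative assumptions, not the thesis' tuned values):

```python
import numpy as np

def flocking_reward(positions, leader_pos, i,
                    d_safe=0.5, d_flock=3.0, w_leader=1.0, w_mutual=0.5, w_collision=10.0):
    """Per-UAV reward combining leader tracking (flocking maintenance),
    a mutual term keeping neighbours within flocking range, and a collision penalty."""
    p_i = positions[i]
    others = np.delete(positions, i, axis=0)
    dists = np.linalg.norm(others - p_i, axis=1)

    r_leader = -w_leader * np.linalg.norm(leader_pos - p_i)      # stay close to the leader
    r_mutual = -w_mutual * np.maximum(dists - d_flock, 0).sum()  # penalize stragglers beyond flocking range
    r_collision = -w_collision * (dists < d_safe).sum()          # hard penalty for near-collisions
    return r_leader + r_mutual + r_collision

# Toy usage: 4 followers and a leader at the origin
pos = np.array([[0.2, 0.1], [1.0, 0.0], [4.5, 0.0], [0.3, 0.15]])
print([round(flocking_reward(pos, np.zeros(2), i), 2) for i in range(len(pos))])
```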