
    Active learning based on contextual superpixel features for remote sensing image classification (Aprendizado ativo baseado em atributos contextuais de superpixel para classificação de imagem de sensoriamento remoto)

    Advisors: Alexandre Xavier Falcão, Jefersson Alex dos Santos. Master's dissertation (Mestre em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: In recent years, machine learning techniques have been proposed to create classification maps from remote sensing images. These techniques can be divided into pixel-based and region-based image classification methods. This work concentrates on the second approach, since we are interested in images with millions of pixels, and segmenting the image into regions (superpixels) can considerably reduce the number of samples to classify. However, even with superpixels, the number of samples is still too large for manual annotation to train the classifier. Active learning techniques address this problem by starting from a small set of randomly selected samples, which are manually labeled and used to train a first instance of the classifier. At each learning iteration, the classifier assigns labels and selects the most informative samples for user correction/confirmation, increasing the size of the training set. An improved instance of the classifier is trained at the end of each iteration and used in the next one, until the user is satisfied with the classifier. We observed that most methods reclassify the entire pool of unlabeled samples at every learning iteration, making the process unfeasible for user interaction. Therefore, we address two important problems in region-based classification of remote sensing images: (a) effective superpixel description and (b) reducing the time required for sample selection in active learning. First, we propose a contextual superpixel descriptor, based on bag of visual words, that outperforms widely used color and texture descriptors. Second, we propose a supervised dataset-reduction method based on a state-of-the-art active learning technique called Multi-Class Level Uncertainty (MCLU). Our method has proven to be as effective as MCLU while being considerably more efficient. Additionally, we further improve its performance by applying a relaxation process to the classification map using Markov Random Fields.
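
    The MCLU selection step above lends itself to a short illustration. The following is a minimal, hedged Python sketch (not the thesis code), assuming a one-vs-rest SVM trained on superpixel descriptors: a pool sample is considered informative when its two largest per-class decision values are close together, and the least confident samples are the ones sent to the user. The names select_mclu and batch_size are illustrative.

        # Hedged sketch of an MCLU-style query step, assuming a multi-class,
        # one-vs-rest SVM and a pool of unlabeled superpixel feature vectors.
        import numpy as np
        from sklearn.svm import SVC

        def select_mclu(clf, X_pool, batch_size=10):
            """Return indices of the most uncertain pool samples.

            The MCLU criterion is the difference between the two largest
            per-class decision values: a small difference means the classifier
            hesitates between two classes, so the sample is informative.
            """
            scores = clf.decision_function(X_pool)       # (n_samples, n_classes)
            top_two = np.sort(scores, axis=1)[:, -2:]    # two largest values per row
            confidence = top_two[:, 1] - top_two[:, 0]   # MCLU margin
            return np.argsort(confidence)[:batch_size]   # least confident first

        # One learning iteration (X_train, y_train, X_pool assumed to exist):
        # clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
        # query_idx = select_mclu(clf, X_pool)
        # ...the user labels X_pool[query_idx], which then joins the training set.

    Scanning the whole pool at every iteration is exactly the cost that the proposed dataset-reduction method aims to cut down.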

    Routing in waste collection: a simulated annealing algorithm for an Argentinean case study

    The management of Municipal Solid Waste collection is a complex task for local governments, since it consumes a large portion of their budgets. The use of computer-aided tools to support decision-making can therefore help improve the efficiency of the system and reduce the associated costs, especially in developing countries, which usually suffer from a shortage of resources. In the present work, a simulated annealing algorithm is proposed to address the problem of designing the routes of waste collection vehicles. The proposed algorithm is compared with a commercial solver based on a mixed-integer programming formulation and with two other metaheuristics, namely a state-of-the-art large neighborhood search and a genetic algorithm. The evaluation is carried out both on a well-known benchmark from the literature and on real instances from the Argentinean city of Bahía Blanca. The proposed algorithm was able to solve all the instances, performing similarly to the large neighborhood search, while the genetic algorithm showed the worst results. The simulated annealing algorithm was also able to improve on the solver's solutions in many instances of the real dataset.
    Fil: Rossit, Diego Gabriel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Matemática Bahía Blanca. Universidad Nacional del Sur. Departamento de Matemática. Instituto de Matemática Bahía Blanca; Argentina. Universidad Nacional del Sur. Departamento de Ingeniería; Argentina
    Fil: Toncovich, Adrián Andrés. Universidad Nacional del Sur. Departamento de Ingeniería; Argentina
    Fil: Fermani, Matías. Universidad Nacional del Sur. Departamento de Ingeniería; Argentina
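
    The core of the proposed method is the classic simulated annealing scheme. Below is a minimal, illustrative Python sketch (not the paper's implementation), assuming a single closed collection route, a symmetric distance matrix dist over the collection points, a 2-opt-style neighborhood, and a geometric cooling schedule; all parameter values are placeholders.

        # Generic simulated annealing for one collection route; `dist` is a
        # symmetric matrix of travel distances between collection points.
        import math
        import random

        def route_length(route, dist):
            """Total length of a closed route visiting every point once."""
            return sum(dist[route[i]][route[(i + 1) % len(route)]]
                       for i in range(len(route)))

        def simulated_annealing(dist, t0=100.0, cooling=0.995, t_min=1e-3):
            route = list(range(len(dist)))
            random.shuffle(route)
            best, best_len = route[:], route_length(route, dist)
            t = t0
            while t > t_min:
                # Neighborhood move: reverse a random segment (2-opt style).
                i, j = sorted(random.sample(range(len(route)), 2))
                candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                delta = route_length(candidate, dist) - route_length(route, dist)
                # Always accept improvements; accept worse moves with
                # probability exp(-delta / t), which shrinks as t cools.
                if delta < 0 or random.random() < math.exp(-delta / t):
                    route = candidate
                    if route_length(route, dist) < best_len:
                        best, best_len = route[:], route_length(route, dist)
                t *= cooling  # geometric cooling schedule
            return best, best_len

    Accepting some worsening moves while the temperature is high is what lets the search escape local optima; as the temperature decays, the algorithm behaves more and more like a pure local search.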

    Optimal sensor placement for sewer capacity risk management

    Spring 2019. Includes bibliographical references. Complex linear assets, such as those found in transportation and utilities, are vital to economies and, in some cases, to public health. Wastewater collection systems in the United States are vital to both. Yet effective approaches to remediating failures in these systems remain an unresolved shortfall for system operators. This shortfall is evident in the estimated 850 billion gallons of untreated sewage that escape combined sewer pipes each year (US EPA 2004a) and the estimated 40,000 sanitary sewer overflows and 400,000 backups of untreated sewage into basements (US EPA 2001). Failures in wastewater collection systems can be prevented if they are detected in time to apply intervention strategies such as pipe maintenance, repair, or rehabilitation. This is the essence of a risk management process. The International Council on Systems Engineering recommends that risks be prioritized as a function of severity and occurrence and that criteria be established for acceptable and unacceptable risks (INCOSE 2007). A significant impediment to applying generally accepted risk models to wastewater collection systems is the difficulty of quantifying risk likelihoods. These difficulties stem from the size and complexity of the systems, the lack of data and statistics characterizing the distribution of risk, the high cost of evaluating even a small number of components, and the lack of methods to quantify risk. This research investigates new methods to assess the likelihood of failure through a novel approach to placing sensors in wastewater collection systems. The hypothesis is that iterative movement of water level sensors, directed by a specialized metaheuristic search technique, can improve the efficiency of discovering locations of unacceptable risk. An agent-based simulation is constructed to validate the performance of this technique and to test its sensitivity to varying environments. The results demonstrate that a multi-phase search strategy, with a varying number of sensors deployed in each phase, can efficiently discover locations of unacceptable risk that can then be managed via a perpetual monitoring, analysis, and remediation process. A number of promising, well-defined future research opportunities also emerged from this research.
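
    As a rough illustration of the iterative sensor-movement idea, the sketch below implements a generic explore/exploit relocation loop over a sewer network graph. The risk score, neighborhood structure, and movement rule are assumptions made for illustration only; they are not the dissertation's metaheuristic or its agent-based simulation.

        # Illustrative multi-iteration sensor relocation on a network graph.
        # `graph` maps each node to a list of neighboring nodes.
        import random

        def observed_risk(node):
            # Stand-in for a risk estimate derived from a water level sensor
            # reading (or a simulation) at `node`; random here to keep the
            # sketch runnable.
            return random.random()

        def relocate_sensors(graph, n_sensors, n_iterations, risk_threshold):
            nodes = list(graph)
            sensors = random.sample(nodes, n_sensors)   # initial random deployment
            flagged = set()                             # locations of unacceptable risk
            for _ in range(n_iterations):
                readings = [(node, observed_risk(node)) for node in sensors]
                flagged |= {n for n, r in readings if r >= risk_threshold}
                next_positions = []
                for node, risk in readings:
                    if risk >= risk_threshold and graph[node]:
                        # Exploit: stay in the neighborhood of a risky reading.
                        next_positions.append(random.choice(graph[node]))
                    else:
                        # Explore: jump to a random part of the network.
                        next_positions.append(random.choice(nodes))
                sensors = next_positions
            return flagged

    The dissertation's multi-phase strategy varies the number of sensors per phase; in this simplified sketch the count is fixed, and the flagged locations would feed the perpetual monitoring, analysis, and remediation process.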

    The design and applications of the African Buffalo Algorithm for general optimization problems

    Optimization is, in essence, the economics of science: it is concerned with maximizing profit and minimizing the cost, in time and resources, of executing a given project in any field of human endeavor. Over the past several decades there have been many scientific investigations into effective and efficient algorithms for meeting the optimization needs of mankind, leading to deterministic algorithms that provide exact solutions to optimization problems. In the past five decades, however, the attention of scientists has shifted from deterministic algorithms to stochastic ones, since the latter have proven more robust and efficient even though they do not guarantee exact solutions. Successfully designed stochastic algorithms include Simulated Annealing, the Genetic Algorithm, Ant Colony Optimization, Particle Swarm Optimization, Bee Colony Optimization, Artificial Bee Colony Optimization, and Firefly Optimization, among others. A critical look at these ‘efficient’ stochastic algorithms reveals the need for improvements regarding effectiveness, the number of parameters used, premature convergence, the ability to search diverse landscapes, and complex implementation strategies. The African Buffalo Optimization (ABO), inspired by the herd management, communication, and successful grazing cultures of African buffalos, is designed to address the observed shortcomings of existing stochastic optimization algorithms. Through several experimental procedures, the ABO successfully solved benchmark optimization problems in mono-modal and multimodal, constrained and unconstrained, separable and non-separable search landscapes with competitive outcomes. Moreover, the ABO algorithm solved over 100 of the 118 benchmark symmetric travelling salesman problems available in TSPLIB95, as well as all the asymmetric ones. Based on the successful experimentation with the novel algorithm, it is safe to conclude that the ABO is a worthy contribution to the scientific literature.
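
    For readers unfamiliar with the ABO, the sketch below shows a buffalo-style herd update in Python, assuming the commonly cited form in which each buffalo keeps an exploitation move m_k and an exploration (location) vector w_k that are pulled toward the herd's best solution and the buffalo's own best. The exact equations and the learning parameters lp1, lp2, and lam are assumptions here, not a restatement of the thesis.

        # Hedged sketch of one African Buffalo Optimization iteration
        # (minimization assumed). Shapes: w, m, bp are (n, d); bg is (d,).
        import numpy as np

        def abo_step(w, m, bp, bg, fitness, lp1=0.6, lp2=0.4, lam=1.0):
            """One herd update: w = locations, m = exploitation moves,
            bp = per-buffalo best locations, bg = herd best location."""
            m = m + lp1 * (bg - w) + lp2 * (bp - w)   # learn from herd and personal bests
            w = (w + m) / lam                         # move the herd
            for k in range(len(w)):                   # refresh personal bests
                if fitness(w[k]) < fitness(bp[k]):
                    bp[k] = w[k]
            bg = np.asarray(min(bp, key=fitness))     # refresh herd best
            return w, m, bp, bg

    In this form the update resembles a particle swarm step with very few control parameters, which is consistent with the simplicity the algorithm is claimed to offer.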

    Simulated Annealing

    The book contains 15 chapters presenting recent contributions from top researchers working with Simulated Annealing (SA). Although it represents only a small sample of the research activity on SA, the book will certainly serve as a valuable tool for researchers interested in getting involved in this multidisciplinary field. In fact, one of its salient features is that it is highly multidisciplinary in terms of application areas, assembling experts from the fields of Biology, Telecommunications, Geology, Electronics, and Medicine.

    Benefits and limits of machine learning for the implicit coordination of SON functions

    Due to the introduction of new network functionalities in next-generation mobile networks, e.g., slicing or multi-antenna systems, as well as the coexistence of multiple radio access technologies, the optimization tasks become extremely complex, increasing the OPEX (OPerational EXpenditures). In order to provide services to the users with competitive Quality of Service (QoS) while keeping operational costs low, the Self-Organizing Network (SON) concept was introduced by the standardization bodies to add an automation layer to network management. Multiple SON functions (SFs) were thus proposed, each optimizing a specific network domain, such as coverage or capacity. The conventional design of SFs conceived each function as a closed-loop controller that optimizes a local objective by tuning specific network parameters, while the relationships among multiple SFs were neglected to some extent. Therefore, many conflicting scenarios appear when multiple SFs are instantiated in a mobile network. Conflicting functions deteriorate the users' QoS and burden the signaling resources of the network, so a coordination layer (which could also be an entity in the network) is expected to reconcile the conflicts between SFs. Nevertheless, because these functions are tightly interleaved, it is difficult to model their interactions and dependencies in closed form. Machine learning is therefore proposed to drive a joint optimization of a global Key Performance Indicator (KPI) while hiding the intricate relationships between functions. We call this approach implicit coordination.
    In the first part of this thesis, we propose a centralized, fully implicit coordination approach based on machine learning (ML) and apply it to the coordination of two well-established SFs: Mobility Robustness Optimization (MRO) and Mobility Load Balancing (MLB). We find that this approach can be applied as long as the coordination problem is decomposed into three functional planes: the controllable, environmental, and utility planes. However, fully implicit coordination comes at a high cost: it requires a large amount of data to train the ML models. To improve the data efficiency of our approach (i.e., to achieve good model performance with less training data), we propose a hybrid approach that mixes ML with closed-form models. With the hybrid approach, we study the conflict between the MLB and Coverage and Capacity Optimization (CCO) functions, and then apply it to the coordination among the MLB, Inter-Cell Interference Coordination (ICIC), and Energy Savings (ES) functions. The hybrid approach finds part of the parameter set optimally in one shot, which makes it suitable for dynamic scenarios in which a fast response is expected from a centralized coordinator. Finally, we present a way to formally include MRO in the hybrid approach and show how the framework can be extended to cover challenging network scenarios such as Ultra-Reliable Low Latency Communications (URLLC).
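
    As an illustration of what implicit coordination can look like in code, the sketch below learns a single model of a global KPI from the controllable plane (the joint SF parameters) and the environmental plane (cell context), then searches the joint parameter space for the setting with the best predicted KPI (the utility plane). The regressor, feature layout, and grid search are assumptions made for illustration, not the thesis architecture.

        # Hedged sketch: learn KPI = f(controllable, environmental), then pick
        # the jointly best controllable setting for the current environment.
        import itertools
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def fit_kpi_model(controllable, environmental, kpi):
            """controllable: (n, c) SF parameter settings (e.g., handover offsets),
            environmental: (n, e) context features (e.g., load, interference),
            kpi: (n,) observed global KPI to maximize."""
            X = np.hstack([controllable, environmental])
            return RandomForestRegressor(n_estimators=200, random_state=0).fit(X, kpi)

        def coordinate(model, candidate_settings, environment):
            """Return the joint parameter vector with the best predicted KPI.
            candidate_settings: one array of candidate values per parameter."""
            grid = np.array(list(itertools.product(*candidate_settings)))
            X = np.hstack([grid, np.tile(environment, (len(grid), 1))])
            return grid[np.argmax(model.predict(X))]

        # Illustrative call with made-up candidate ranges for two parameters:
        # model = fit_kpi_model(controllable, environmental, kpi)
        # best = coordinate(model,
        #                   [np.linspace(-6, 6, 7), np.linspace(0, 10, 6)],
        #                   environment=np.array([0.7, 0.2]))

    The conflicts between SFs never appear explicitly in this formulation; they are absorbed by the learned mapping from the joint parameters to the global KPI, which is the sense in which the coordination is implicit.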

    Solid Waste Collection Optimization: A Literature Review

    The urban population grew by 80 million in 2019. The accelerated movement of people towards urban centres, together with rising per capita waste generation, creates an urgent need to address growing solid waste volumes. The Covid-19 pandemic has made the demand for overhauling and optimizing solid waste management systems higher than ever. Within solid waste management, collection is the most important aspect of the process to optimize, as it accounts for the majority of the financial inputs. This article provides a literature review of the different methodologies and criteria used for solid waste collection optimization. It also examines research trends and areas of future work, along with unexplored and emerging domains. This should help readers identify their areas of interest while gaining a comprehensive understanding of research trends. The study can also be used by waste management firms to analyze and compare different methods, their performance, and their suitability under different environmental conditions.