19 research outputs found

    Sample selection via clustering to construct support vector-like classifiers

    This paper explores the construction of RBF classifiers that, somewhat like support vector machines, use a reduced number of samples as centroids, selected in a direct way. Because sample selection is a hard computational problem, the selection is performed after a preliminary vector quantization; in this way, other similar machines are also obtained, using centroids selected from those learned in a supervised manner. Several designs for these machines are considered, in particular with respect to sample selection, as well as different criteria to train them. Simulation results on well-known classification problems show very good performance of the corresponding designs, improving on support vector machines while substantially reducing the number of units. This shows that our interest in selecting samples (or centroids) in an efficient manner is justified. Many new research avenues emerge from these experiments and discussions, as suggested in our conclusions.
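    A minimal sketch of the general idea, assuming plain k-means as the vector quantizer and a least-squares linear output layer; the paper's actual selection procedures and training criteria differ:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns k centroids learned from X."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C

def rbf_features(X, C, gamma=1.0):
    """Gaussian RBF activations of each sample with respect to the centroids."""
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Toy two-class problem: two Gaussian clouds
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

C = kmeans(X, k=6)                            # centroids stand in for selected samples
Phi = rbf_features(X, C, gamma=2.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output layer fit by least squares
acc = (np.sign(Phi @ w) == y).mean()
print(acc)
```

    With only 6 centroids the machine separates this toy problem well, illustrating why a small, well-chosen set of centroids can rival a much larger support set.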

    Improving wordspotting performance with limited training data

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 149-155). By Eric I-Chao Chang, Ph.D.

    Evolutionary Computation 2020

    Intelligent optimization uses mechanisms of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. They have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.
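    As an illustration of one of the algorithm families named above, a minimal differential evolution sketch (the classic DE/rand/1/bin variant, here minimizing the sphere function; population size, F, and CR are illustrative defaults):

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin sketch for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: base vector plus scaled difference of two others
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one gene from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            # greedy selection
            ft = f(trial)
            if ft <= fit[i]:
                X[i], fit[i] = trial, ft
    best = fit.argmin()
    return X[best], fit[best]

sphere = lambda x: float((x ** 2).sum())
x, fx = differential_evolution(sphere, [(-5, 5)] * 3)
print(fx)  # close to 0
```

    The greedy selection step is what makes DE robust: a trial vector only replaces its parent when it is at least as good, so the population's best value never worsens.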

    Blancos blandos enfatizados para clasificación máquina (Emphasized soft targets for machine classification)

    The most common way to train classification machines is to minimize, by means of an analytical search, a cost function that depends on target and output values. This rules out hard outputs, which are not differentiable. Furthermore, the discrete character of the classification targets does not permit good designs with linear outputs. The importance of introducing the classical sigmoidal activations emerges from these facts; but these activations cannot be considered "natural" for every classification problem. On the other hand, weighting output errors is a well-known technique that pays more attention to those examples that are most relevant for good learning. This relevance is usually an increasing function of the corresponding error and of each sample's proximity to the decision border, although in a previously unknown, problem-dependent manner. These facts suggest the possibility of constructing and applying Emphasized Soft Targets (ESTs): modified target values, in principle different for different examples, defined according to the relevance of each labeled sample for the learning process. In this way, the nonlinear activation can be dispensed with, because the ESTs are "continuous", while the emphasis simultaneously helps to obtain good performance. We remark that suppressing the nonlinear activation permits classifier designs based on regression models, such as the direct form of Gaussian Mixture Models (GMMs) and, above all, the so-called Gaussian Processes (GPs), a generalized version of the Wiener filter that offers many working and interpretation advantages over alternative nonlinear regression methods.
    This Thesis explores the use of ESTs to solve classification problems, considering both traditional machines (Multi-Layer Perceptrons (MLPs) among the discriminative family, GMMs among the generative schemes) and GPs. A powerful form of EST is introduced and applied; it consists of a local convex combination of the original target and the output of an auxiliary classifier, or "guide", where the functional combination parameter depends on each sample's error and its proximity to the border according to the auxiliary classifier. Extensive experimental results support the hypothesis that ESTs frequently allow better (and very high) performance, although at the cost of a significant increase in training computational effort, due to the need to establish the EST parameter values by Cross Validation (CV). Simplified versions of the proposed EST forms offer intermediate compromises. Sustained reflection on the work and its results revealed an immediate functional similarity between error weighting and ESTs, as well as an interpretation of the role of nonlinear activations from the perspective of controlling the degree of attention paid to different examples. This opens the way to converting sample weighting methods into ESTs and vice versa in a series of situations that promise advantages (better performance or simpler architectures). This perspective on the role of nonlinear activations also gives guidance for selecting their forms. All these possibilities are considered and discussed in Chapter 6, and the most promising of them are included as suggestions for new research lines, along with other opportunities and a summary of contributions, in the final chapter.
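    The convex-combination idea described above can be sketched as follows. The specific mixing function (the alpha and beta parameters and the Gaussian proximity term) is a hypothetical illustration, not the thesis's exact parameterization:

```python
import numpy as np

def emphasized_soft_targets(t, o_guide, alpha=0.5, beta=2.0):
    """
    Illustrative EST form: each hard target t in {-1, +1} is replaced by a
    convex combination of t and the guide classifier's output o_guide.
    The mixing weight lam grows with the guide's error on the sample and
    with the sample's proximity to the decision border (|o_guide| near 0).
    alpha and beta are hypothetical shape parameters.
    """
    err = np.abs(t - o_guide)                 # guide error per sample
    proximity = np.exp(-beta * o_guide ** 2)  # close to 1 near the border
    lam = np.clip(alpha * err * proximity, 0.0, 1.0)
    return (1.0 - lam) * t + lam * o_guide    # local convex combination

t = np.array([1.0, -1.0, 1.0])
o_guide = np.array([0.9, 0.1, -0.2])  # confident / near border / wrong guide outputs
est = emphasized_soft_targets(t, o_guide)
print(est)
```

    A confidently and correctly classified sample keeps a target close to its hard label, while ambiguous or misclassified samples receive softened targets pulled toward the guide's output, which is what lets the emphasis regulate how much attention training pays to each example.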

    Task Allocation in Foraging Robot Swarms: The Role of Information Sharing

    Autonomous task allocation is a desirable feature of robot swarms that collect and deliver items in scenarios where congestion, caused by accumulated items or robots, can temporarily interfere with swarm behaviour. In such settings, self-regulation of the workforce can prevent unnecessary energy consumption. We explore two types of self-regulation: non-social, where robots become idle upon experiencing congestion, and social, where robots broadcast information about congestion to their teammates in order to socially inhibit foraging. We show that while both types of self-regulation can lead to improved energy efficiency and increase the amount of resource collected, the speed with which information about congestion flows through a swarm affects the scalability of these algorithms.
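    A toy sketch of the two self-regulation modes under a hypothetical congestion model; the threshold, broadcast reach, and recovery rule are illustrative assumptions, not the paper's simulator:

```python
import random

def simulate(n_robots=50, steps=100, social=True, congestion_threshold=5, seed=0):
    """
    Toy model: each step, the number of foraging robots drives congestion.
    Non-social: one robot that experiences congestion goes idle.
    Social: that robot also broadcasts, inhibiting a few teammates too.
    Returns total energy spent (one unit per active robot per step).
    """
    rng = random.Random(seed)
    active = [True] * n_robots
    energy_used = 0
    for _ in range(steps):
        foragers = sum(active)
        energy_used += foragers                  # active robots spend energy
        if foragers > congestion_threshold:
            # one robot experiences the congestion directly...
            active[rng.randrange(n_robots)] = False
            if social:
                # ...and broadcasts, so some teammates are inhibited as well
                for i in rng.sample(range(n_robots), k=3):
                    active[i] = False
        elif foragers < congestion_threshold:
            # spare capacity: an idle robot may resume foraging
            idle = [i for i, a in enumerate(active) if not a]
            if idle:
                active[rng.choice(idle)] = True
    return energy_used

print(simulate(social=True) < simulate(social=False))
```

    Because the broadcast removes several foragers per congested step instead of one, the social swarm sheds its excess workforce faster and spends less energy overall, matching the intuition that faster information flow improves self-regulation.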

    Water rights and related water supply issues

    Presented during the USCID water management conference held on October 13-16, 2004 in Salt Lake City, Utah. The theme of the conference was "Water rights and related water supply issues." Includes bibliographical references. Proceedings sponsored by the U.S. Department of the Interior, Central Utah Project Completion Act Office and the U.S. Committee on Irrigation and Drainage. Consensus building as a primary tool to resolve water supply conflicts -- Administration of Colorado River allocations: the Law of the River and the Colorado River Water Delivery Agreement of 2003 -- Irrigation management in Afghanistan: the tradition of Mirabs -- Institutional reforms in irrigation sector of Pakistan: an approach towards integrated water resource management -- On-line and real-time water right allocation in Utah's Sevier River basin -- Improving equity of water distribution: the challenge for farmer organizations in Sindh, Pakistan -- Impacts from transboundary water rights violations in South Asia -- Impacts of water conservation and Endangered Species Act on large water project planning, Utah Lake Drainage Basin Water Delivery System, Bonneville Unit of the Central Utah Project -- Economic importance and environmental challenges of the Awash River basin to Ethiopia -- Accomplishing the impossible: overcoming obstacles of a combined irrigation project -- Estimating actual evapotranspiration without land use classification -- Improving water management in irrigated agriculture -- Beneficial uses of treated drainage water -- Comparative assessment of risk mitigation options for irrigated agriculture -- A multi-variable approach for the command of Canal de Provence Aix Nord Water Supply Subsystem -- Hierarchical Bayesian Analysis and Statistical Learning Theory II: water management application -- Soil moisture data collection and water supply forecasting -- Development and implementation of a farm water conservation program within the Coachella Valley Water District, California -- Concepts of ground water recharge and well augmentation in northeastern Colorado -- Water banking in Colorado: an experiment in trouble? -- Estimating conservable water in the Klamath Irrigation Project -- Socio-economic impacts of land retirement in Westlands Water District -- EPDM rubber lining system chosen to save valuable irrigation water -- A user-centered approach to develop decision support systems for estimating pumping and augmentation needs in Colorado's South Platte basin -- Utah's Tri-County Automation Project -- Using HEC-RAS to model canal systems -- Potential water and energy conservation and improved flexibility for water users in the Oasis area of the Coachella Valley Water District, California