47 research outputs found

    Generalised Fourier Analysis of Human Chromosome Images


    Hierarchical Clustering of Evolutionary Multiobjective Programming Results to Inform Land Use Planning

    Get PDF
    Multiobjective optimization is a branch of mathematical programming for modelling problems with multiple conflicting objectives. Multiobjective optimization problems can be solved using Pareto optimization techniques, including evolutionary multiobjective optimization algorithms. Many real-world applications involve multiple objective functions and can be addressed within a multiobjective optimization framework. Multiobjective optimization methods allow exploration of the attainable values of the objective functions and the trade-offs between them without soliciting preference information from the decision maker(s) before potential solutions are presented. However, a result set that is sufficiently representative of the possibilities and trade-offs may be too numerous or too complex in shape for decision makers to reasonably consider. Previous approaches to this problem have aimed to reduce the solution set to a smaller representative set. The methodology developed and evaluated in this thesis employs hierarchical cluster analysis to organize the solutions from multiobjective optimization into a tree structure based on their objective function values. Unlike previous approaches, none of the solutions are removed from consideration before being presented to the decision makers. A hierarchical cluster structure is desirable because it presents a nested organization of the plans that can be used in decision making, as shown in an example decision. The resulting dendrogram is a tree of clusters that can be used to see the attainable trade-offs on the Pareto front. It can also be used to interactively reduce the set of solutions under consideration, or to consider several subsets of solutions that lie in different regions of the Pareto front. A land use change problem in an urban fringe area in Southern Ontario, Canada is used as motivation and as an example application to evaluate the proposed methodology.
Relevant literature in planning support systems is reviewed in order to focus the methodology on the application. The multiobjective optimization problem for this application was formulated and analyzed by Roberts (2003); the optimization algorithm used to generate the approximation of the optimal solutions is the Non-dominated Sorting Genetic Algorithm II, NSGA-II, developed by Deb et al. (2002). Future work will link the resulting objective function-based tree to map visualizations of the landscape under consideration. Decision makers will be able to use the tree structure to explore different potential land use plans based on their performance on the objective functions representing the quality of those plans for natural and human uses. This approach is applicable to multiobjective problems with more than three objective functions and discrete decision variables, or to hierarchically clustered Pareto optimal sets. The suitability for reuse with other datasets or other applications is discussed, as well as the potential for inclusion in a decision support system (DSS).
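The core idea of the thesis can be sketched in a few lines: treat each non-dominated plan as a point in objective space and build a cluster tree over those points. The sketch below is purely illustrative and not the thesis code; the objective values are invented, and average linkage is just one reasonable linkage choice.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical objective-function values for 8 non-dominated plans
# (two conflicting objectives, e.g. development benefit vs. habitat loss).
objectives = np.array([
    [0.10, 0.90], [0.15, 0.85], [0.20, 0.80],
    [0.50, 0.50], [0.55, 0.45],
    [0.80, 0.20], [0.85, 0.15], [0.90, 0.10],
])

# Build the cluster tree (dendrogram) from pairwise Euclidean distances
# in objective space; this is the nested organization of the plans.
tree = linkage(objectives, method="average")

# Cutting the tree into 3 clusters yields three regions of the Pareto
# front that decision makers can explore or discard as groups.
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)
```

Cutting at different heights interactively reduces or expands the set of solutions under consideration, which is the decision-support use described above.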

    Graph matching using position coordinates and local features for image analysis

    Finding the correspondences between two images is a crucial problem in the fields of computer vision and pattern recognition. It is relevant to a wide range of purposes, from object recognition applications in the areas of biometrics, document analysis and shape analysis, to applications related to multiple-view geometry such as pose recovery, structure from motion, and localization and mapping. Most existing techniques approach this problem either using local image features or using point-set registration methods (or a mixture of both). In the former, a sparse set of features is first extracted from the images and then characterized in the form of descriptor vectors using local image evidence; features are associated according to the similarity between their descriptors. In the latter, the feature sets are regarded as point sets, which are associated using nonlinear optimization techniques: iterative procedures that estimate correspondence and alignment parameters in alternating steps. Graphs are representations that capture binary relations between the features. Taking binary relations into account in the correspondence problem often leads to the so-called graph matching problem. A number of methods exist in the literature aimed at finding approximate solutions to different instances of the graph matching problem, which in most cases is NP-hard. One part of our work is devoted to investigating the benefits of cross-bin measures for the comparison of local image features. The rest, which forms the main body of this thesis, is devoted to formulating both the image feature association and point-set registration problems as instances of the graph matching problem.
In all cases we propose approximate algorithms to solve these problems and compare them against a number of existing methods belonging to different areas, such as outlier rejectors, point-set registration methods, and other graph matching methods. The experiments show that in most cases the proposed methods outperform the rest. Occasionally the proposed methods either share the best performance with some competing method or obtain slightly worse results; in these cases, the proposed methods usually exhibit lower computational times.
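The descriptor-based association step described above can be illustrated with a much simpler relaxation than graph matching: ignore the binary relations and solve a one-to-one linear assignment on descriptor distances. This toy sketch is not the thesis method (which matches graphs, an NP-hard problem); the descriptors are invented two-dimensional vectors.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy descriptor vectors for features in two images (one row per feature).
desc_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
desc_b = np.array([[0.45, 0.55], [0.9, 0.1], [0.1, 0.9]])

# Cost of pairing each feature in image A with each feature in image B:
# Euclidean distance between their descriptors.
cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)

# One-to-one assignment minimising total descriptor cost
# (Hungarian algorithm); graph matching additionally scores how well
# pairs of matches preserve binary relations, which this ignores.
rows, cols = linear_sum_assignment(cost)
matches = list(zip(rows.tolist(), cols.tolist()))
print(matches)  # each tuple is (feature in A, matched feature in B)
```

Adding a term that rewards consistent pairwise (binary) relations between matched features turns this assignment problem into the quadratic graph matching problem the thesis addresses.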

    Optimization and Mining Methods for Effective Real-Time Embedded Systems

    The Internet of Things (IoT) is the network of interrelated devices or objects, such as self-driving cars, home appliances, smartphones and other embedded computing systems. It combines hardware, software, and network connectivity, enabling data processing using powerful cloud data centers. However, the exponential rise of IoT applications has reshaped our beliefs about cloud computing, and long-lasting certainties about its capabilities have had to be updated. Classical centralized cloud computing is encountering several challenges, such as traffic latency, response time, and data privacy. Thus, the trend in processing the data generated by interconnected IoT embedded devices has shifted towards doing more computation closer to the device, at the edge of the network. This possibility of on-device processing helps reduce latency for critical real-time applications and enables better processing of the massive amounts of data generated by these devices. Succeeding in this transition towards edge computing requires designing high-performance embedded systems by efficiently exploring design alternatives (i.e., efficient design space exploration), optimizing the deployment topology of multi-processor-based real-time embedded systems (i.e., the way the software utilizes the hardware), and using light mining techniques that enable smarter functioning of these devices.
Recent research efforts on embedded systems have led to various automated approaches facilitating their design and the improvement of their functioning. However, existing methods and techniques present several major challenges, which are especially relevant for real-time embedded systems. Four of the main challenges are: (1) the lack of online data mining techniques that can enhance the functioning of embedded computing systems on the fly; (2) the inefficient usage of the computing resources of multi-processor systems when deploying software on them; (3) the pseudo-random exploration of the design space; and (4) the selection of the suitable implementation after performing the optimization process.
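Challenge (4), picking one configuration from a list of Pareto-optimal design alternatives, admits many selection policies. The sketch below shows one simple, hypothetical rule, not the thesis method: normalise the objectives and pick the alternative closest to the ideal point. The latency/energy numbers are invented.

```python
import numpy as np

# Hypothetical Pareto-optimal design alternatives for an embedded
# deployment: (worst-case latency in ms, energy in mJ), both minimised.
pareto = np.array([
    [ 2.0, 90.0],
    [ 5.0, 40.0],
    [ 6.0, 35.0],
    [20.0, 30.0],
])

# Normalise each objective to [0, 1] so neither dominates the distance.
lo, hi = pareto.min(axis=0), pareto.max(axis=0)
norm = (pareto - lo) / (hi - lo)

# Choose the alternative closest to the ideal point (0, 0): a simple
# "knee-like" compromise rule, one of many possible selection policies.
best = int(np.argmin(np.linalg.norm(norm, axis=1)))
print(best)
```

Here the extreme alternatives (best latency or best energy alone) lose to a balanced middle configuration, which is the usual intent of such compromise rules.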

    Intelligent Sensor Networks

    In the last decade, wireless and wired sensor networks have attracted much attention. However, most designs target general sensor network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.

    Efficient case-based reasoning through feature weighting, and its application in protein crystallography

    Data preprocessing is critical for machine learning, data mining, and pattern recognition. In particular, selecting relevant and non-redundant features in high-dimensional data is important to efficiently construct models that accurately describe the data. In this work, I present SLIDER, an algorithm that weights features to reflect relevance in determining similarity between instances. Accurate weighting of features improves the similarity measure, which is useful in learning algorithms like nearest neighbor and case-based reasoning. SLIDER performs a greedy search for optimum weights in an exponentially large space of weight vectors. Exhaustive search being intractable, the algorithm reduces the search space by focusing on pivotal weights at which representative instances are equidistant to truly similar and different instances in Euclidean space. SLIDER then evaluates those weights heuristically, based on effectiveness in properly ranking pre-determined matches of a set of cases, relative to mismatches. I analytically show that by choosing feature weights that minimize the mean rank of matches relative to mismatches, the separation between the distributions of Euclidean distances for matches and mismatches is increased. This leads to a better distance metric, and consequently increases the probability of retrieving true matches from a database. I also discuss how SLIDER is used to improve the efficiency and effectiveness of case retrieval in a case-based reasoning system that automatically interprets electron density maps to determine the three-dimensional structures of proteins. Electron density patterns for regions in a protein are represented by numerical features, which are used in a distance metric to efficiently retrieve matching patterns by searching a large database.
These pre-selected cases are then evaluated by more expensive methods to identify truly good matches; this strategy speeds up the retrieval of matching density regions, thereby enabling fast and accurate protein model-building. This two-phase case retrieval approach is potentially useful in many case-based reasoning systems, especially those with computationally expensive case matching and large case libraries.
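The effect SLIDER exploits, that down-weighting an irrelevant feature improves the rank of the true match under a weighted Euclidean distance, can be shown with a tiny invented case library. This is an illustration of the principle only, not the SLIDER algorithm itself (which searches for the weights rather than being handed them).

```python
import numpy as np

# Toy case library: each row is a case described by 3 numerical
# features; feature 2 is pure noise, so a good weight vector should
# suppress it.
library = np.array([
    [1.30, 1.30, 0.10],
    [1.05, 0.95, 0.95],   # the true match, but noisy in feature 2
    [5.00, 5.00, 0.20],
    [6.00, 4.00, 0.80],
])
query = np.array([1.0, 1.0, 0.1])
TRUE_MATCH = 1

def match_rank(weights):
    """Rank (0 = retrieved first) of the true match under a
    weighted Euclidean distance with the given feature weights."""
    d = np.sqrt((((library - query) ** 2) * weights).sum(axis=1))
    return int(np.argsort(d).tolist().index(TRUE_MATCH))

# Uniform weights let the noisy feature push the true match down the
# ranking; zeroing that feature's weight restores it to the top.
print(match_rank(np.array([1.0, 1.0, 1.0])),
      match_rank(np.array([1.0, 1.0, 0.0])))
```

Minimising this kind of rank over a set of known match/mismatch pairs is exactly the heuristic objective the abstract describes.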

    Vol. 16, No. 1 (Full Issue)


    Assessment and Redesign of the Synoptic Water Quality Monitoring Network in the Great Smoky Mountains National Park

    Get PDF
    The purpose of this study was to assess and redesign an existing 83-site synoptic water quality monitoring network in the Great Smoky Mountains National Park. The study involved a spatial analysis of water quality data (pH, ANC, conductivity, chloride, nitrate, sulfate, sodium, and potassium), watershed characteristics (geology, morphology, and vegetation), and collocated site information to determine which sites were redundant, and a temporal analysis to determine the effectiveness of the current sampling frequency for detecting long-term trends. The spatial analysis employed a simulated annealing algorithm using the variable costs of the network and the results of multivariate data techniques to identify an optimized subset of the existing sampling sites based on a maximization of benefits. A second simulated annealing algorithm was created to identify optimum user-defined monitoring networks of n sites and to validate the results of the first simulated annealing program. The first simulated annealing program identified an optimized network consisting of 67 of the existing 83 sampling sites. The second simulated annealing algorithm bracketed the same 67 sites and also provided a basis for an ordered discontinuation of sampling sites by identifying the best ten-site monitoring network through the best 70-site monitoring network. The temporal analysis employed the "effective" sample method, Sen's slope estimator, the Mann-Kendall test for trend, and a boxplot analysis to determine the effectiveness and the power of the current sampling frequency to detect long-term trends. The results showed that the current sampling frequency of four samples per year yields low statistical power for short historical records. However, increasing the sampling frequency to more than 12 samples per year creates serial dependence between samples.
By combining the results of the spatial and temporal analyses, a new network is proposed by dividing the network into primary, secondary, and tertiary sites with sampling frequencies of six and 12 samples per year. Seventeen new sites are also proposed to collect additional data above 3000 feet MSL, because the existing number of sampling sites is not proportional to park area in certain elevation ranges.
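The network-redesign step above is a budget-constrained subset selection, and simulated annealing is a natural fit. The sketch below is a generic, hypothetical toy (12 sites, invented per-site benefits and costs), not the study's algorithm or data; it only illustrates the accept-worse-moves-with-cooling mechanic.

```python
import math
import random

random.seed(7)

# Hypothetical per-site information benefit and sampling cost.
benefit = [9, 8, 8, 7, 6, 6, 5, 4, 3, 3, 2, 1]
cost    = [3, 4, 2, 3, 2, 4, 1, 2, 1, 3, 2, 1]
BUDGET = 12  # total sampling budget

def score(subset):
    """Total benefit of a site subset; -inf if it exceeds the budget."""
    if sum(cost[i] for i in subset) > BUDGET:
        return float("-inf")
    return sum(benefit[i] for i in subset)

# Simulated annealing: toggle one site in/out per step, accept worse
# moves with probability exp(delta / T), and cool T geometrically.
current = set(range(4))                     # a feasible starting network
best, best_score = set(current), score(current)
T = 5.0
for _ in range(2000):
    cand = set(current)
    cand.symmetric_difference_update({random.randrange(len(benefit))})
    delta = score(cand) - score(current)
    if delta >= 0 or random.random() < math.exp(delta / T):
        current = cand
        if score(current) > best_score:
            best, best_score = set(current), score(current)
    T *= 0.995

print(sorted(best), best_score)
```

Running the annealer repeatedly with the subset size pinned to n = 10, 11, ..., 70 would mirror the study's second program, which ranked sites for ordered discontinuation.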