8 research outputs found

    Temporal and Spatial Features of Single-Trial EEG for Brain-Computer Interface

    Get PDF
    Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device, bypassing the conventional motor output pathways of nerves and muscles. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. With respect to the topographic patterns of brain rhythm modulations, the common spatial patterns (CSP) algorithm has proven very useful for producing subject-specific, discriminative spatial filters; however, it does not consider the temporal structure of event-related potentials, which may be very important for single-trial EEG classification. In this paper, we propose a new feature-extraction framework for the classification of hand-movement imagery EEG. Computer simulations on real experimental data indicate that the independent residual analysis (IRA) method can provide efficient temporal features. Combining IRA features with the CSP method, we obtain optimal spatial and temporal features with which we achieve the best classification rate. The high classification rate indicates that the proposed method is promising for an EEG-based brain-computer interface.
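    For illustration, the sketch below shows the standard CSP step this abstract relies on: spatial filters obtained from the two classes' trial covariance matrices via a generalised eigenvalue problem, followed by log-variance features. It is a minimal NumPy/SciPy sketch of textbook CSP, not the paper's code; the IRA temporal features are not shown, and all function names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Compute CSP spatial filters from two sets of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2*n_pairs, n_channels) matrix of spatial filters.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # normalise each trial covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenvalue problem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)               # ascending eigenvalues
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T

def csp_logvar_features(trials, filters):
    """Project each trial through the CSP filters and take log-variance features."""
    feats = []
    for x in trials:
        s = filters @ x                    # (2*n_pairs, n_samples)
        v = np.var(s, axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)
```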

    Using biased support vector machine in image retrieval with self-organizing map.

    Get PDF
    Chan Chi Hang. Thesis submitted in August 2004. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 105-114). Abstracts in English and Chinese.
    Contents: Abstract (p.i); Acknowledgement (p.iv)
    Chapter 1 Introduction (p.1): Problem Statement; Major Contributions; Publication List; Thesis Organization
    Chapter 2 Background Survey (p.9): Relevance Feedback Framework (Relevance Feedback Types; Data Distribution; Training Set Size; Inter-Query Learning and Intra-Query Learning); History of Relevance Feedback Techniques; Relevance Feedback Approaches (Vector Space Model; Ad-hoc Re-weighting; Distance Optimization Approach; Probabilistic Model; Bayesian Approach; Density Estimation Approach; Support Vector Machine); Presentation Set Selection (Most-probable strategy; Most-informative strategy)
    Chapter 3 Biased Support Vector Machine for Content-Based Image Retrieval (p.57): Motivation; Background (Regular Support Vector Machine; One-class Support Vector Machine); Biased Support Vector Machine; Interpretation of parameters in BSVM; Soft Label Biased Support Vector Machine; Interpretation of parameters in Soft Label BSVM; Relevance Feedback Using Biased Support Vector Machine (Advantages of BSVM in Relevance Feedback; Relevance Feedback Algorithm By BSVM); Experiments (Synthetic Dataset; Real-World Dataset; Experimental Results); Conclusion
    Chapter 4 Self-Organizing Map-based Inter-Query Learning (p.88): Motivation; Algorithm (Initialization and Replication of SOM; SOM Training for Inter-Query Learning; Incorporate with Intra-Query Learning); Experiments (Synthetic Dataset; Real-World Dataset; Experimental Results); Conclusion
    Chapter 5 Conclusion (p.102)
    Bibliography (p.10)
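    As a rough illustration of the relevance-feedback loop this thesis builds on (Chapter 3.7), the sketch below runs one feedback round with scikit-learn's weighted soft-margin SVM as a stand-in for the thesis's Biased SVM; the BSVM and soft-label formulations and the SOM-based inter-query learning of Chapter 4 are not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def feedback_round(features, relevant_idx, irrelevant_idx, top_k=20):
    """One intra-query feedback round: fit on the labelled images, rank the rest."""
    labelled = list(relevant_idx) + list(irrelevant_idx)
    X = features[labelled]
    y = np.array([1] * len(relevant_idx) + [-1] * len(irrelevant_idx))
    # Weight the (usually scarce) relevant class more heavily, loosely mimicking
    # the bias toward positive examples that motivates BSVM.
    clf = SVC(kernel="rbf", C=10.0, class_weight={1: 5.0, -1: 1.0})
    clf.fit(X, y)
    scores = clf.decision_function(features)          # higher = more relevant
    ranking = [i for i in np.argsort(-scores) if i not in set(labelled)]
    return ranking[:top_k]                            # next presentation set
```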

    Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond

    Full text link
    This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model must reflect the true proportions of outcomes in the set to which those probabilities are assigned; otherwise, the model is useless in practice. When this holds, we say that a model is perfectly calibrated. This thesis explores three ways to provide better-calibrated models. First, it shows how to implicitly calibrate models that are decalibrated by data augmentation techniques, introducing a cost function that resolves this decalibration, taking as its starting point ideas derived from decision making with Bayes' rule. Second, it shows how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as a prior distribution in a Bayesian inference problem.
    Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
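    As context for what a post-calibration stage does, the sketch below fits a single temperature on held-out logits and measures the expected calibration error. This is standard temperature scaling, used purely as an illustration; it is not the thesis's Bayesian-neural-network calibrator or its proposed cost function, and the array names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # Negative log-likelihood of the temperature-scaled probabilities.
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    """Find the temperature T > 0 that minimises validation NLL."""
    res = minimize_scalar(nll, bounds=(0.05, 20.0), args=(logits, labels),
                          method="bounded")
    return res.x

def expected_calibration_error(probs, labels, n_bins=15):
    """Average gap between confidence and accuracy over confidence bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs((pred[mask] == labels[mask]).mean()
                                     - conf[mask].mean())
    return ece
```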

    Técnicas de optimización paralelas : esquema híbrido basado en hiperheurísticas y computación evolutiva

    Get PDF
    Optimisation is the process of selecting the best element from a set of available alternatives. Solutions are termed good or bad depending on their performance for a set of objectives. Several algorithms have been defined in the literature to deal with this kind of problem; metaheuristics are among the most prominent techniques. They are a class of modern heuristics whose main goal is to combine heuristics in a problem-independent way with the aim of improving their performance. Metaheuristics have reported high-quality solutions in several fields. One reason for the good behaviour of metaheuristics is that they are defined in general terms; therefore, metaheuristic algorithms can be adapted to fit the needs of most real-life optimisation problems. However, such adaptation is a hard task, and it requires a high computational and user effort. There are two main ways of reducing the effort associated with the usage of metaheuristics. First, the application of hyperheuristics and parameter-setting strategies facilitates the process of tackling novel optimisation problems and instances. A hyperheuristic can be viewed as a heuristic that iteratively chooses between a set of given low-level metaheuristics in order to solve an optimisation problem. By using hyperheuristics, metaheuristic practitioners do not need to manually test a large number of metaheuristics and parameterisations to discover the proper algorithms to use. Instead, they can define the set of configurations that must be tested, and the model tries to automatically detect the best-behaved ones in order to grant more resources to them. Second, the usage of parallel environments might speed up the process of automatic testing, so high-quality solutions might be achieved in less time. This research focuses on the design of novel hyperheuristics and defines a set of models to allow their usage in parallel environments. Different hyperheuristics for controlling mono-objective and multi-objective multi-point optimisation strategies have been defined. Moreover, a set of novel multiobjectivisation techniques has been proposed. In addition, with the aim of facilitating the usage of multiobjectivisation, the performance of models that combine multiobjectivisation and hyperheuristics has been studied. The proper performance of the proposed techniques has been validated with a set of well-known benchmark optimisation problems. In addition, several practical and complex optimisation problems have been addressed. Some of the analysed problems arise in the communication field, and a packing problem proposed in a competition has also been faced. The proposals for such problems have not been limited to problem-independent schemes; instead, new metaheuristics, operators and local search strategies have been defined. Such schemes have been integrated with the designed parallel hyperheuristics with the aim of accelerating the achievement of high-quality solutions and of facilitating their usage. In several complex optimisation problems, the current best-known solutions have been found with the methods defined in this dissertation.
    Optimisation problems are those in which one must choose the most suitable solution from a set of alternatives. A large number of algorithms currently exist to address this type of problem; among them, metaheuristics are one of the most widely used techniques. The use of metaheuristics has made it possible to solve a large number of problems in different fields. This is because metaheuristics are general techniques, and therefore have a large number of elements or parameters that can be adapted when facing different optimisation problems. However, choosing these parameters is not straightforward, so it generally requires a great computational effort and a great effort from the user of these techniques. Several approaches mitigate this drawback. On the one hand, there are mechanisms that allow the values of these parameters to be selected automatically: the simplest techniques use fixed values throughout the execution, while more advanced techniques, such as hyperheuristics, adapt the values used to the needs of each optimisation phase; moreover, these techniques allow several metaheuristics to be used simultaneously. On the other hand, the use of parallel techniques speeds up the automatic testing process, reducing the time needed to obtain high-quality solutions. The main objective of this thesis has been to design new hyperheuristics and integrate them into the island-based parallel model. These techniques have been used to control the parameters of several evolutionary metaheuristics. Several hyperheuristics have been defined that address both mono-objective and multi-objective problems. In addition, a set of multiobjectivisations has been defined, which in turn have benefited from the proposed hyperheuristics. The designed techniques have been validated with some of the most widely used benchmark problems. Furthermore, a set of practical optimisation problems has been addressed, specifically three problems arising in the field of telecommunications and a packing problem. For these problems, in addition to using the hyperheuristics and multiobjectivisations, new algorithms, operators, and local search strategies have been defined. In several of the problems, the combined use of all these techniques has made it possible to obtain the best solutions found to date.
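    To make the hyperheuristic idea concrete, the sketch below iteratively picks among a set of low-level heuristics, scores each by the improvement it yields, and grants more calls to the better-scoring ones. It is a generic epsilon-greedy selection hyperheuristic shown for illustration only, not the dissertation's island-based parallel models, and every name in it is hypothetical.

```python
import random

def hyperheuristic(initial, evaluate, heuristics, budget=1000, eps=0.2):
    """heuristics: list of functions mapping a solution to a new candidate solution."""
    scores = [1.0] * len(heuristics)        # optimistic initial scores
    best, best_cost = initial, evaluate(initial)
    current, current_cost = best, best_cost
    for _ in range(budget):
        if random.random() < eps:           # keep exploring all heuristics
            i = random.randrange(len(heuristics))
        else:                                # exploit the best-behaved ones
            i = max(range(len(heuristics)), key=lambda k: scores[k])
        candidate = heuristics[i](current)
        cost = evaluate(candidate)
        improvement = current_cost - cost
        scores[i] = 0.9 * scores[i] + 0.1 * max(improvement, 0.0)
        if cost <= current_cost:
            current, current_cost = candidate, cost
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```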

    Socio-Environmental Vulnerability Assessment for Sustainable Management

    Get PDF
    This Special Issue explores the cross-disciplinary approaches, methodologies, and applications of socio-environmental vulnerability assessment that can be incorporated into sustainable management. The volume comprises 20 different points of view, covering environmental protection and development, urban planning, geography, public policymaking, participation processes, and other cross-disciplinary fields. The articles collected in this volume come from all over the world and present the current state of the world's environmental and social systems at local, regional, and national levels. New approaches and analytical tools for the assessment of environmental and social systems are studied. The practical implementation of sustainable development, as well as progressive environmental and development policymaking, is discussed. Finally, the authors reflect on the prospects of social–environmental systems in a rapidly changing world.

    Contribució als algoritmes de construcció de models del món per a la implementació en Arquitectures Àgils de Fabricació

    Get PDF
    The present work is a contribution to the construction of world models for implementation in 'Agile Manufacturing Architectures', aiming to take control programs for manufacturing systems a step further, from being mere task executors to being entities with 'intelligence' that allows them to decide for themselves the best strategy for approaching a given task. In other words, the input information to the production system should stop being a deterministic sequence of commands and become a specification of initial and final states. The work builds on previous results of Gomà and Vivancos for building logical models of simple systems and states some corollaries concerning their operation. It then develops new algorithms based on the two main stages of World Exploration and Task Execution, initially only for worlds populated by binary variables and later with the introduction of continuous variables. These algorithms are innovative in that they introduce the possibility of applying logical prejudices about the world, and they can apply different strategies to build world models. To evaluate the applicability of these algorithms, an experimentation platform is programmed in C+ for particularised systems, together with a specification, according to their variables, of how these algorithms should be implemented in different types of manufacturing equipment (machine tools for subtractive methods and additive manufacturing systems) as well as in complex systems such as 'Agile Manufacturing Architectures', which have been studied and materialized in works in the context of the present thesis. In recent years, the paradigm of manufacturing has changed. China and Asia have become the factory of the world, and developed countries have had to begin aggressive reindustrialization campaigns to relocate the lost industry. In some cases, particularly relevant sectors, such as the biomedical sector, toys, and consumer products, have been presented as a golden opportunity to achieve promising results and are the subject of an in-depth analysis in this work. Meanwhile, during these years, research and development of manufacturing systems have not stopped; in fact, a new community called 'Makers' has emerged, built upon very well trained users motivated by non-profit aspirations who are changing the rules of the game. Soon, personal digital fabrication and the virtual generation and sharing of content will change the way products are produced (and therefore conceived, transported, used, and traded), making possible a movement that is being considered the 'Democratization of production'. The algorithms presented are intended to maintain a high level of abstraction: 'action' and 'detection' are internally treated as entirely independent processes, so the system must necessarily learn through an internal logical process.
    Moreover, beyond the scope of the contribution of this thesis, the aim of this work is to provide a functional specification that can be made available to the community and may serve as a seed for the development of intelligent manufacturing paradigms (iCAM) in truly Agile Manufacturing Architectures.
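    As a purely hypothetical illustration of the two stages named above, the sketch below explores a world of binary state variables by acting and recording transitions, then searches the learned model to go from an initial state to a final state. It is not the thesis's algorithm (logical prejudices, the corollaries, and continuous variables are not modelled), and all names are invented.

```python
from collections import deque
import random

def explore(world, actions, start, steps=500):
    """world(state, action) -> next binary-state tuple (the real system).
    Returns a learned transition table {(state, action): next_state}."""
    model, state = {}, start
    for _ in range(steps):
        a = random.choice(actions)
        nxt = world(state, a)
        model[(state, a)] = nxt
        state = nxt
    return model

def plan(model, initial, final):
    """Breadth-first search over the learned model from initial to final state."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == final:
            return path                       # sequence of actions to execute
        for (s, a), nxt in model.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return None                               # goal not reachable in the learned model
```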

    City, politics, economics, culture…

    Get PDF
    The paper emphasizes the complexity of the city as a phenomenon of radicalized modernity and the need for its multidimensional study. Globalization and transnational processes are positioning the city as a complex intersection that is gaining in economic and political significance, an aspect well investigated from the political economy perspective, which is largely immersed in globalization discourse. Faced with the problems and the impact of deregulation, decentralization, and privatization, the city appears as a vital place of economic, political, and cultural development. Understanding these issues requires contextualization. By avoiding excessive generalizations and overly rigid dichotomies, local/urban politics shows some of the major potentials of local/global network connections, which, in combination with the multitude of actors in urban politics, can bring innovative changes. From that point of view, questions about the role of culture in such a context arise: cultural elements are increasingly emerging as a tool of political economy, but also as a representation of resistance and, most importantly, as part of locals' development projects. Finally, it seems that all problems concerning the quality of life in the city (or the claims for the right to the city) end up in cultural conflict, with the urban as the place of its appearance or its clearest expression; in that respect, the city appears as the place of the transformation of politics.