
    Graphonomics and your Brain on Art, Creativity and Innovation : Proceedings of the 19th International Graphonomics Conference (IGS 2019 – Your Brain on Art)

    “Graphonomics and your brain on art, creativity and innovation”: a single-track, international forum for discussion of recent advances at the intersection of the creative arts, neuroscience, engineering, media, technology, industry, education, design, forensics, and medicine. The contributions reviewed the state of the art, identified challenges and opportunities, and created a roadmap for the field of graphonomics and your brain on art. The topics addressed include: integrative strategies for understanding neural, affective and cognitive systems in realistic, complex environments; neural and behavioral individuality and variation; neuroaesthetics (the use of neuroscience to explain and understand aesthetic experiences at the neurological level); creativity and innovation; neuroengineering and brain-inspired art, creative concepts and wearable mobile brain-body imaging (MoBI) designs; creative art therapy; informal learning; education; and forensics.

    Dynamic Honeypot Configuration for Programmable Logic Controller Emulation

    Attacks on industrial control systems and critical infrastructure are on the rise. Important systems and devices like programmable logic controllers are at risk due to outdated technology and ad hoc security measures. To mitigate the threat, honeypots are deployed to gather data on malicious intrusions and exploitation techniques. While virtual honeypots mitigate the unreasonable cost of hardware-replicated honeypots, these systems often suffer from a lack of authenticity due to proprietary hardware and network protocols. In addition, virtual honeynets utilizing a proxy to a live device suffer from performance bottlenecks and limited scalability. This research develops an enhanced, application layer emulator capable of alleviating honeynet scalability and honeypot inauthenticity limitations. The proposed emulator combines protocol-agnostic replay with dynamic updating via a proxy. The result is a software tool which can be readily integrated into existing honeypot frameworks for improved performance. The proposed emulator is evaluated on traffic reduction on the back-end proxy device, application layer task accuracy, and byte-level traffic accuracy. Experiments show the emulator is able to successfully reduce the load on the proxy device by up to 98% for some protocols. The emulator also provides equal or greater accuracy over a design which does not use a proxy. At the byte level, traffic variation is statistically equivalent while task success rates increase by 14% to 90% depending on the protocol. Finally, of the proposed proxy synchronization algorithms, templock and its minimal variant are found to provide the best overall performance
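
    The core mechanism sketched in this abstract, protocol-agnostic replay combined with dynamic updating through a back-end proxy, can be illustrated with a short example. The Python fragment below is a hypothetical sketch only, not the thesis implementation: the class and method names are invented, and a real PLC honeypot would also need protocol framing, session handling and cache aging.

# Hypothetical sketch (illustration only): a minimal cache-then-proxy responder.
# Requests seen before are replayed locally as raw bytes (protocol-agnostic);
# unseen requests fall through to the live device (the proxy path) and the
# answer is cached, so back-end traffic shrinks as the cache warms up.
import socket

class ReplayEmulator:
    def __init__(self, plc_host, plc_port):
        self.plc_addr = (plc_host, plc_port)
        self.cache = {}  # raw request bytes -> raw response bytes

    def handle(self, request: bytes) -> bytes:
        if request in self.cache:          # replay path: no proxy traffic
            return self.cache[request]
        response = self._forward(request)  # proxy path: query the live PLC
        self.cache[request] = response     # dynamic update of the replay table
        return response

    def _forward(self, request: bytes) -> bytes:
        with socket.create_connection(self.plc_addr, timeout=2.0) as s:
            s.sendall(request)
            return s.recv(4096)

    Under this pattern, each distinct request reaches the back-end device at most once, which is the intuition behind the large proxy-traffic reductions reported above.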

    Investigating the Common Authorship of Signatures by Off-line Automatic Signature Verification without the Use of Reference Signatures

    In automatic signature verification, questioned specimens are usually compared with reference signatures. In writer-dependent schemes, a number of reference signatures are required to build the individual signer model, while a writer-independent system requires a set of reference signatures from several signers to develop the system model. This paper addresses the problem of automatic signature verification when no reference signatures are available. The scenario we explore consists of a set of signatures that could have been signed by the same author or by multiple signers. We therefore discuss three methods that automatically estimate the common authorship of a set of off-line signatures. The first method builds a score-similarity matrix with the assistance of duplicated signatures; the second uses a feature-distance matrix for each pair of signatures; and the third introduces a pre-classification based on the complexity of each signature. Publicly available signatures were used in the experiments, which gave encouraging results. As a baseline for the performance of our approaches, we carried out a visual Turing Test in which forensic and non-forensic human volunteers performed the same task and did less well than the automatic schemes.
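
    To make the second approach concrete, the sketch below shows one plausible, hypothetical reading of a feature-distance formulation: pairwise distances between signature descriptors are aggregated, and a set is judged to share a single author when the aggregate falls below a calibrated threshold. The function names, the Euclidean metric and the placeholder threshold are assumptions for illustration, not the paper's actual design.

# Illustrative sketch only: decide common authorship from a pairwise
# feature-distance matrix. Assumes at least two signatures per set.
import numpy as np

def common_authorship_score(features):
    """features: (n_signatures, n_dims) array of off-line signature descriptors."""
    X = np.asarray(features, dtype=float)
    diffs = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))      # pairwise Euclidean distances
    iu = np.triu_indices(len(X), k=1)
    return dist[iu].mean()                    # lower => more likely one writer

def same_writer(features, threshold=1.0):
    # the threshold is a placeholder; in practice it is calibrated on labelled data
    return common_authorship_score(features) < threshold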

    Improving the Generalizability of Speech Emotion Recognition: Methods for Handling Data and Label Variability

    Emotion is an essential component in our interaction with others. It transmits information that helps us interpret the content of what others say. Therefore, detecting emotion from speech is an important step towards enabling machine understanding of human behaviors and intentions. Researchers have demonstrated the potential of emotion recognition in areas such as interactive systems in smart homes and mobile devices, computer games, and computational medical assistants. However, emotion communication is variable: individuals may express emotion in a manner that is uniquely their own; different speech content and environments may shape how emotion is expressed and recorded; individuals may perceive emotional messages differently. Practically, this variability is reflected in both the audio-visual data and the labels used to create speech emotion recognition (SER) systems. SER systems must be robust and generalizable to handle the variability effectively. The focus of this dissertation is on the development of speech emotion recognition systems that handle variability in emotion communications. We break the dissertation into three parts, according to the type of variability we address: (I) in the data, (II) in the labels, and (III) in both the data and the labels.

    Part I: The first part of this dissertation focuses on handling variability present in data. We approximate variations in environmental properties and expression styles by corpus and gender of the speakers. We find that training on multiple corpora and controlling for the variability in gender and corpus using multi-task learning result in more generalizable models, compared to the traditional single-task models that do not take corpus and gender variability into account. Another source of variability present in the recordings used in SER is the phonetic modulation of acoustics. On the other hand, phonemes also provide information about the emotion expressed in speech content. We discover that we can make more accurate predictions of emotion by explicitly considering both roles of phonemes.

    Part II: The second part of this dissertation addresses variability present in emotion labels, including the differences between emotion expression and perception, and the variations in emotion perception. We discover that it is beneficial to jointly model both the perception of others and how one perceives one’s own expression, compared to focusing on either one. Further, we show that the variability in emotion perception is a modelable signal and can be captured using probability distributions that describe how groups of evaluators perceive emotional messages.

    Part III: The last part of this dissertation presents methods that handle variability in both data and labels. We reduce the data variability due to non-emotional factors using deep metric learning and model the variability in emotion perception using soft labels. We propose a family of loss functions and show that by pairing examples that potentially vary in expression styles and lexical content and preserving the real-valued emotional similarity between them, we develop systems that generalize better across datasets and are more robust to over-training. These works demonstrate the importance of considering data and label variability in the creation of robust and generalizable emotion recognition systems.

    We conclude this dissertation with the following future directions: (1) the development of real-time SER systems; (2) the personalization of general SER systems.

    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147639/1/didizbq_1.pd
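
    As a rough illustration of the multi-task idea in Part I (controlling for corpus and gender while predicting emotion), the sketch below shows one plausible PyTorch layout. The architecture, feature dimensionality and loss weights are invented for the example and are not taken from the dissertation.

# Hypothetical sketch: a shared acoustic encoder with an emotion head plus
# auxiliary corpus and gender heads, so non-emotional variability is modeled
# explicitly instead of being ignored. All sizes are placeholders.
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    def __init__(self, n_features=88, n_emotions=4, n_corpora=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(64, n_emotions)  # main task
        self.corpus_head = nn.Linear(64, n_corpora)    # auxiliary task
        self.gender_head = nn.Linear(64, 2)            # auxiliary task

    def forward(self, x):
        h = self.encoder(x)
        return self.emotion_head(h), self.corpus_head(h), self.gender_head(h)

def multitask_loss(outputs, targets, aux_weight=0.2):
    """Weighted sum of the main emotion loss and the two auxiliary losses."""
    emo_out, corpus_out, gender_out = outputs
    emo_y, corpus_y, gender_y = targets
    ce = nn.CrossEntropyLoss()
    return (ce(emo_out, emo_y)
            + aux_weight * ce(corpus_out, corpus_y)
            + aux_weight * ce(gender_out, gender_y))

    A lightly weighted auxiliary loss is one common way to let the shared encoder absorb corpus- and gender-related variability without letting it dominate the emotion objective.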

    Automatic intrapersonal variability modeling for offline signature augmentation

    Advisor: Luiz Eduardo Soares de Oliveira. Co-advisors: Robert Sabourin and Alceu de Souza Britto Jr. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 19/07/2021. Includes references: p. 93-102. Area of concentration: Computer Science.
    Abstract: Normally, in a real-world scenario, few signatures are available to train an automatic signature verification system (ASVS). To address this issue, several offline signature duplication approaches have been proposed over the years. These approaches generate new synthetic signature samples by applying transformations to the original signature image. Some of them generate realistic samples, especially the duplicator. This method uses a set of parameters to model the writer's behavior (writer variability) during the signing act. However, these parameters are defined empirically. This kind of approach can be time consuming and can select parameters that do not describe the real writer variability. The main hypothesis of this work is that the writer variability observed in the image space can also be transferred to the feature space. Therefore, this work proposes a new method to automatically model the writer variability for subsequent signature duplication in the image space (duplicator) and the feature space (Gaussian filter and a variation of Knop's method). This work also proposes a new offline signature duplication method, which generates the synthetic samples directly in the feature space using a Gaussian filter. Furthermore, a new approach to assess the quality of the synthetic samples in the feature space is introduced. The limitations and advantages of both signature augmentation approaches are also explored. In addition to using the new approach to assess the quality of the samples, the performance of an ASVS was evaluated using them and three well-known offline signature datasets: GPDS-300, MCYT-75, and CEDAR. For the most widely used one, GPDS-300, when the SVM classifier was trained with only one genuine signature per writer, it achieved an Equal Error Rate (EER) of 5.71%. When the classifier was also trained with the synthetic samples generated in the image space, the EER dropped to 1.08%. When it was trained with the synthetic samples generated by the Gaussian filter, the EER dropped to 1.04%.
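
    One simple reading of feature-space duplication is to perturb a writer's descriptors with Gaussian noise scaled to that writer's own per-dimension spread. The snippet below is a hedged sketch of that idea with invented names, not the thesis's Gaussian-filter method; when only a single genuine signature is available, the spread would have to come from a population-level estimate instead.

# Illustrative sketch only: synthesize feature vectors around a writer's
# genuine samples using Gaussian perturbations matched to the observed
# per-dimension variability.
import numpy as np

def augment_in_feature_space(genuine_feats, n_synthetic=10, scale=1.0, rng=None):
    """genuine_feats: (n_genuine, n_dims) feature vectors from one writer."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(genuine_feats, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-8  # writer's per-dimension spread (avoid zeros)
    return rng.normal(loc=mu, scale=scale * sigma,
                      size=(n_synthetic, X.shape[1]))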

    Bioinformatics applied to human genomics and proteomics: development of algorithms and methods for the discovery of molecular signatures derived from omic data and for the construction of co-expression and interaction networks

    The present PhD dissertation develops and applies bioinformatic methods and tools to address key current problems in the analysis of human omic data. The PhD is organised by main objectives into four chapters focused on: (i) development of an algorithm for the analysis of changes and heterogeneity in large-scale omic data; (ii) development of a method for non-parametric feature selection; (iii) integration and analysis of human protein-protein interaction networks; and (iv) integration and analysis of human co-expression networks derived from tissue expression data and evolutionary profiles of proteins. In the first chapter, we developed and tested a new robust algorithm in R, called DECO, for the discovery of subgroups of features and samples within large-scale omic datasets, exploring all possible feature differences and heterogeneity by integrating both data dispersion and predictor-response information into a new statistical parameter called h (heterogeneity score). In the second chapter, we present a simple non-parametric statistic to measure the cohesiveness of categorical variables along any quantitative variable, applicable to feature selection in all types of big data sets. In the third chapter, we describe an analysis of the human interactome integrating two global datasets from high-quality proteomics technologies: HuRI (a human protein-protein interaction network generated by a systematic experimental screening based on Yeast-Two-Hybrid technology) and Cell-Atlas (a comprehensive map of the subcellular localization of human proteins generated by antibody imaging). This analysis aims to create a framework for subcellular localization characterization supported by the human protein-protein interactome. In the fourth chapter, we developed a full integration of three high-quality proteome-wide resources (Human Protein Atlas, OMA and TimeTree) to generate a robust human co-expression network across tissues, placing each human protein along the evolutionary timeline. In this way, we investigate how evolutionarily old the different human proteins are and how correlated they are, and we place them all in a common interaction network. Overall, the work presented in this PhD uses and develops a wide variety of bioinformatic and statistical tools for the analysis, integration and elucidation of molecular signatures and biological networks using human omic data. Most of these data correspond to sample cohorts generated in recent biomedical studies of specific human diseases.
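
    The abstract does not spell out the second chapter's statistic, so the sketch below is only an illustrative stand-in for the general idea of a non-parametric cohesiveness measure: within-category rank dispersion is compared against a permutation null. The function name and all details are invented for the example.

# Illustrative stand-in only: score how cohesive the categories of a label are
# along a quantitative variable by comparing within-category rank spread with
# the spread obtained under random relabelling.
import numpy as np
from scipy.stats import rankdata

def cohesiveness(values, labels, n_perm=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    ranks = rankdata(values)
    labels = np.asarray(labels)

    def within_spread(lab):
        return np.mean([ranks[lab == c].std() for c in np.unique(lab)])

    observed = within_spread(labels)
    null = np.array([within_spread(rng.permutation(labels)) for _ in range(n_perm)])
    # empirical p-value: how often random groupings are at least as cohesive
    return observed, float((null <= observed).mean())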

    Character Recognition

    Character recognition is one of the pattern recognition technologies that are most widely used in practical applications. This book presents recent advances that are relevant to character recognition, from technical topics such as image processing, feature extraction or classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field

    Workshop on Modelling of Objects, Components, and Agents, Aarhus, Denmark, August 27-28, 2001

    This booklet contains the proceedings of the workshop Modelling of Objects, Components, and Agents (MOCA'01), August 27-28, 2001. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark and the "Theoretical Foundations of Computer Science" Group at the University of Hamburg, Germany. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop01