42 research outputs found

    Temporal-spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution modelling. These applications involve overlapping data points, as found in multi-label datasets, so a recognition algorithm is needed that can separate the overlapping points in order to recognize the correct pattern. Existing recognition methods are sensitive to noise and overlapping points and cannot recognize a pattern when the positions of the data points shift. Furthermore, they do not incorporate temporal information into the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to resolve the overlap among data points in multi-label datasets. The imHTM (Improved HTM) method improves two of HTM's components: feature extraction and data clustering. The first improvement is the TS-Layer Neocognitron algorithm, which solves the shift-in-position problem in the feature extraction phase. The data clustering step has two improvements, TFCM and cFCM (TFCM with a limit-Chebyshev distance metric), which allow overlapping data points occurring across patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets compared the proposed method (imHTM) against statistical, template and structural pattern recognition methods. The results showed a recognition accuracy of 99%, compared with template matching methods (feature-based and area-based approaches), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines and Neural Networks) and a structural method (the original HTM). The findings indicate that the improved HTM can give optimal pattern recognition accuracy, especially on multi-label datasets.
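    The clustering improvement above hinges on running fuzzy clustering with a Chebyshev (L-infinity) distance. As a rough illustration only, the sketch below implements a plain fuzzy c-means update with a Chebyshev metric in Python/NumPy; the paper's actual TFCM/cFCM algorithms additionally use temporal information and other details not reproduced here, and the function names are mine.

```python
import numpy as np

def chebyshev(X, C):
    # Pairwise Chebyshev (L-infinity) distance between points X (n, d) and centers C (k, d)
    return np.abs(X[:, None, :] - C[None, :, :]).max(axis=2)

def fuzzy_c_means(X, k, m=2.0, iters=100, eps=1e-9, seed=0):
    """Plain fuzzy c-means with a Chebyshev metric (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, k))
    U /= U.sum(axis=1, keepdims=True)                     # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / (W.sum(axis=0)[:, None] + eps)    # fuzzily weighted cluster centers
        D = chebyshev(X, C) + eps
        # Standard FCM membership update: u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
        U = 1.0 / ((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return U, C

# Toy usage: two overlapping 2-D blobs
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 1.5])
U, C = fuzzy_c_means(X, k=2)
labels = U.argmax(axis=1)
```

    Because memberships are fuzzy, a point lying in the overlap between clusters keeps partial membership in both rather than being forced into one, which is the behaviour the abstract relies on for separating overlapping patterns.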

    Quantification of Uncertainty with Adversarial Models

    Quantifying uncertainty is important for actionable predictions in real-world applications. A crucial part of predictive uncertainty quantification is the estimation of epistemic uncertainty, which is defined as an integral of the product between a divergence function and the posterior. Current methods such as Deep Ensembles or MC dropout underperform at estimating the epistemic uncertainty, since they primarily consider the posterior when sampling models. We suggest Quantification of Uncertainty with Adversarial Models (QUAM) to better estimate the epistemic uncertainty. QUAM identifies regions where the whole product under the integral is large, not just the posterior. Consequently, QUAM has a lower approximation error of the epistemic uncertainty than previous methods. Models for which the product is large correspond to adversarial models (not adversarial examples!). Adversarial models have both a high posterior and a high divergence between their predictions and those of a reference model. Our experiments show that QUAM excels at capturing epistemic uncertainty for deep learning models and outperforms previous methods on challenging tasks in the vision domain.
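    As a hedged reading of the abstract's description (epistemic uncertainty as an integral of the product between a divergence function and the posterior), the quantity being estimated for an input x can be written roughly as

        v(x) = \int D\big( p(y \mid x, \theta) \,\|\, p(y \mid x, \theta_{\mathrm{ref}}) \big)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta

    where D is a divergence (for example KL), \theta_{\mathrm{ref}} is a reference model and p(\theta \mid \mathcal{D}) is the posterior over model parameters; the paper's exact choices may differ. Adversarial models are then parameters \theta for which both factors under the integral are large, which is what QUAM searches for instead of sampling models from the posterior alone.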

    Efficient Continual Learning: Approaches and Measures


    The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation

    Automatic processing of three-dimensional image data acquired with computed tomography or magnetic resonance imaging plays an increasingly important role in medicine. For example, the automatic segmentation of anatomical structures in tomographic images makes it possible to generate three-dimensional visualizations of a patient’s anatomy and thereby supports surgeons during the planning of various kinds of surgeries. Because organs in medical images often exhibit low contrast with adjacent structures, and because the image quality may be hampered by noise or other image acquisition artifacts, the development of segmentation algorithms that are both robust and accurate is very challenging. To increase robustness, the use of model-based algorithms is essential, for example algorithms that incorporate prior knowledge about an organ’s shape into the segmentation process. Recent research has shown that Statistical Shape Models are especially appropriate for robust medical image segmentation. In these models, the typical shape of an organ is learned from a set of training examples. However, Statistical Shape Models have two major disadvantages: the construction of the models is relatively difficult, and the models are often used too restrictively, so that the resulting segmentation does not delineate the organ exactly. This thesis addresses both problems. The first part of the thesis introduces new methods for establishing correspondence between training shapes, which is a necessary prerequisite for shape model learning. The developed methods include consistent parameterization algorithms for organs with spherical and genus-1 topology, as well as a nonrigid mesh registration algorithm for shapes with arbitrary topology. The second part of the thesis presents a new shape model-based segmentation algorithm that allows for an accurate delineation of organs. In contrast to existing approaches, it is possible to integrate not only linear shape models into the algorithm but also nonlinear shape models, which allow for a more specific description of an organ’s shape variation. The proposed segmentation algorithm is evaluated in three applications to medical image data: liver and vertebra segmentation in contrast-enhanced computed tomography scans, and prostate segmentation in magnetic resonance images.
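    For context, the linear Statistical Shape Models referred to above typically represent a shape vector \mathbf{x} (the concatenated coordinates of corresponding landmarks) as

        \mathbf{x} \approx \bar{\mathbf{x}} + \Phi \mathbf{b}

    where \bar{\mathbf{x}} is the mean shape learned from the training examples, the columns of \Phi are the principal modes of shape variation, and \mathbf{b} holds the shape parameters adjusted during segmentation. This is the standard textbook formulation rather than the thesis's own notation; the nonlinear shape models mentioned above replace this linear combination with a more flexible mapping.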

    Introduction to IND and recursive partitioning, version 1.0

    This manual describes the IND package for learning tree classifiers from data. The package is an integrated C and C shell re-implementation of tree learning routines such as CART, C4, and various MDL and Bayesian variations. The package includes routines for experiment control, interactive operation, and analysis of tree building. The manual introduces the system and its many options, gives a basic review of tree learning, contains a guide to the literature and a glossary, lists the manual pages for the routines, and instructions on installation
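    As a rough illustration of the recursive partitioning performed by tree learners such as CART, the sketch below picks a single split by minimizing weighted Gini impurity; it is a minimal, generic Python sketch, not IND's actual C implementation, and it omits IND's option handling and its MDL/Bayesian variants.

```python
import numpy as np

def gini(y):
    # Gini impurity of a label vector
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Find the (feature, threshold) pair that most reduces weighted Gini impurity."""
    n, d = X.shape
    best = (None, None, gini(y))        # (feature index, threshold, impurity)
    for j in range(d):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# A tree learner applies best_split recursively to the two resulting subsets
# until a stopping rule (depth, node size, or an MDL/Bayesian criterion) fires.
```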

    Machine Learning

    Machine Learning can be defined in various ways; broadly, it is a scientific domain concerned with the design and development of theoretical and implementation tools that allow systems to be built with some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.

    Improved kernel methods for classification

    Ph.D. (Doctor of Philosophy)

    Deep Embedding Kernel

    Kernel methods and deep learning are two major branches of machine learning that have achieved numerous successes in both analytics and artificial intelligence. While having their own unique characteristics, both branches work by mapping data to a feature space that is supposedly more favorable for the given task. This dissertation addresses the strengths and weaknesses of each mapping method by combining them into a family of novel deep architectures centered around the Deep Embedding Kernel (DEK). In short, DEK is a realization of a kernel function through a new deep architecture. The mapping in DEK is both implicit (like in kernel methods) and learnable (like in deep learning). Prior to DEK, we proposed a less advanced architecture called Deep Kernel for the tasks of classification and visualization. More recently, we integrated DEK with the novel Dual Deep Learning framework to model big unstructured data. Using DEK as a core component, we further propose two machine learning models: Deep Similarity-Enhanced K Nearest Neighbors (DSE-KNN) and Recurrent Embedding Kernel (REK). Both models have their mappings trained to optimize data instances' neighborhoods in the feature space. REK is specifically designed for time series data. Experimental studies throughout the dissertation show that the proposed models are competitive with other commonly used and state-of-the-art machine learning models on their given tasks.
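    As a schematic reading of the abstract (not the dissertation's exact DEK architecture), the sketch below shows the general idea of realizing a learnable kernel through a deep network: an embedding network maps inputs into a latent space and a second network scores symmetric pair features. In practice such a model would be trained end-to-end on pair (similarity) labels; the class and layer sizes here are my own choices.

```python
import torch
import torch.nn as nn

class DeepEmbeddingKernelSketch(nn.Module):
    """Illustrative sketch: a learnable kernel k(x, y) built from a deep embedding."""
    def __init__(self, in_dim, embed_dim=32):
        super().__init__()
        # Embedding network: maps raw inputs to a latent feature space
        self.embed = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                   nn.Linear(64, embed_dim))
        # Kernel network: maps a symmetric pair representation to a similarity in (0, 1)
        self.kernel = nn.Sequential(nn.Linear(2 * embed_dim, 32), nn.ReLU(),
                                    nn.Linear(32, 1))

    def forward(self, x, y):
        zx, zy = self.embed(x), self.embed(y)
        # Symmetric pair features so that k(x, y) == k(y, x)
        pair = torch.cat([torch.abs(zx - zy), zx * zy], dim=-1)
        return torch.sigmoid(self.kernel(pair)).squeeze(-1)

# Usage: similarity scores for a batch of input pairs
k = DeepEmbeddingKernelSketch(in_dim=10)
x, y = torch.randn(4, 10), torch.randn(4, 10)
print(k(x, y))  # four similarity scores in (0, 1)
```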

    Influence of incentive policy in the alignment of business and information technology

    Doctoral Thesis (Information Technologies and Systems). The chief executive officers of many of the world’s largest companies are aware that new technologies are redefining value chains and that their companies need to stay alert to remain relevant in the market. Modern businesses need to articulate business needs with innovative information technologies (IT). In fact, business and IT alignment (BIA) has continually been ranked among the most important concerns facing top IT executives. This concern is probably supported by the conviction, sustained by a significant number of studies, that achieving better alignment can positively influence business performance. Indeed, this alignment is considered one of the most important areas of IT governance, and its importance is recognized and addressed by some of the most important IT frameworks, such as COBIT, ITIL and TOGAF. Although alignment has been the focus of numerous studies in the past, the ongoing concern with it over the last decade suggests that there has not been sufficient progress on this issue. Moreover, alignment is made by people, and the more motivated people are in organizations, the more and better they work. The influence that incentives have on managers’ behaviour and, thus, on their professional activity and productivity has been widely addressed in the literature. Indeed, it is common practice for companies to give incentive packages to their executives, ideally designed to be aligned with organizational objectives. This work investigated the influence of incentive policies in promoting better alignment. Besides reviewing the most important literature in these two areas, the study proposes a new model that relates incentive to the alignment of business and IT. A new instrument to measure an organization’s incentive maturity was proposed and applied, and an existing instrument to measure alignment maturity was adapted and applied. After some preliminary phases, such as pretesting and pilot testing, the instruments were administered to the full sample through an online survey platform. The sample, provided by Informa Dun & Bradstreet, was expanded with the help of the social network LinkedIn, using the snowball method, which helps in studying hard-to-reach populations. Responses were collected from more than four hundred business and IT managers from more than two hundred medium-sized and large Portuguese companies, representing, as far as is known, the widest survey on business and IT alignment ever conducted in Portugal. The model, a hierarchical component model, was estimated using structural equation modelling (SEM) with the partial least squares (PLS) technique. The reliability and validity of the (reflective) measurement model were ensured after some indicators were discarded. The assessment of the model’s higher-order (formative) components was assured through robust content validity procedures for the incentive and alignment constructs.
    The major findings are presented, discussed and interpreted from different angles: by the respondents’ functional area, gender and generation, by the companies’ economic activity and size, and by each of the manifest variables of incentive and alignment. Finally, the results of the proposed model are discussed and interpreted. By proposing an explanation of alignment with just one latent variable, the incentive, this is probably one of the most parsimonious models of alignment presented to date. The study also supports what is perhaps its greatest contribution: the finding that most of the explanation of alignment is provided by incentive. Some recommendations for practice and future research are also proposed.
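    In PLS-SEM terms, the parsimonious model described above reduces at the structural level to something like

        Alignment = \beta \cdot Incentive + \zeta

    where \beta is the path coefficient and \zeta the structural error term. This is a generic rendering of a single-predictor structural model, not the thesis's exact specification of its hierarchical (higher-order) components.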