29,316 research outputs found
Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation
Unlike unsupervised approaches such as autoencoders, which learn to
reconstruct their inputs, this paper introduces an alternative approach to
unsupervised feature learning, called divergent discriminative feature
accumulation (DDFA), that instead continually accumulates features that make
novel discriminations among the training set. Thus DDFA features are inherently discriminative from
the start even though they are trained without knowledge of the ultimate
classification problem. Interestingly, DDFA also continues to add new features
indefinitely (so it does not depend on a hidden layer size), is not based on
minimizing error, and is inherently divergent instead of convergent, thereby
providing a unique direction of research for unsupervised feature learning. In
this paper the quality of its learned features is demonstrated on the MNIST
dataset, where its performance confirms that indeed DDFA is a viable technique
for learning useful features.
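The accumulation idea described above can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's exact method: each candidate feature is a random linear threshold unit, its "discrimination" is the binary partition it induces on the training set, and a candidate is kept only if its partition differs from every archived partition by more than a normalized Hamming distance threshold (the function name `ddfa_sketch` and the parameters are hypothetical).

```python
import numpy as np

def ddfa_sketch(X, n_candidates=200, min_dist=0.1, rng=None):
    """Accumulate features whose binary partitions of the training set
    differ from all previously kept partitions (a loose simplification
    of DDFA's novel-discrimination criterion)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    weights, partitions = [], []
    for _ in range(n_candidates):
        w = rng.standard_normal(d)
        b = rng.standard_normal()
        part = (X @ w + b > 0).astype(int)  # candidate discrimination
        # novelty = smallest normalized Hamming distance to the archive
        if not partitions or min(
                float(np.mean(part != p)) for p in partitions) > min_dist:
            weights.append((w, b))
            partitions.append(part)
    return weights

X = np.random.default_rng(1).standard_normal((50, 5))
feats = ddfa_sketch(X)
```

Because the criterion only rejects near-duplicates rather than minimizing an error, the archive keeps growing as long as sufficiently different partitions keep appearing, which mirrors the divergent, open-ended character claimed in the abstract.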
Machine learning for outlier detection in medical imaging
Outlier detection is an important problem with diverse practical applications. In medical imaging, there are many diagnostic tasks that can be framed as outlier detection. Since pathologies can manifest in so many different ways, the goal is typically to learn from normal, healthy data and identify any deviations. Unfortunately, many outliers in the medical domain can be subtle and specific, making them difficult to detect without labelled examples. This thesis analyzes some of the nuances of medical data and the value of labels in this context. It goes on to propose several strategies for unsupervised learning. More specifically, these methods are designed to learn discriminative features from data of a single class. One approach uses divergent search to continually find different ways to partition the data and thereby accumulates a repertoire of features. The other proposed methods are based on a self-supervised task that distorts normal data to form a contrasting class. A network can then be trained to localize the irregularities and estimate the degree of foreign interference. This basic technique is further enhanced using advanced image editing to create more natural irregularities. Lastly, the same self-supervised task is repurposed for few-shot learning to create a framework for adaptive outlier detection. These proposed methods are able to outperform conventional strategies across a range of datasets including brain MRI, abdominal CT, chest X-ray, and fetal ultrasound data. In particular, these methods excel at detecting more subtle irregularities. This complements existing methods and aims to maximize benefit to clinicians by detecting fine-grained anomalies that can otherwise require intense scrutiny. Note that all approaches to outlier detection must accept some assumptions; these will affect which types of outliers can be detected. As such, these methods aim for broad generalization within the most medically relevant categories. 
Ultimately, the hope is to support clinicians and to focus their attention and efforts on the data that warrants further analysis.
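The self-supervised task described above (distorting normal data to form a contrasting class, then regressing the degree of foreign interference) can be sketched as follows. This is an illustrative simplification under my own assumptions, not the thesis's exact formulation: images are 2D arrays, a patch from a "foreign" image is alpha-blended into a normal one, and the per-pixel blending factor serves as the localization/regression target.

```python
import numpy as np

def distort(normal, foreign, patch=8, rng=None):
    """Blend a patch from a foreign image into a normal one.
    The label map records the interpolation factor so a network
    could be trained to localize and grade the irregularity."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = normal.shape
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    alpha = rng.uniform(0.2, 0.8)  # degree of foreign interference
    out = normal.copy()
    out[y:y + patch, x:x + patch] = (
        (1 - alpha) * normal[y:y + patch, x:x + patch]
        + alpha * foreign[y:y + patch, x:x + patch])
    label = np.zeros_like(normal)
    label[y:y + patch, x:x + patch] = alpha  # regression target
    return out, label

rng = np.random.default_rng(2)
a, b = rng.random((32, 32)), rng.random((32, 32))
img, lab = distort(a, b, rng=rng)
```

A network trained on `(img, lab)` pairs never needs labelled outliers: at test time, high predicted interference flags regions that deviate from the normal class.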
Imagination Model: Image Generation through Divergent Search of Compositional Pattern Producing Networks
Thesis (M.S.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2019. Advisor: Byung-Ro Moon.
Divergent search methods were devised to resolve the problem of falling into local optima, an arch-enemy of stochastic optimization algorithms. Novelty Search and Surprise Search, inter alia, use the concept of behavior and explore the behavior space it defines while maintaining evolutionary divergence, and they have shown strong performance in this respect. Moreover, a coupling of the novelty and surprise concepts was designed, based on the idea that the two algorithms search behavior space in different ways. The combination of the two algorithms can be viewed as a multiobjective optimization algorithm, and this approach improves performance over using a single divergent search method. Since several divergent search methods have outperformed existing stochastic optimization algorithms in recent robotics studies, they have been applied to many other domains, such as robot morphology, artificial life, and image generation. In particular, the Innovation Engine applied Novelty Search to an image generation method so as to create novel and interesting images. In this paper, we propose the Imagination Model, an extension of the Innovation Engine that adopts Novelty-Surprise Search, the combination of Novelty Search and Surprise Search, instead of pure Novelty Search. Evolutionary algorithms using Novelty Search, Surprise Search, and Novelty-Surprise Search are compared, with well-trained deep neural networks defining the behaviors of individuals in terms of creating interesting images. Experimental results indicate that Novelty-Surprise Search outperforms Novelty Search and Surprise Search even in the image domain: it searches and explores the vast behavior space more extensively than either search algorithm on its own.
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Background
2.1 CPPN-NEAT
2.2 Novelty Search
2.3 Surprise Search
2.4 Combining Novelty and Surprise Score
2.5 Innovation Engines
Chapter 3 Methods
3.1 Image Generator
3.2 Behavioral Space
3.3 Imagination Model
Chapter 4 Experiments
4.1 Fitness Measure
4.2 Deep Neural Networks and Dataset
Chapter 5 Results
Chapter 6 Discussion
Chapter 7 Conclusion
Bibliography
Summary (in Korean)
Searching for surprise
Inspired by the notion of surprise for unconventional discovery
in computational creativity, we introduce a general
search algorithm we name surprise search. Surprise search is
grounded in the divergent search paradigm and formulated
within the principles of metaheuristic (evolutionary) search.
The algorithm mimics the self-surprise cognitive process of
creativity and equips computational creators with the ability
to search for outcomes that deviate from the algorithm's expected
behavior. The predictive model of expected outcomes
is based on historical trails of where the search has been and
some local information about the search space. We showcase
the basic steps of the algorithm via a problem-solving task (maze
navigation) and a generative art task. What distinguishes surprise
search from other forms of divergent search, such as the
search for novelty, is its ability to diverge not from earlier and
seen outcomes but rather from predicted and unseen points in
the creative domain considered.
This work has been supported in part by the FP7 Marie Curie CIG project AutoGameDesign (project no: 630665).
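The abstract's predictive model ("historical trails of where the search has been") can be illustrated with a minimal sketch under my own assumptions: behaviors are points in Euclidean space, the expected behavior is a simple linear extrapolation from the two most recent generations, and surprise rewards deviation from that expectation (the actual algorithm's model is richer, e.g. it can use local information about the search space; function names here are hypothetical).

```python
import numpy as np

def predict_next(prev2, prev1):
    """Linearly extrapolate behavior from the two most recent
    generations -- the 'historical trail' of the search."""
    return prev1 + (prev1 - prev2)

def surprise_scores(behaviors, prediction):
    """Reward deviation from where the search was expected to go."""
    return np.linalg.norm(behaviors - prediction, axis=1)

gen0 = np.array([[0., 0.]])
gen1 = np.array([[1., 1.]])
pred = predict_next(gen0, gen1)      # expected next behavior: [[2., 2.]]
pop = np.array([[2., 2.], [0., 3.]])
s = surprise_scores(pop, pred)       # deviations: [0.0, sqrt(5)]
```

Note the contrast with novelty search: the individual at `[2., 2.]` may be far from everything seen so far, yet it scores zero surprise because it lands exactly where the trend predicted.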
Identifying divergent design thinking through the observable behavior of service design novices
© 2018, Springer Nature B.V. Design thinking holds the key to innovation processes, but is often difficult to detect because of its implicit nature. We undertook a study of novice designers engaged in team-based design exercises in order to explore the correlation between design thinking and designers' physical (observable) behavior, and to identify new, objective methods for identifying design thinking. Our study addresses the topic using the "think aloud" data collection method and the "protocol analysis" data analysis method, along with an unconstrained concept-generation environment. Data collected from participants without service design experience were analyzed by open and selective coding. Through this research, we found correlations between physical activity and divergent thinking, and also identified physical behaviors that predict a designer's transition to divergent thinking. We conclude that there are significant relations between designers' design thinking and the behavioral features of their body and face. This approach opens possible new ways to undertake design process research and design capability evaluation.