
    History of art paintings through the lens of entropy and complexity

    Art is the ultimate expression of human creativity and is deeply influenced by the philosophy and culture of the corresponding historical epoch. The quantitative analysis of art is therefore essential for better understanding human cultural evolution. Here we present a large-scale quantitative analysis of almost 140 thousand paintings, spanning nearly a millennium of art history. Based on the local spatial patterns in the images of these paintings, we estimate the permutation entropy and the statistical complexity of each painting. These measures map the degree of visual order of artworks onto scales of order-disorder and simplicity-complexity that locally reflect qualitative categories proposed by art historians. The dynamical behavior of these measures reveals a clear temporal evolution of art, marked by transitions that agree with the main historical periods of art. Our research shows that different artistic styles have a distinct average degree of entropy and complexity, thus allowing a hierarchical organization and clustering of styles according to these metrics. We have further verified that the identified groups correspond well with the textual content used to qualitatively describe the styles, and that the employed complexity-entropy measures can be used for an effective classification of artworks. (Comment: 10 two-column pages, 5 figures; accepted for publication in PNAS; supplementary information available at http://www.pnas.org/highwire/filestream/824089/field_highwire_adjunct_files/0/pnas.1800083115.sapp.pdf)
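
    The ordinal-pattern machinery behind these two measures is compact enough to sketch. The Python snippet below maps a grayscale image to a point (H, C) in the complexity-entropy plane: H is the permutation entropy of the distribution of 2x2 ordinal patterns, and C combines H with the Jensen-Shannon divergence from the uniform pattern distribution. This is the standard 2D complexity-entropy recipe with the common 2x2 window choice; the paper's exact preprocessing (color handling, window geometry) may differ.

        import numpy as np
        from itertools import permutations

        def complexity_entropy(img):
            """Normalized permutation entropy H and statistical complexity C
            of a 2D grayscale array, from 2x2 ordinal patterns."""
            counts = {perm: 0 for perm in permutations(range(4))}
            rows, cols = img.shape
            for i in range(rows - 1):
                for j in range(cols - 1):
                    patch = img[i:i + 2, j:j + 2].ravel()
                    # the ordinal pattern is the permutation sorting the patch
                    counts[tuple(np.argsort(patch, kind="stable"))] += 1
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            n = p.size  # 4! = 24 possible ordinal patterns

            def shannon(q):
                q = q[q > 0]
                return -np.sum(q * np.log(q))

            H = shannon(p) / np.log(n)                 # normalized entropy
            u = np.full(n, 1.0 / n)                    # uniform reference
            js = shannon(0.5 * (p + u)) - 0.5 * shannon(p) - 0.5 * shannon(u)
            # maximal JS divergence (delta vs. uniform), for normalization
            js_max = -0.5 * ((n + 1) / n * np.log(n + 1)
                             + np.log(n) - 2 * np.log(2 * n))
            return H, H * js / js_max

    Each painting, converted to a grayscale array, then becomes a point (H, C), and styles can be compared and clustered by their average positions in this plane.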

    A precise bare simulation approach to the minimization of some distances. Foundations

    In information theory -- as well as in the adjacent fields of statistics, machine learning, artificial intelligence, signal processing and pattern recognition -- many flexibilizations of the omnipresent Kullback-Leibler information distance (relative entropy) and of the closely related Shannon entropy have become frequently used tools. The main goal of this paper is to tackle the corresponding constrained minimization (respectively maximization) problems with a newly developed, dimension-free bare (pure) simulation method. Almost no assumptions (such as convexity) on the set of constraints are needed within our discrete setup of arbitrary dimension, and our method is precise (i.e., it converges in the limit). As a side effect, we also derive an innovative way of constructing new useful distances/divergences. To illustrate the core of our approach, we present numerous examples. The potential for widespread applicability is indicated as well; in particular, we deliver many recent references for uses of the involved distances/divergences and entropies in various research fields (which may also serve as an interdisciplinary interface).
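
    To make the object of study concrete, the sketch below computes the Kullback-Leibler distance between discrete distributions and then attacks a constrained problem min over p in Omega of D(p||q) by naive random search over the probability simplex. The random search is only a baseline illustrating the problem, not the paper's bare-simulation method, and the constraint set Omega (distributions on hypothetical support values 1..4 with mean at least 2.5) is an arbitrary example.

        import numpy as np

        def kl_divergence(p, q):
            """D(p || q) for discrete distributions, q > 0 wherever p > 0."""
            m = p > 0
            return float(np.sum(p[m] * np.log(p[m] / q[m])))

        def min_kl_random_search(q, in_omega, n_draws=200_000, seed=0):
            """Naive baseline for min_{p in Omega} D(p || q): sample the
            simplex uniformly, keep the best feasible draw. Note that no
            convexity of Omega is needed here either."""
            rng = np.random.default_rng(seed)
            best_p, best_val = None, np.inf
            for _ in range(n_draws):
                p = rng.dirichlet(np.ones_like(q))  # uniform on the simplex
                if in_omega(p):
                    v = kl_divergence(p, q)
                    if v < best_val:
                        best_p, best_val = p, v
            return best_p, best_val

        # hypothetical example: distributions on {1,2,3,4} with mean >= 2.5
        q = np.array([0.4, 0.3, 0.2, 0.1])
        support = np.arange(1, 5)
        p_star, d_star = min_kl_random_search(q, lambda p: p @ support >= 2.5)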

    Fuzzy Sets, Fuzzy Logic and Their Applications

    The present book contains 20 articles selected from the 53 manuscripts submitted for the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications” of the MDPI journal Mathematics. The articles, which appear in the book in the order in which they were accepted and were published in Volumes 7 (2019) and 8 (2020) of the journal, cover a wide range of topics connected to the theory and applications of fuzzy systems and their extensions and generalizations. This range includes, among others, management of uncertainty in a fuzzy environment; fuzzy assessment methods of human-machine performance; fuzzy graphs; fuzzy topological and convergence spaces; bipolar fuzzy relations; type-2 fuzzy sets; and intuitionistic, interval-valued, complex, picture, and Pythagorean fuzzy sets, soft sets and algebras. The applications presented are oriented to finance, fuzzy analytic hierarchy, green supply chain industries, smart health practice, and hotel selection. This wide range of topics makes the book interesting for all those working in the wider area of fuzzy sets and systems and of fuzzy logic, and for those with the proper mathematical background who wish to become familiar with recent advances in fuzzy mathematics, which has entered almost all sectors of human life and activity.
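
    For readers new to the area: a fuzzy set over a universe U is simply a membership function U -> [0, 1], with Zadeh's pointwise operations for complement, union, and intersection. A minimal Python illustration, using a hypothetical "tall" membership function over heights:

        import numpy as np

        # universe: heights in cm; a fuzzy set assigns each element a
        # membership degree in [0, 1]
        U = np.array([150.0, 160.0, 170.0, 180.0, 190.0, 200.0])
        tall = np.clip((U - 160.0) / 30.0, 0.0, 1.0)  # hypothetical "tall"
        short = 1.0 - tall                   # Zadeh complement
        both = np.minimum(tall, short)       # Zadeh intersection (AND)
        either = np.maximum(tall, short)     # Zadeh union (OR)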

    New Directions for Contact Integrators

    Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology. (Comment: to appear as Chapter 24 in GSI 2021, Springer LNCS 1282)
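
    To give a flavor of such schemes, the sketch below implements a contact splitting integrator for the contact Hamiltonian H(q, p, s) = p^2/2 + V(q) + alpha*s, whose contact flow reproduces the damped system q'' + alpha*q' + V'(q) = 0. Each summand of H generates an exactly solvable contact flow, and a symmetric (Strang) composition of these flows is a second-order contactomorphism, so the contact structure is preserved. This illustrates the splitting idea only; it is not claimed to be one of the specific integrators reviewed in the chapter.

        import numpy as np

        def contact_strang_step(q, p, s, h, V, dV, alpha):
            """One Strang step composed of the exact contact flows of the
            three summands of H = p**2/2 + V(q) + alpha*s."""
            c = np.exp(-alpha * h / 2)
            p, s = c * p, c * s                              # flow of alpha*s
            p, s = p - (h / 2) * dV(q), s - (h / 2) * V(q)   # flow of V(q)
            q, s = q + h * p, s + (h / 2) * p ** 2           # flow of p**2/2
            p, s = p - (h / 2) * dV(q), s - (h / 2) * V(q)   # flow of V(q)
            p, s = c * p, c * s                              # flow of alpha*s
            return q, p, s

        # damped harmonic oscillator: V(q) = q**2/2, so q'' = -q - alpha*q'
        q, p, s = 1.0, 0.0, 0.0
        for _ in range(10_000):
            q, p, s = contact_strang_step(q, p, s, 0.01,
                                          V=lambda x: x ** 2 / 2,
                                          dV=lambda x: x, alpha=0.1)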

    Sticky Flavors

    The Fréchet mean, a generalization to a metric space of the expectation of a random variable in a vector space, can exhibit unexpected behavior for a wide class of random variables. For instance, it can stick to a point (more generally, to a closed set) under resampling: sample stickiness. It can stick to a point for topologically nearby distributions: topological stickiness, such as total variation or Wasserstein stickiness. It can stick to a point under slight but arbitrary perturbations: perturbation stickiness. Here, we explore these and various other flavors of stickiness and their relationships in varying scenarios, for instance on CAT($\kappa$) spaces, $\kappa \in \mathbb{R}$. Interestingly, modulation stickiness (a faster asymptotic rate than $\sqrt{n}$) and directional stickiness (a generalization of moment stickiness from the literature) allow for the development of new statistical methods building on an asymptotic fluctuation where, due to stickiness, the mean itself features no asymptotic fluctuation. Also, we rule out sticky flavors on manifolds in scenarios with curvature bounds.
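
    The simplest space exhibiting this phenomenon is the 3-spider: three half-lines glued at the origin, a CAT(0) space. There the Fréchet mean has a closed form, and when no single leg carries more than half of the total pull, the mean sticks to the origin and stays there under bootstrap resampling. A minimal Python illustration of sample stickiness follows; the paper's setting, general CAT($\kappa$) spaces, is far broader.

        import numpy as np

        def spider_frechet_mean(legs, dists):
            """Closed-form Fréchet mean on the 3-spider (three rays glued
            at the origin). Point i sits on leg legs[i] in {0, 1, 2} at
            distance dists[i] >= 0 from the origin."""
            n = len(legs)
            for k in range(3):
                on_k = legs == k
                # optimal position on leg k: own-leg points pull outward,
                # all other points pull through the origin
                t = (dists[on_k].sum() - dists[~on_k].sum()) / n
                if t > 0:               # at most one leg can satisfy this
                    return k, t         # mean lies strictly inside leg k
            return None, 0.0            # mean sticks to the origin

        rng = np.random.default_rng(1)
        legs = rng.integers(0, 3, size=300)   # balanced mass on three legs
        dists = rng.exponential(1.0, size=300)

        stuck = 0
        for _ in range(200):
            idx = rng.integers(0, 300, size=300)    # bootstrap resample
            leg, _ = spider_frechet_mean(legs[idx], dists[idx])
            stuck += leg is None
        print(f"{stuck}/200 bootstrap Fréchet means stuck at the origin")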

    Leaf Venation Networks


    An exploratory analysis of large health cohort study using Bayesian networks

    Thesis (Ph.D.), Harvard-MIT Division of Health Sciences and Technology, 2006; by Delin Shen. Includes bibliographical references (p. 91-98). Large health cohort studies are among the most effective ways of studying the causes, treatments and outcomes of diseases, systematically collecting a wide range of data over long periods. The wealth of data in such studies may yield important results in addition to the already numerous findings, especially when subjected to newer analytical methods. Bayesian networks (BN) provide a relatively new method of representing uncertain relationships among variables, using the tools of probability and graph theory, and have been widely used in analyzing dependencies and the interplay between variables. We used BN to perform an exploratory analysis of a rich collection of data from one large health cohort study, the Nurses' Health Study (NHS), with a focus on breast cancer. We explored the NHS data using BN to look for breast cancer risk factors, including a group of single nucleotide polymorphisms (SNPs). We found no association between the SNPs and breast cancer, but found a dependency between Clomid and breast cancer. We evaluated Clomid as a potential risk factor after matching on age and number of children. Our results showed for Clomid an increased risk of estrogen receptor positive breast cancer (odds ratio 1.52, 95% CI 1.11-2.09) and a decreased risk of estrogen receptor negative breast cancer (odds ratio 0.46, 95% CI 0.22-0.97). We also developed breast cancer risk models using BN, training them on 75% of the data and evaluating them on the remaining 25%. Because of the clinical importance of predicting risks for estrogen receptor positive and progesterone receptor positive breast cancer, we focused on this specific type of breast cancer, predicting two-year, four-year, and six-year risks. The concordance statistics of the predictions on the test sets are 0.70 (95% CI: 0.67-0.74), 0.68 (95% CI: 0.64-0.72), and 0.66 (95% CI: 0.62-0.69) for the two-, four-, and six-year models, respectively. We also evaluated the calibration of the models and applied a filter to the output, using Agglomerative Information Bottleneck clustering, to improve the linear relationship between predicted and observed risks without sacrificing much discrimination performance.
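
    For reference, the reported effect sizes are standard odds ratios with Woolf (log-scale) confidence intervals, computable from a 2x2 exposure-by-outcome table as below. The counts in the example are hypothetical; the thesis's actual tables are not given in the abstract.

        import numpy as np

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """Odds ratio with Woolf 95% CI for the 2x2 table
                            cases   controls
                exposed       a        b
                unexposed     c        d
            """
            or_ = (a * d) / (b * c)
            se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
            lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
            return or_, lo, hi

        # hypothetical counts for illustration only (not the thesis's data)
        print(odds_ratio_ci(30, 300, 20, 400))   # -> OR = 2.0 with its CI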
