
    History of art paintings through the lens of entropy and complexity

    Art is the ultimate expression of human creativity that is deeply influenced by the philosophy and culture of the corresponding historical epoch. The quantitative analysis of art is therefore essential for better understanding human cultural evolution. Here we present a large-scale quantitative analysis of almost 140 thousand paintings, spanning nearly a millennium of art history. Based on the local spatial patterns in the images of these paintings, we estimate the permutation entropy and the statistical complexity of each painting. These measures map the degree of visual order of artworks into a scale of order-disorder and simplicity-complexity that locally reflects qualitative categories proposed by art historians. The dynamical behavior of these measures reveals a clear temporal evolution of art, marked by transitions that agree with the main historical periods of art. Our research shows that different artistic styles have a distinct average degree of entropy and complexity, thus allowing a hierarchical organization and clustering of styles according to these metrics. We have further verified that the identified groups correspond well with the textual content used to qualitatively describe the styles, and that the employed complexity-entropy measures can be used for an effective classification of artworks.
    Comment: 10 two-column pages, 5 figures; accepted for publication in PNAS [supplementary information available at http://www.pnas.org/highwire/filestream/824089/field_highwire_adjunct_files/0/pnas.1800083115.sapp.pdf]
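
    The complexity-entropy measures described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming the standard Bandt-Pompe ordinal scheme over sliding 2x2 pixel patches and the Jensen-Shannon statistical complexity; the published pipeline (preprocessing, patch size, normalization details) may differ.

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(img, dx=2, dy=2):
    """Probability of each ordinal (rank) pattern over sliding dx-by-dy patches."""
    patterns = {p: 0 for p in permutations(range(dx * dy))}
    rows, cols = img.shape
    for i in range(rows - dy + 1):
        for j in range(cols - dx + 1):
            patch = img[i:i + dy, j:j + dx].ravel()
            patterns[tuple(np.argsort(patch, kind="stable"))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def complexity_entropy(p):
    """Normalized permutation entropy H and statistical complexity C = H * D_JS / D_max."""
    n = len(p)
    nz = p[p > 0]
    H = -(nz * np.log(nz)).sum() / np.log(n)   # normalized Shannon entropy of ordinal patterns
    u = np.full(n, 1.0 / n)                    # uniform reference distribution
    m = 0.5 * (p + u)
    def S(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()
    js = S(m) - 0.5 * (S(p) + S(u))            # Jensen-Shannon divergence to the uniform case
    js_max = -0.5 * ((n + 1) / n * np.log(n + 1) + np.log(n) - 2 * np.log(2 * n))
    return H, H * js / js_max

# Example: a grayscale painting loaded as a 2D array (random data used as a stand-in here)
img = np.random.rand(128, 128)
H, C = complexity_entropy(ordinal_distribution(img))
print(f"H = {H:.3f}, C = {C:.3f}")
```

    Ordered images concentrate probability on a few ordinal patterns (low H), while noisy images spread it out (high H, low C), which is what places artworks on the order-disorder and simplicity-complexity scales.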

    Kolmogorov compression complexity may differentiate different schools of Orthodox iconography

    The complexity of the styles of 1200 Byzantine icons painted between the 13th and 16th centuries in Greece, Russia and Romania was investigated through Kolmogorov algorithmic information theory. The aim was to identify specific quantitative patterns that define the key characteristics of the three painting schools. Our novel approach, which uses artificial surface images generated with the Inverse FFT and Midpoint Displacement (MD) algorithms, was validated by comparing the results with eight fractal and non-fractal indices. From the analyses performed, the normalized Kolmogorov compression complexity (KC) proved to be the best solution because it differentiated the complexity patterns most clearly, is insensitive to image size, and is the least affected by noise. We conclude that the normalized KC methodology can differentiate icons within a school and among the three schools.
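
    Kolmogorov complexity is uncomputable, so compression-based proxies are used in practice. The sketch below is a generic illustration of that idea using zlib: the compressed size of an image stands in for its algorithmic complexity and is normalized against a pixel-shuffled copy of the same image. The paper's exact estimator and normalization may differ, and the test images below are arbitrary.

```python
import zlib
import numpy as np

def normalized_kc(img: np.ndarray, level: int = 9) -> float:
    """Approximate normalized Kolmogorov complexity of a grayscale image.

    The zlib-compressed size is the proxy for Kolmogorov complexity, normalized by the
    compressed size of a randomly shuffled copy of the same pixels, which is close to
    incompressible and serves as the 'maximally complex' reference.
    """
    data = img.astype(np.uint8).tobytes()
    pixels = np.frombuffer(data, dtype=np.uint8)
    shuffled = np.random.default_rng(0).permutation(pixels).tobytes()
    return len(zlib.compress(data, level)) / len(zlib.compress(shuffled, level))

# A smooth gradient compresses far better than noise
gradient = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
noise = np.random.default_rng(1).integers(0, 256, (256, 256), dtype=np.uint8)
print(normalized_kc(gradient))   # close to 0: highly ordered image
print(normalized_kc(noise))      # close to 1: nearly incompressible image
```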

    Hierarchical Image Descriptions for Classification and Painting

    The overall argument this thesis makes is that topological object structures captured within hierarchical image descriptions are invariant to depictive styles and offer a level of abstraction found in many modern abstract artworks. To show how object structures can be extracted from images, two hierarchical image descriptions are proposed. The first of these is inspired by perceptual organisation, whereas the second is based on agglomerative clustering of image primitives. This thesis discusses the benefits and drawbacks of each image description and empirically shows why the second is more suitable for capturing object structures. The value of graph theory is demonstrated in extracting object structures, especially from the second type of image description. User interaction during the structure extraction process is also made possible via an image hierarchy editor. Two applications of object structures are studied in depth. On the computer vision side, the problem of object classification is investigated. In particular, this thesis shows that it is possible to classify objects regardless of their depictive styles. This classification problem is approached using a graph theoretic paradigm: by encoding object structures as feature vectors of fixed length, object classification can be treated as a clustering problem in a structural feature space, and the clustering itself can be performed using conventional machine learning techniques. The benefits of object structures in computer graphics are demonstrated from a Non-Photorealistic Rendering (NPR) point of view. In particular, it is shown that topological object structures deliver an appropriate degree of abstraction that often appears in well-known abstract artworks. Moreover, the value of shape simplification is demonstrated in the process of making abstract art. By integrating object structures and simple geometric shapes, it is shown that artworks in the style of child-like paintings and of artists such as Wassily Kandinsky, Joan Miro and Henri Matisse can be synthesised, and by doing so the current gamut of NPR styles is extended. The whole process of making abstract art is built into a single piece of software with an intuitive GUI.
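
    As an illustration of the classification idea, the sketch below encodes a hierarchical object structure as a fixed-length feature vector and clusters the vectors with a conventional algorithm. The particular features (node count, depth, branching factor, leaf count) and the use of networkx and scikit-learn are assumptions made for this example; the thesis's actual descriptors and pipeline differ in detail.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

def structure_vector(tree: nx.DiGraph, root) -> np.ndarray:
    """Fixed-length descriptor of a hierarchical object structure (illustrative features only)."""
    depths = nx.shortest_path_length(tree, root)        # depth of every node below the root
    out_degrees = [d for _, d in tree.out_degree()]
    return np.array([
        tree.number_of_nodes(),
        tree.number_of_edges(),
        max(depths.values()),                            # depth of the hierarchy
        np.mean(out_degrees),                            # average branching factor
        sum(1 for d in out_degrees if d == 0),           # number of leaf primitives
    ], dtype=float)

# Two toy hierarchies standing in for object structures extracted from images
t1 = nx.DiGraph([(0, 1), (0, 2), (2, 3), (2, 4)])        # shallow, branching structure
t2 = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 4)])        # deep, chain-like structure
X = np.vstack([structure_vector(t1, 0), structure_vector(t2, 0)])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # conventional clustering in feature space
print(labels)
```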

    Reflections on the four facets of symmetry: how physics exemplifies rational thinking

    In contemporary theoretical physics, the powerful notion of symmetry stands for a web of intricate meanings, among which I identify four clusters associated with the notions of transformation, comprehension, invariance and projection. These four facets of symmetry are scrutinised one after the other in great detail, and their interrelations are examined closely. This decomposition allows us to examine the many different roles symmetry plays in physics. Furthermore, some connections with other disciplines such as neurobiology, epistemology, cognitive science and, not least, philosophy are proposed in an attempt to show that symmetry can serve as an organising principle in these fields as well.

    Wavelet and Multiscale Methods

    Various scientific models demand finer and finer resolutions of relevant features. Paradoxically, increasing computational power serves to even heighten this demand. Namely, the wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information leads to tasks that are not tractable by standard numerical techniques. The last decade has seen the emergence of several new computational methodologies to address this situation. Their common features are the nonlinearity of the solution methods as well as the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples lie in multigrid methods and adaptive grid solvers for partial differential equations. These have substantially advanced the frontiers of computability for certain problem classes in numerical analysis. Other highly visible examples are: regression techniques in nonparametric statistical estimation; the design of universal estimators in the context of mathematical learning theory and machine learning; the investigation of greedy algorithms in complexity theory; compression techniques and encoding in signal and image processing; the solution of global operator equations through the compression of fully populated matrices arising from boundary integral equations with the aid of multipole expansions and hierarchical matrices; and attacking problems in high spatial dimensions by sparse grid or hyperbolic wavelet concepts. This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computation and to promote the exchange of ideas emerging in various disciplines.
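
    As a concrete illustration of the multiscale separation and compression theme surveyed above, the sketch below performs a hand-rolled orthonormal Haar wavelet decomposition of a piecewise-smooth signal and discards small detail coefficients. The signal, the number of levels and the threshold are arbitrary choices for the example, not tied to any method discussed at the workshop.

```python
import numpy as np

def haar_decompose(signal, levels):
    """Orthonormal Haar decomposition: repeatedly split into coarse averages and details."""
    coarse, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        pairs = coarse.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        coarse = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return coarse, details

def haar_reconstruct(coarse, details):
    """Invert the decomposition level by level."""
    for d in reversed(details):
        out = np.empty(2 * len(coarse))
        out[0::2] = (coarse + d) / np.sqrt(2)
        out[1::2] = (coarse - d) / np.sqrt(2)
        coarse = out
    return coarse

# Compress a piecewise-smooth signal by discarding small detail coefficients
x = np.concatenate([np.linspace(0, 1, 512), np.linspace(2, 1, 512)])
coarse, details = haar_decompose(x, levels=5)
kept = [np.where(np.abs(d) > 1e-2, d, 0.0) for d in details]   # hard thresholding of details
x_hat = haar_reconstruct(coarse, kept)
print("kept coefficients:", sum(int(np.count_nonzero(d)) for d in kept) + len(coarse))
print("max reconstruction error:", np.abs(x - x_hat).max())
```

    Most fine-scale detail coefficients of a smooth signal are negligible, so only a small fraction of coefficients is needed for an accurate reconstruction; this length-scale separation is the common thread behind the methods listed above.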

    The emergence of language as a function of brain-hemispheric feedback

    This text posits the emergence of language as a function of brain-hemispheric feedback, where “emergence” refers to the generation of complex patterns from relatively simple interactions, “language” refers to an abstraction-based and representational-recombinatorial-recursive mapping-signaling system, “function” refers to an input-output relationship described by fractal algorithms, “brain-hemispheric” refers to complementary (approach-abstraction / avoidance-gestalt) cognitive modules, and “feedback” refers to self-regulation driven by neural inhibition and recruitment. The origin of language marks the dawn of human self-awareness and culture, and is thus a matter of fundamental and cross-disciplinary interest. This text is a synthesized research essay that constructs its argument by drawing diverse scholarly voices into a critical, cross-disciplinary intertextual narrative. While it does not report any original empirical findings, it harnesses those made by others to offer a tentative, partial solution (one that can later be altered and expanded) to a problem that has occupied thinkers for centuries. The research contained within this text is preceded by an introductory Section 1 that contextualizes the problem of the origin of language. Section 2 details the potential of evolutionary theory for addressing the problem, and the reasons for the century-long failure of linguistics to take advantage of that potential. Section 3 reviews the history of the discovery of brain lateralization, as well as its behavioral and structural characteristics. Section 4 discusses evolutionary evidence and mechanisms in terms of increasing adaptive complexity and intelligence, in general, and tool use, in particular. Section 5 combines chaos theory, brain science, and semiotics to propose that, after the neotenic acquisition of contingency-based abstraction, language emerged as a feedback interaction between the left-hemisphere abstract word and the right-hemisphere gestalt image. I conclude that the model proposed here might be a valuable tool for understanding, organizing, and relating data and ideas concerning human evolution, language, culture, and psychology. The next step, of course, is to present this text to the scholarly community for criticism, and to continue gathering and collating relevant data and ideas in order to prepare its next iteration.

    Grounding for a computational model of place

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (leaves 66-70).
    Places are spatial locations that have been given meaning by human experience. The sense of a place is its support for experiences and the emotional responses associated with them. This sense provides direction and focus for our daily lives. Physical maps and their electronic descendants deconstruct places into discrete data and require user interpretation to reconstruct the original sense of place. Is it possible to create maps that preserve this sense of place and successfully communicate it to the user? This thesis presents a model, and an application built upon that model, that captures the sense of place for translation, rather than requiring the user to recreate it from disparate data. By grounding a human place-sense for machine interpretation, new presentations of space can be produced that more accurately mirror human cognitive conceptions. Using measures of semantic distance, a user can observe the proximity of places not only by distance but also by context or association. Applications built upon this model can then construct representations that show places that are similar in feeling, or reasonable destinations given the user's current location. To accomplish this, the model attempts to understand place in the way a human might, by using commonsense reasoning to analyze textual descriptions of places and implicit statements of support for the role of these places in natural activity. It produces a semantic description of a place in terms of human action and emotion. Representations built upon these descriptions can offer powerful changes in the cognitive processing of space.
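
    The semantic-distance idea above can be illustrated with a toy example. The snippet below ranks hypothetical place descriptions against a query using bag-of-words cosine similarity; this is only a simplified stand-in for the commonsense-reasoning machinery the thesis actually uses, and all place names and descriptions are invented.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity as a crude proxy for semantic closeness."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Hypothetical textual descriptions of places in terms of human action and emotion
places = {
    "cafe":    "quiet place to read drink coffee and meet a friend",
    "library": "quiet place to read study and borrow books",
    "stadium": "loud crowded place to watch sports and cheer",
}
query = "somewhere quiet to read"
ranked = sorted(places, key=lambda p: cosine_similarity(query, places[p]), reverse=True)
print(ranked)   # places ordered by contextual proximity to the query, not by physical distance
```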

    AutoGraff: towards a computational understanding of graffiti writing and related art forms.

    The aim of this thesis is to develop a system that generates letters and pictures with a style that is immediately recognizable as graffiti art or calligraphy. The proposed system can be used similarly to, and in tight integration with, conventional computer-aided geometric design tools; it can generate synthetic graffiti content for urban environments in games and in movies, and it can guide robotic or fabrication systems that materialise its output with physical drawing media. The thesis is divided into two main parts. The first part describes a set of stroke primitives: building blocks that can be combined to generate different designs that resemble graffiti or calligraphy. These primitives mimic the process typically used to design graffiti letters and exploit well-known principles of motor control to model the way in which an artist moves when incrementally tracing stylised letter forms. The second part demonstrates how these stroke primitives can be automatically recovered from input geometry defined in vector form, such as the digitised traces of writing made by a user or the glyph outlines of a font. This procedure converts the input geometry into a seed that can be transformed into a variety of calligraphic and graffiti stylisations, which depend on parametric variations of the strokes.
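
    To give a flavour of motor-control-based stroke primitives, the sketch below chains minimum-jerk segments (Flash and Hogan's classic model of smooth hand movements) through a few control points. The minimum-jerk model and the control points are illustrative assumptions for this example, not the thesis's actual primitive parameterisation.

```python
import numpy as np

def minimum_jerk(p0, p1, n=50):
    """Point-to-point minimum-jerk trajectory: a standard motor-control model of smooth
    hand movement, used here as a stand-in for a single stroke primitive."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    s = 10 * t**3 - 15 * t**4 + 6 * t**5     # smooth profile with zero velocity/acceleration at both ends
    return np.asarray(p0) + s * (np.asarray(p1) - np.asarray(p0))

def stroke(control_points, n=50):
    """Chain minimum-jerk segments through a sequence of control points to sketch a letter stroke."""
    segments = [minimum_jerk(a, b, n) for a, b in zip(control_points[:-1], control_points[1:])]
    return np.vstack(segments)

# A hypothetical S-like stroke defined by four control points
path = stroke([(0, 0), (1, 1), (0, 2), (1, 3)])
print(path.shape)   # (150, 2): a dense polyline that could drive a brush model, plotter, or robot arm
```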