    The e-Volving Picturebook: Examining the Impact of New e-Media/Technologies On Its Form, Content and Function (And on the Child Reader)

    The technology of the codex book and the habit of reading currently appear to be under attack for a variety of reasons explored in the Introduction of this Dissertation. One natural response to attack is an effort to adapt in a bid to survive. Noël Carroll, a leading American philosopher in the contemporary philosophy of art, touches on this concept in his discussion of the evolution of a new medium in his article, “Medium Specificity Arguments and Self-Consciously Invented Arts: Film, Video, and Photography,” from his 1996 Cambridge University Press text, Theorizing the Moving Image. Carroll proposes that any new medium undergoes phases of development (and I include new technology under that umbrella). After examining Carroll’s theory, this Dissertation attempts to apply it to the Children’s Picturebook Field, exploring the hypothesis that the published children’s narrative does evolve, has already evolved historically in response to other media/technologies, and is currently “e-volving” in response to emerging “e-media.” This discussion examines ways new media (particularly emerging e-media) affect the form, content, and function of the published children’s narrative (with primary focus on the picturebook form), and includes some examination of the child reader’s response to those changes. Chapter One explores the formation of the question and its value, and reviews the available literature. Chapter Two compares the effects of an older sub-genre, the paper-engineered picturebook, with those of emerging e-picturebooks. Chapter Three compares the Twentieth Century Artist’s Book to picturebooks created by select past and current picturebook creators. Chapter Four first considers the shifting cultural mindset of Western Culture from a linear, word-based outlook to the non-linear, more visual approach fostered by the World Wide Web and supporting “screen” technologies; it then identifies and examines current changes in the form, content, and function of designed picturebooks that are developing “on the page” within the constraints of the codex book format. The Dissertation concludes with a review of Leonard Shlain’s 1998 text, The Alphabet Versus the Goddess: The Conflict Between Word and Image, using it as a departure point for final observations regarding the unique strengths of the children’s picturebook as a learning tool for young children.

    On the Evolution of the Heavenly Spheres: An Enactive Approach to Cosmography

    The ability to view the world from multiple perspectives is essential for tackling complex, interconnected challenges. Yet conventional academic structures are designed to produce knowledge through ever-increasing specialization and compartmentalization. This fragmentation is often reinforced by tacit dualistic assumptions that prioritize linear thinking and abstract ways of knowing. Though the need for integrated approaches has been widely acknowledged, effective techniques for transcending disciplinary boundaries remain elusive. This thesis describes a practical strategy that uses immersive visualizations to cultivate transdisciplinary perspectives. It develops an enactive approach to cosmography, contending that processes of visualizing and interpreting the cosmos iteratively shape ‘views’ of the ‘world.’ The archetypal trope of the heavenly sphere is examined to demonstrate the significance of its interpretations in the history of ideas. Action research and mixed methods are employed to elucidate the theoretical considerations, cultural relevance, and practical consequences of this approach. The study begins with an investigation into the recurring appearance of the heavenly sphere across time, in which its embodied origins, metaphorical influence, and material embodiments are considered. Particular attention is given to how cosmographic tools and techniques have facilitated imaginary ‘flights’ through the heavens, from the ecstatic bird’s-eye view of the shaman to the ‘Archimedean point’ of modern science. It then examines how these cosmographic practices have shaped cosmological beliefs and paradigmatic assumptions. Next, the practical utility of this approach is demonstrated through the development of cosmographic hermeneutics, a technique that uses visual heuristics to interpret cosmic models from transdisciplinary world views. Finally, the performative practice of cosmotroping is described, in which cosmographic hermeneutics are applied to re-imagine the ancient dream of the transcendent ‘cosmic journey’ within immersive vision theaters. This study concludes that the re-emergence of the heavenly sphere within the contemporary Digital Universe Atlas provides a leverage point for illuminating the complexity of knowledge-production processes. It is claimed that this research has produced a practical strategy for demonstrating that the ultimate Archimedean point is the ability to recognize the limits of our own knowledge, a crucial first step in cultivating much-needed multi-perspectival and paradoxical spherical thinking.

    Comments of the Cordell Institute for Policy in Medicine & Law at Washington University in St. Louis

    The Federal Trade Commission—with its broad, independent grant of authority and statutory mandate to identify and prevent unfair and deceptive trade practices—is uniquely situated to prevent and remedy unfair and deceptive data privacy and data security practices. In an increasingly digitized world, data collection, processing, and transfer have become integral to market interactions. Our personal and commercial experiences are now mediated by powerful, information-intensive firms that hold the power to shape what consumers see, how they interact, which options are available to them, and how they make decisions. That power imbalance leaves all consumers exposed and vulnerable. We all share data about ourselves with these platforms, often unwittingly, and in doing so leave ourselves at risk of their manipulation and control. The Commission envisions “[a] vibrant economy fueled by fair competition and an empowered, informed public.”1 But this vision cannot be realized in the absence of meaningful consumer trust. Trust is the oxygen necessary for consumer choice to survive. Where trust is present, consumers are empowered to invest in companies and to share their data knowing they will not be betrayed, manipulated, deceived, or treated unfairly. But where trust is weakened or absent, the marketplace breaks down and becomes fertile ground for market failures contrary to the interests of consumers and competition. Recognizing the importance of trust in digital markets, our comments are organized around three arguments: (i) commercial surveillance is the correct label for the data practices observed in the market; (ii) notice and choice, centered on the fiction of consumer consent, has failed as a regulatory regime; and (iii) the Commission should ground its future data privacy rules in concepts of trust, loyalty, and relational vulnerability.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex, lower-level distributed computing frameworks such as MPI. We propose to overcome this complexity by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Game of Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations, yet general enough to make our results applicable to various lattice-based a-life models. We implement our algorithms and empirically evaluate their performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique, called strip partitioning, can reduce the execution time of continuous Life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
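    To make the general MR streaming pattern concrete, below is a minimal sketch of one Game-of-Life generation as a Hadoop Streaming mapper/reducer pair in Python. It is an illustrative reconstruction, not the authors’ optimized algorithm: the script names, the sparse input format (one live cell per line as "x,y"), and the cell-at-a-time decomposition are all assumptions of this sketch.

        # life_map.py -- hypothetical mapper for one Game of Life generation
        # under Hadoop Streaming. Input: one live cell per line, as "x,y".
        # For each live cell, emit an aliveness marker "#" for the cell itself
        # and a neighbour count of 1 for each of its eight neighbours.
        import sys

        for line in sys.stdin:
            line = line.strip()
            if not line:
                continue
            x, y = map(int, line.split(","))
            print(f"{x},{y}\t#")                        # this cell is currently alive
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        print(f"{x + dx},{y + dy}\t1")  # one live neighbour

        # life_reduce.py -- hypothetical reducer. Hadoop Streaming delivers
        # records sorted by key, so all values for a cell arrive consecutively;
        # sum the neighbour counts and apply Conway's rules: a cell lives in the
        # next generation iff it has 3 live neighbours, or 2 while already alive.
        import sys

        def emit(cell, alive, count):
            if count == 3 or (alive and count == 2):
                print(cell)  # live cell of the next generation, same "x,y" format

        cell, alive, count = None, False, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t")
            if key != cell:
                if cell is not None:
                    emit(cell, alive, count)
                cell, alive, count = key, False, 0
            if value == "#":
                alive = True
            else:
                count += int(value)
        if cell is not None:
            emit(cell, alive, count)

    Because the reducer’s output uses the same format as the mapper’s input, each job advances the grid by exactly one generation, and a simulation simply chains jobs. The 64% speedup the authors report comes from optimizations such as strip partitioning that this naive cell-at-a-time formulation does not include.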

    Phylogenetics in the Genomic Era

    Molecular phylogenetics was born in the middle of the 20th century, when the advent of protein and DNA sequencing offered a novel way to study the evolutionary relationships between living organisms. The first 50 years of the discipline can be seen as a long quest for resolving power. The goal – reconstructing the tree of life – seemed to be unreachable, the methods were heavily debated, and the data limiting. Maybe for these reasons, even the relevance of the whole approach was repeatedly questioned, as part of the so-called molecules versus morphology debate. Controversies often crystallized around long-standing conundrums, such as the origin of land plants, the diversification of placental mammals, or the prokaryote/eukaryote divide. Some of these questions were resolved as gene and species samples increased in size. Over the years, molecular phylogenetics gradually evolved from a brilliant, revolutionary idea into a mature research field centred on the problem of reliably building trees.

    This logical progression was abruptly interrupted in the late 2000s. High-throughput sequencing arose and the field suddenly moved into something entirely different. Access to genome-scale data profoundly reshaped the methodological challenges, while opening an amazing range of new application perspectives. Phylogenetics left the realm of systematics to occupy a central place in one of the most exciting research fields of this century – genomics. This is what this book is about: how we do trees, and what we do with trees, in the current phylogenomic era.

    One obvious, practical consequence of the transition to genome-scale data is that the most widely used tree-building methods, which are based on probabilistic models of sequence evolution, require intensive algorithmic optimization to be applicable to current datasets. This problem is considered in Part 1 of the book, which includes a general introduction to Markov models (Chapter 1.1) and a detailed description of how to optimally design and implement Maximum Likelihood (Chapter 1.2) and Bayesian (Chapter 1.4) phylogenetic inference methods. The importance of the computational aspects of modern phylogenomics is such that efficient software development is a major activity of numerous research groups in the field. We acknowledge this and have included seven "How to" chapters presenting recent updates of major phylogenomic tools – RAxML (Chapter 1.3), PhyloBayes (Chapter 1.5), MACSE (Chapter 2.3), Bgee (Chapter 4.3), RevBayes (Chapter 5.2), Beagle (Chapter 5.4), and BPP (Chapter 5.6).

    Genome-scale data sets are so large that statistical power, which had been the main limiting factor of phylogenetic inference during previous decades, is no longer a major issue. Massive data sets instead tend to amplify the signal they deliver – be it biological or artefactual – so that bias and inconsistency, rather than sampling variance, are the main problems with phylogenetic inference in the genomic era. Part 2 covers the issues of data quality and model adequacy in phylogenomics. Chapter 2.1 provides an overview of current practice and makes recommendations on how to avoid the more common biases. Two chapters review the challenges and limitations of two key steps of phylogenomic analysis pipelines, sequence alignment (Chapter 2.2) and orthology prediction (Chapter 2.4), which largely determine the reliability of downstream inferences. The performance of tree-building methods is also the subject of Chapter 2.5, in which a new approach is introduced to assess the quality of gene trees based on their ability to correctly predict ancestral gene order.

    Analyses of multiple genes typically recover multiple, distinct trees. Maybe the biggest conceptual advance induced by the phylogenetic-to-phylogenomic transition is the suggestion that one should not simply aim to reconstruct “the” species tree, but rather be prepared to make sense of forests of gene trees. Chapter 3.1 reviews the numerous reasons why gene trees can differ from each other and from the species tree, and what the implications are for phylogenetic inference. Chapter 3.2 focuses on gene tree/species tree reconciliation methods that account for gene duplication/loss and horizontal gene transfer among lineages. Incomplete lineage sorting is another major source of phylogenetic incongruence among loci, which has recently gained attention and is covered by Chapter 3.3. Chapter 3.4 concludes this part by taking a user’s perspective and examining the pros and cons of concatenation versus separate analysis of gene sequence alignments.

    Modern genomics is comparative, and phylogenetic methods are key to a wide range of questions and analyses relevant to the study of molecular evolution. This is covered by Part 4. We argue that genome annotation, whether structural or functional, can only be properly achieved in a phylogenetic context. Chapters 4.1 and 4.2 review the power of these approaches and their connections with the study of gene function. Molecular substitution rates play a key role in our understanding of the prevalence of nearly neutral versus adaptive molecular evolution, and of the influence of species traits on genome dynamics (Chapter 4.4). The analysis of substitution rates, and particularly the detection of positive selection, requires sophisticated methods and models of coding sequence evolution (Chapter 4.5). Phylogenomics also offers a unique opportunity to explore evolutionary convergence at the molecular level, thus addressing the long-standing question of predictability versus contingency in evolution (Chapter 4.6).

    The development of phylogenomics, as reviewed in Parts 1 through 4, has resulted in a powerful conceptual and methodological corpus, which is often reused for addressing problems of interest to biologists from other fields. Part 5 illustrates this application potential via three selected examples. Chapter 5.1 addresses the link between phylogenomics and palaeontology, i.e., how to optimally combine molecular and fossil data for estimating divergence times. Chapter 5.3 emphasizes the importance of the phylogenomic approach in virology and its potential to trace the origin and spread of infectious diseases in space and time. Finally, Chapter 5.5 recalls why phylogenomic methods and the multi-species coalescent model are key to addressing the problem of species delimitation – one of the major goals of taxonomy.

    It is hard to predict where phylogenomics as a discipline will stand in even 10 years. Maybe a novel technological revolution will bring it to yet another level? We strongly believe, however, that tree thinking will remain pivotal in the treatment and interpretation of the deluge of genomic data to come. Perhaps a prefiguration of the future of our field is provided by the daily monitoring of the current Covid-19 outbreak via the phylogenetic analysis of coronavirus genomic data in quasi real time – a topic of major societal importance, contemporary to the publication of this book, in which phylogenomics is instrumental in helping to fight disease.
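    As a concrete instance of the probabilistic models of sequence evolution introduced in Chapter 1.1, consider the Jukes-Cantor (JC69) model, the simplest Markov model of nucleotide substitution, in which every change occurs at the same rate. A minimal worked example (standard textbook material rather than an excerpt from the book), writing mu for the overall substitution rate and t for the elapsed time along a branch:

        % JC69 transition probabilities: the chance that a site in nucleotide
        % state i is observed in state j after time t, at substitution rate mu.
        P_{ii}(t) = \frac{1}{4} + \frac{3}{4}\, e^{-\frac{4}{3}\mu t},
        \qquad
        P_{ij}(t) = \frac{1}{4} - \frac{1}{4}\, e^{-\frac{4}{3}\mu t}
        \quad (i \neq j).

    Maximum Likelihood inference (Chapter 1.2) scores a candidate tree by multiplying transition probabilities of this kind along its branches and summing over the unobserved ancestral states at every alignment site; making that computation fast on genome-scale data is exactly the algorithmic optimization problem Part 1 addresses.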