62 research outputs found

    PyPlutchik: Visualising and comparing emotion-annotated corpora

    The increasing availability of textual corpora and of data fetched from social networks is fuelling a huge production of works based on the model proposed by the psychologist Robert Plutchik, often referred to simply as the “Plutchik wheel”. Related research ranges from descriptions of annotation tasks to emotion-detection tools. Visualisation of such emotions is traditionally carried out using the most popular layouts, such as bar plots or tables, which are, however, sub-optimal. The classic representation of Plutchik’s wheel follows the principles of proximity and opposition between pairs of emotions: spatial proximity in this model is also semantic proximity, as adjacent emotions elicit a complex emotion (a primary dyad) when triggered together; spatial opposition is likewise semantic opposition, as positive emotions sit opposite negative ones. The most common layouts preserve neither feature, nor do they allow different corpora to be compared at a glance, which is hard to achieve with basic design solutions. We introduce PyPlutchik, a Python module specifically designed for the visualisation of Plutchik’s emotions in texts or in corpora. The PyPlutchik package is available as a GitHub repository (http://github.com/alfonsosemeraro/pyplutchik) or through the pip and conda installation commands; for any enquiry about usage or installation, feel free to contact the corresponding author. PyPlutchik draws Plutchik’s flower with each emotion’s petal sized according to how strongly that emotion is detected or annotated in the corpus, also representing three degrees of intensity for each of them. Notably, PyPlutchik also lets users display primary, secondary, tertiary, and opposite dyads in a compact, intuitive way. We substantiate our claim that PyPlutchik outperforms other classic visualisations when displaying Plutchik emotions, and we showcase a few examples of our module’s most compelling features.
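
    The input such a visualisation consumes is, in essence, a mapping from Plutchik's eight basic emotions to normalised scores. The sketch below (hypothetical helper and made-up counts, not code from the paper) shows one way to derive such scores from corpus annotation counts:

```python
# Hypothetical sketch: turn raw annotation counts from a corpus into the
# normalised 0-1 scores per basic emotion that a Plutchik-style plot expects.
PLUTCHIK_EMOTIONS = [
    "joy", "trust", "fear", "surprise",
    "sadness", "disgust", "anger", "anticipation",
]

def emotion_scores(counts, n_documents):
    """Fraction of documents in which each basic emotion was annotated."""
    return {e: counts.get(e, 0) / n_documents for e in PLUTCHIK_EMOTIONS}

# Toy annotation counts over a 200-document corpus (made-up numbers).
scores = emotion_scores({"joy": 120, "trust": 80, "anger": 30}, 200)
print(scores["joy"])   # 0.6
```

    With the pyplutchik package installed, a dictionary of this shape could then be handed to the module's plotting entry point; the exact call signature should be checked against the repository linked above.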

    Real-Time deep image rendering and order independent transparency

    In computer graphics, some operations can be performed in either object space or image space. Image-space computation can be advantageous, especially with the high parallelism of GPUs, improving speed, accuracy, and ease of implementation. For many image-space techniques, the information contained in regular 2D images is limiting. Recent graphics-hardware features, namely atomic operations and dynamic memory-location writes, now make it possible to capture and store all per-pixel fragment data from the rasterizer in a single pass in what we call a deep image. A deep image provides a state where all fragments are available and gives a more complete image-based geometry representation, opening new possibilities in image-based rendering techniques. This thesis investigates deep images and their growing use in real-time image-space applications. A focus is on new techniques for improving the performance of fundamental operations, including construction, storage, fast fragment sorting, and sampling. A core and driving application is order-independent transparency (OIT). A number of deep-image sorting improvements are presented, through which an order-of-magnitude performance increase is achieved, significantly advancing the ability to perform transparency rendering in real time. In the broader context of image-based rendering, we look at deep images as a discretized 3D geometry representation and discuss sampling techniques for raycasting and antialiasing with an implicit fragment-connectivity approach. Using these ideas, a more computationally complex application is investigated: image-based depth of field (DoF). Deep images are used to provide partial occlusion, and in particular a form of deep-image mipmapping allows a fast approximate defocus blur of up to full-screen size.
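
    The core per-pixel operation behind order-independent transparency, sorting a pixel's captured fragments by depth and blending them back-to-front with the "over" operator, can be sketched on the CPU (an illustrative sketch of the technique, not the thesis's GPU implementation):

```python
def composite_over(fragments):
    """Sort one pixel's fragments far-to-near and apply the 'over' operator.

    fragments: list of (depth, (r, g, b), alpha); larger depth = farther away.
    Returns the blended RGB for the pixel over a black background.
    """
    out = (0.0, 0.0, 0.0)
    # Back-to-front: farthest fragment first, as correct alpha blending requires.
    for depth, color, a in sorted(fragments, key=lambda f: f[0], reverse=True):
        out = tuple(a * c + (1.0 - a) * o for c, o in zip(color, out))
    return out

# Two overlapping transparent fragments: a far red one under a near green one.
pixel = composite_over([
    (2.0, (1.0, 0.0, 0.0), 0.5),   # far, red, 50% opaque
    (1.0, (0.0, 1.0, 0.0), 0.5),   # near, green, 50% opaque
])
print(pixel)  # (0.25, 0.5, 0.0)
```

    The "over" operator is order-dependent, which is exactly why unsorted rasterizer output produces wrong transparency and why fast per-pixel fragment sorting matters.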

    Bioimage Data Analysis Workflows ‒ Advanced Components and Methods

    This open access textbook provides detailed explanations of how to design and construct image analysis workflows to successfully conduct bioimage analysis. Addressing the main challenges in image data analysis, where acquisition by powerful imaging devices results in very large amounts of collected image data, the book discusses techniques relying on batch and GPU programming, as well as on powerful deep-learning-based algorithms. In addition, downstream data-processing techniques are introduced, such as Python libraries for data organization, plotting, and visualization. Finally, by studying how individual ideas are implemented in the workflows, readers are carefully guided through the way the parameters driving biological systems are revealed by analyzing image data. These case studies include segmentation of plant-tissue epidermis, analysis of the spatial pattern of eye development in fruit flies, and analysis of collective cell-migration dynamics. The presented content extends the Bioimage Data Analysis Workflows textbook (Miura, Sladoje, 2020), published in this same series, with new contributions and advanced material, while preserving the well-appreciated pedagogical approach adopted and promoted during the training schools for bioimage analysis organized within NEUBIAS, the Network of European Bioimage Analysts. This textbook is intended for advanced students in various fields of the life sciences and biomedicine, as well as for staff scientists and faculty members who conduct regular quantitative analyses of microscopy images.
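
    As an illustration of the batch-processing style such workflows rely on, the sketch below thresholds a small batch of synthetic images with Otsu's method implemented in plain NumPy. It is a generic example of the pattern, not code from the book:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the threshold maximising between-class variance."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = image.size
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var, w0, sum0 = 0, 0.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]                          # pixels at intensity <= t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                         # background mean
        m1 = (sum_all - sum0) / (total - w0)   # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Batch step: threshold each image in a list and report its foreground fraction.
rng = np.random.default_rng(0)
batch = [np.where(rng.random((64, 64)) < 0.3, 200, 20) for _ in range(3)]
for img in batch:
    t = otsu_threshold(img)
    # t lands between the dark (20) and bright (200) modes of these images.
    print(t, round(float((img > t).mean()), 2))
```

    In a real pipeline the loop body would instead iterate over files on disk, with the per-image step dispatched to workers or to the GPU.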

    VRCodes : embedding unobtrusive data for new devices in visible light

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012; by Grace Woo. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 97-101). This thesis envisions a public space populated with active visible surfaces which appear different to a camera than to the human eye. Thus, they can act as general digital interfaces that transmit machine-compatible data as well as provide relative orientation without being obtrusive. We introduce a personal transceiver peripheral, and demonstrate that this visual environment enables human participants to hear sound only from the location they are looking at, authenticate with proximal surfaces, and gather otherwise imperceptible data from an object in sight. We present a design methodology that assumes the availability of many independent and controllable light transmitters, where each individual transmitter produces light at different color wavelengths. Today, controllable light transmitters take the form of digital billboards, signage and overhead lighting built for human use; light-capturing receivers take the form of mobile cameras and personal video camcorders. Following the software-defined approach, we leverage screens and cameras as parameterized hardware peripherals, allowing flexibility and development of the proposed framework on general-purpose computers in a manner that is unobtrusive to humans. We develop VRCodes, which display spatio-temporally modulated metamers on active screens, conveying digital and positional information to a rolling-shutter camera; and physically modified optical setups which encode data in a point-spread function, exploiting the camera's wide aperture. These techniques exploit how the camera sees something different from the human. We quantify the full potential of the system by characterizing basic bounds of a parameterized transceiver hardware along with the medium in which it operates. Evaluating performance highlights the underutilized temporal, spatial and frequency dimensions available to the interaction designer concerned with human perception. Results suggest that one-way point-to-point transmission is good enough to extend the techniques toward a two-way bidirectional model with realizable hardware devices. The new visual environment contains a second data layer for machines that is synthetic and quantifiable; human interactions serve as the context.
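
    The rolling-shutter receive path can be illustrated with a toy simulation: each sensor row is exposed at a slightly later time, so a screen flickering faster than the frame rate is captured as horizontal bands whose pattern encodes bits. This is a deliberate simplification for intuition only; it does not model the metamer-based encoding itself:

```python
def rolling_shutter_rows(bits, rows, rows_per_bit):
    """Toy rolling-shutter capture of a temporally modulated screen.

    The screen is uniformly bright (1) or dark (0) at any instant, switching
    once per bit period. Row r is exposed at time step r, so the captured
    frame shows the bit sequence as horizontal bands.
    """
    return [bits[(r // rows_per_bit) % len(bits)] for r in range(rows)]

# Four bits, 16 sensor rows, 4 rows exposed per bit period.
frame = rolling_shutter_rows(bits=[1, 0, 1, 1], rows=16, rows_per_bit=4)
print(frame)  # four bands of four rows each: 1, 0, 1, 1
```

    Decoding reverses the mapping: band intensities read top-to-bottom recover the transmitted bit sequence, which is why a camera can see data where the human eye sees a steady surface.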

    Exploring the optical perception of image within glass

    Within the contemporary world, 3D film and television imagery is at the cutting edge of visual technology, but for centuries we have been captivated by the creation of visual illusions/allusions that play with our perception of the world, from the auto-stereoscopic barrier methods pioneered in the late 17th century by the French painter G. A. Bois-Clair, to the ‘Op’ art movement of the 1960s and, more recently, Patrick Hughes’ ‘reverse perspective’ paintings. By building on these new and old technologies I have extended my own practice, which engages with the 2D image as a 3D allusion/illusion in glass, examining how this type of image can be created and perceived within glass. I have explored theories of optical perception in connection with the binocular recognition of depth and space, as well as kinetic cues to distance through motion parallax monitoring and assumptions about default linear perspective, light and inference within our personal schemata. Here, ‘optical illusion’ is used to mean an instance of a wrong or misinterpreted perception of a sensory experience: the distortion of the senses that reveals how the brain organises and interprets visual information, and an individual’s ability to perceive depth, 3D form and motion. ‘Allusion’ is used to imply a symbolic or covert reference. My practical research focuses on the perceived creation of the 3D image within glass and explores the notion of glass as a facilitator in working with, and challenging, the themes of 3D image perception. I have particularly addressed artistic spatial-illusionary methods, reverse-perspective techniques, auto-stereoscopic image-based systems, parallax stereograms, and lenticular print and lens technology. Through building on my previous practice of working with multiple-layered images within cast glass, combined with more complex and scientific optical methods, I have explored the perception of the image by working with new and old 3D technologies in order to produce a body of work which examines this perception within glass. During my research I have developed an original casting process, a vacuum-casting lost-wax process for glass, in addition to producing an accurate, industry-standard lenticular glass lens. This research intends to provide a theoretical basis for new glass-working techniques, both within the glass artist’s studio and in the commercial world of print, towards applications within architectural design, installation art and image-based artwork in general. This thesis is therefore a summation of the research that I have undertaken over the past six years and an attempt to give substance to the ideas and references that have preoccupied my investigations over that period. I have structured the thesis into three themes: perspective, perception and process; but these three elements were never separate from each other, and not only do they depend on each other, their purpose is, in some way, to combine in the creation of my finished pieces.

    3D Organization of Eukaryotic and Prokaryotic Genomes

    There is a complex mutual interplay between three-dimensional (3D) genome organization and cellular activities in bacteria and eukaryotes. The aim of this thesis is to investigate such structure-function relationships. A main part of this thesis deals with the study of three-dimensional genome organization using novel techniques that detect genome-wide contacts with next-generation sequencing. These so-called chromosome conformation capture-based methods, such as 5C and Hi-C, give deep insights into the architecture of the genome inside the nucleus, even at small scales. We shed light on the question of how the vastly increasing Hi-C data can generate new insights about the way the genome is organized in 3D. To this end, we first present the typical Hi-C data-processing workflow used to obtain Hi-C contact maps and show potential pitfalls in the interpretation of such maps, using our own data pipeline and publicly available Hi-C data sets. Subsequently, we focus on approaches to modeling 3D genome organization based on contact maps. In this context, a computational tool was developed which interactively visualizes contact maps alongside complementary genomic data tracks. Drawing on machine learning, and with the help of probabilistic graphical models, we developed a tool that detects the compartmentalization structure within contact maps on multiple scales. In a further project, we propose and test one possible mechanism for the observed compartmentalization within contact maps of genomes across multiple species: dynamic formation of loops within domains. In the context of the 3D organization of bacterial chromosomes, we present the first direct evidence for global restructuring by long-range interactions of a DNA-binding protein. Using Hi-C and live-cell imaging of DNA loci, we show that the DNA-binding protein Rok forms insulator-like complexes looping the B. subtilis genome over large distances. This biological mechanism agrees with our model based on the dynamic formation of loops affecting domain formation in eukaryotic genomes. We further investigate the spatial segregation of the E. coli chromosome during cell division. In particular, we are interested in the positioning of the chromosomal replication origin region based on its interaction with the protein complex MukBEF. We tackle the problem using a combined approach of stochastic and polymer simulations. Last but not least, we develop a completely new methodology, based on topological data analysis, to analyze single-molecule localization microscopy images. Using this new approach in the analysis of irradiated cells, we are able to show that the topology of repair foci can be categorized depending on their distance to heterochromatin.
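
    A common baseline for reading compartmentalization out of a contact map, distinct from the probabilistic graphical-model tool described above, is to take the sign of the leading eigenvector of the map's correlation matrix. A toy NumPy sketch on a synthetic two-compartment matrix:

```python
import numpy as np

def compartments(contact_map):
    """A/B-style compartment call: sign of the leading eigenvector of the
    contact map's row-correlation matrix."""
    corr = np.corrcoef(contact_map)
    eigvals, eigvecs = np.linalg.eigh(corr)
    leading = eigvecs[:, np.argmax(eigvals)]   # eigh returns ascending eigenvalues
    return np.sign(leading)

# Synthetic map: two blocks of bins with enriched within-block contacts.
n = 6
cmap = np.full((n, n), 1.0)
cmap[:3, :3] += 9.0   # compartment 1 enriched
cmap[3:, 3:] += 9.0   # compartment 2 enriched
labels = compartments(cmap)
print(labels)  # first three bins share one sign, last three bins the other
```

    Real pipelines apply this to a normalized observed/expected map; the eigenvector's overall sign is arbitrary, so only the grouping of bins is meaningful.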

    Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems

    There has been great interest in researching and implementing effective technologies for the capture, processing, and display of 3D images, as evidenced by widespread international research and activities on 3D technologies. There is a large number of journal and conference papers on 3D systems, as well as research and development efforts in government, industry, and academia on this topic, for broad applications including entertainment, manufacturing, security and defense, and biomedicine. Among these technologies, integral imaging is a promising approach for its ability to work with polychromatic scenes and under incoherent or ambient light, in scenarios from macroscales to microscales. Integral imaging systems and their variations, also known as plenoptic or light-field systems, are applicable in many fields and have been reported in many applications, such as entertainment (TV, video, movies), industrial inspection, security and defense, and biomedical imaging and displays. This tutorial is addressed to students and researchers in different disciplines who are interested in learning about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide readers with a tutorial that teaches fundamental principles as well as more advanced concepts needed to understand, analyze, and implement integral-imaging and light-field-type capture and display systems. The tutorial begins by reviewing the fundamentals of imaging and then progresses to more advanced topics in 3D imaging and displays. More specifically, it first covers the geometrical-optics and wave-optics tools for understanding and analyzing optical imaging systems. We then use these tools to describe integral imaging, light-field, or plenoptic systems; the methods for implementing 3D capture procedures and monitors; and their properties, resolution, field of view, performance, and the metrics used to assess them. We illustrate the principles of integral-imaging capture and display systems with simple laboratory setups and experiments, and we also discuss 3D biomedical applications, such as integral microscopy.
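
    The geometrical-optics starting point for analyzing any such capture system is the thin-lens equation, 1/f = 1/d_o + 1/d_i, with lateral magnification m = -d_i/d_o. A quick numerical illustration:

```python
def image_distance(f, d_o):
    """Thin-lens equation: solve 1/f = 1/d_o + 1/d_i for the image distance."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(f, d_o):
    """Lateral magnification m = -d_i / d_o (negative: inverted image)."""
    return -image_distance(f, d_o) / d_o

# Object 300 mm in front of a 100 mm lens: image forms ~150 mm behind it,
# inverted and at half size.
print(image_distance(100.0, 300.0))   # ~150.0 mm
print(magnification(100.0, 300.0))    # ~-0.5
```

    In an integral-imaging capture stage, the same relation is applied per lenslet of the microlens array to locate each elemental image.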

    Simulation of accommodation and low-order aberrations of the human eye using light-gathering trees

    In this work, we present two practical solutions for simulating accommodation and low-order aberrations of optical systems, such as the human eye. Taking into account pupil size (aperture) and accommodation (focal distance), our approaches model the corresponding point spread function and produce realistic depth-dependent simulations of low-order visual aberrations (e.g., myopia, hyperopia, and astigmatism). In the first solution, we use wave optics to extend the notion of the depth point spread function, which originally relies on ray tracing, generating point spread functions by means of Fourier optics instead. In the second technique, we use geometric optics to build a light-gathering tree data structure, presenting a solution to the problem of artifacts caused by the absence of occluded pixels in the input discretized depth images. As a result, the output images show seamless transitions among elements at different scene depths. We demonstrate the effectiveness of our approaches through a series of quantitative and qualitative experiments on images with depth obtained from real environments. Our results achieved SSIM values above 0.94 and PSNR values above 32.0 in all objective evaluations, indicating strong agreement with the ground truth.
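
    The depth-dependent blur that both techniques model follows from first-order geometric optics: a point at distance d, viewed by a system accommodated to distance d_f, is defocused by |1/d_f - 1/d| diopters, and the resulting blur angle scales with the pupil diameter. A small sketch with illustrative parameter values (standard textbook relations, not the paper's actual PSF model):

```python
def defocus_diopters(focus_dist_m, obj_dist_m):
    """Dioptric defocus of a point at obj_dist_m when the system is
    accommodated (focused) to focus_dist_m; distances in metres."""
    return abs(1.0 / focus_dist_m - 1.0 / obj_dist_m)

def blur_angle_rad(pupil_diameter_m, focus_dist_m, obj_dist_m):
    """First-order geometric blur angle: pupil diameter times dioptric defocus."""
    return pupil_diameter_m * defocus_diopters(focus_dist_m, obj_dist_m)

# A 4 mm pupil focused at 0.5 m: a point at 0.5 m is sharp; a point at 2 m
# is defocused by 1.5 D, giving a blur angle of roughly 0.006 rad.
print(blur_angle_rad(0.004, 0.5, 0.5))   # 0.0
print(blur_angle_rad(0.004, 0.5, 2.0))
```

    A myopic eye corresponds to a focus distance that cannot be pushed to infinity, so distant objects always carry residual dioptric defocus and hence a nonzero blur kernel.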