    Analyzing interfaces and workflows for light field editing

    With the increasing number of available consumer light field cameras, such as Lytro, Raytrix, or Pelican Imaging, this new form of photography is progressively becoming more common. However, there are still very few tools for light field editing, and the interfaces to create those edits remain largely unexplored. Given the extended dimensionality of light field data, it is not clear what the most intuitive interfaces and optimal workflows are, in contrast with well-studied two-dimensional (2-D) image manipulation software. In this work, we provide a detailed description of subjects' performance and preferences for a number of simple editing tasks, which form the basis for more complex operations. We perform a detailed state sequence analysis and hidden Markov chain analysis based on the sequence of tools and interaction paradigms users employ while editing light fields. These insights can aid researchers and designers in creating new light field editing tools and interfaces, thus helping to close the gap between 4-D and 2-D image editing.
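
    As a rough illustration of the state sequence analysis described above, the sketch below estimates a first-order Markov transition matrix over editing-tool states from logged tool sequences. This is a hedged, minimal Python example: the tool names, session logs, and encoding are invented for illustration, and the study's actual states and fitting procedure may differ.

        import numpy as np

        # Hypothetical logs of the tools each subject used, in order.
        # Real data would come from recorded light field editing sessions.
        sessions = [
            ["select", "depth_pick", "paint", "paint", "select"],
            ["depth_pick", "select", "paint", "select", "depth_pick"],
            ["select", "paint", "paint", "depth_pick", "paint"],
        ]

        tools = sorted({t for s in sessions for t in s})
        index = {t: i for i, t in enumerate(tools)}
        n = len(tools)

        # Count transitions between consecutive tool states.
        counts = np.zeros((n, n))
        for s in sessions:
            for a, b in zip(s, s[1:]):
                counts[index[a], index[b]] += 1

        # Row-normalize into a Markov transition matrix; rows with no
        # observations fall back to uniform to avoid division by zero.
        row_sums = counts.sum(axis=1, keepdims=True)
        transitions = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n)

        print(tools)
        print(transitions.round(2))

    A hidden Markov model would add latent states on top of these observed tool symbols, but even raw transition counts expose the dominant tool-switching patterns.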

    FAST: FAST Analysis of Sequences Toolbox.

    FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought.
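
    To make the grep-over-records idea concrete, here is a minimal Python sketch of a fasgrep-style filter that keeps FastA records whose description line matches a regular expression. This is an illustrative re-implementation of the concept only; it is not FAST's actual Perl code, options, or command-line interface.

        import re
        import sys

        def fasta_records(stream):
            """Yield (header, sequence) pairs from a FastA stream."""
            header, seq = None, []
            for line in stream:
                line = line.rstrip("\n")
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(seq)
                    header, seq = line, []
                else:
                    seq.append(line)
            if header is not None:
                yield header, "".join(seq)

        def main():
            pattern = re.compile(sys.argv[1])
            for header, seq in fasta_records(sys.stdin):
                if pattern.search(header):  # filter on the description line
                    print(header)
                    print(seq)

        if __name__ == "__main__":
            main()

    Invoked as, for example, "python fasgrep_like.py coli < seqs.fa", such a filter would be one link in a pipeline of record-oriented tools, which is the composition style FAST borrows from the GNU Textutils.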

    Community-developed checklists for publishing images and image analysis

    Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish the obtained images and image analysis results, there are to date no unified guidelines. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here we present community-developed checklists for preparing light microscopy images and image analysis for publication. These checklists offer authors, readers, and publishers key recommendations for image formatting and annotation, color selection, data availability, and for reporting image analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby heighten the quality of microscopy data in publications.
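
    One checklist topic, reporting image analysis workflows, lends itself to partial automation. The sketch below writes the processing steps and software versions behind a figure to a JSON methods record; it is a minimal illustration with invented field names and steps, not a format taken from the published checklists.

        import json
        import platform
        from importlib.metadata import PackageNotFoundError, version

        def package_version(name):
            """Return an installed package's version, or a placeholder."""
            try:
                return version(name)
            except PackageNotFoundError:
                return "not installed"

        # Hypothetical record of how one published figure panel was produced.
        methods_record = {
            "figure": "Figure 2B",
            "python": platform.python_version(),
            "packages": {p: package_version(p) for p in ("numpy", "scikit-image")},
            "steps": [
                "background subtraction (rolling ball, radius 50 px)",
                "Gaussian blur (sigma 1.0)",
                "Otsu threshold, then connected-component labeling",
            ],
        }

        with open("figure2b_methods.json", "w") as f:
            json.dump(methods_record, f, indent=2)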

    SaDA: From Sampling to Data Analysis—An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data

    One of the most crucial aspects of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is essential for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but in most cases this requires a substantial investment of money, time and technical effort. There is a clear need for a lightweight open-source system that can easily be managed on local servers and handled by individual researchers. Here we present SaDA, software for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA fully integrates the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple, generic software and can be extended and customized for various environmental and biomedical studies.
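
    The core requirement named above, an efficient link between sample data and experimental results, can be illustrated with a minimal relational schema. The following SQLite sketch uses assumed table and column names for a generic monitoring study; it is not SaDA's actual data model.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")

        # Samples and their measurements live in separate tables, linked by a
        # foreign key so every result traces back to exactly one sample.
        conn.executescript("""
        CREATE TABLE samples (
            id INTEGER PRIMARY KEY,
            site TEXT NOT NULL,
            collected_on TEXT NOT NULL
        );
        CREATE TABLE results (
            id INTEGER PRIMARY KEY,
            sample_id INTEGER NOT NULL REFERENCES samples(id),
            probe TEXT NOT NULL,
            signal REAL NOT NULL
        );
        """)

        conn.execute("INSERT INTO samples VALUES (1, 'river_A', '2024-05-01')")
        conn.execute("INSERT INTO results VALUES (1, 1, 'probe_16S_001', 0.82)")

        # Retrieve every measurement together with its sample metadata.
        for row in conn.execute("""
            SELECT s.site, s.collected_on, r.probe, r.signal
            FROM results r JOIN samples s ON s.id = r.sample_id
        """):
            print(row)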

    TERMS: Techniques for electronic resources management

    Librarians and information specialists have been finding ways to manage electronic resources for over a decade now. However, much of this work has been an ad hoc, learn-as-you-go process. The literature on electronic resource management shows this work as being segmented across many of the traditional librarian roles within the library. In addition, the literature shows how management of these resources has driven the development of various management tools in the market and remains the greatest need in the development of next-generation library systems. TERMS is an attempt to create an ongoing, continually developing set of best practices for electronic resource management in libraries.

    Evaluation and improvement of the workflow of digital imaging of fine art reproduction in museums

    Fine arts refer to a broad spectrum of art formats, i.e., painting, calligraphy, photography, architecture, and so forth. Fine art reproduction aims to create surrogates of the original artwork that faithfully deliver the aesthetics and feelings of the original. Traditionally, reproductions of fine art are made in the form of catalogs, postcards or books by museums, libraries, archives, and so on (hereafter called museums for simplicity). With the widespread adoption of digital archiving in museums, more and more artwork is reproduced to be viewed on a display. For example, artwork collections are made available through museum websites and the Google Art Project for art lovers to view on their own displays. In this thesis, we study the fine art reproduction of paintings in the form of soft copies viewed on displays by answering four questions: (1) What is the impact of the viewing condition and the original on image quality evaluation? (2) Can image quality be improved by avoiding visual editing in current workflows of fine art reproduction? (3) Can lightweight spectral imaging be used for fine art reproduction? (4) What is the performance of spectral reproductions compared with reproductions by current workflows? We started by evaluating the perceived image quality of fine art reproductions created by representative museums in the United States under controlled and uncontrolled environments, with and without the presence of the original artwork. The experimental results suggest that image quality is highly correlated with the color accuracy of the reproduction only when the original is present and the reproduction is evaluated on a characterized display. We then examined the workflows used to create these reproductions and found that current workflows rely heavily on visual editing and retouching (global and local color adjustments on the digital reproduction) to improve the color accuracy of the reproduction. Visual editing and retouching can be both time-consuming and subjective in nature (depending on experts' own experience and understanding of the artwork), lowering the efficiency of artwork digitization considerably. We therefore propose to improve the workflow of fine art reproduction by (1) automating the process of visual editing and retouching in current workflows based on RGB acquisition systems and (2) recovering the spectral reflectance of the painting with off-the-shelf equipment under commonly available lighting conditions. Finally, we compared the perceived image quality of reproductions created by current three-channel (RGB) workflows with those created by spectral imaging and those created by an exemplar-based method.
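
    A common baseline for the kind of lightweight spectral recovery mentioned above is a linear pseudoinverse estimator: a least-squares map from camera RGB values to reflectance spectra, trained on patches with known reflectance. The Python sketch below uses synthetic stand-in data; the thesis's actual estimator, equipment, and training targets are not specified here and may differ.

        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_bands = 24, 31  # e.g., 24 chart patches; 400-700 nm in 10 nm steps

        # Synthetic stand-ins for measured camera RGB responses and known
        # reflectance spectra of training patches (in practice, a color chart
        # measured with a spectrophotometer under the shooting illuminant).
        train_rgb = rng.uniform(0.05, 0.95, size=(n_train, 3))
        train_spectra = rng.uniform(0.0, 1.0, size=(n_train, n_bands))

        # Least-squares linear map M such that spectra ~ rgb @ M.
        M, *_ = np.linalg.lstsq(train_rgb, train_spectra, rcond=None)

        # Estimate the reflectance spectrum of a painting pixel from its RGB value.
        pixel_rgb = np.array([[0.4, 0.3, 0.2]])
        estimated_spectrum = np.clip(pixel_rgb @ M, 0.0, 1.0)
        print(estimated_spectrum.shape)  # (1, 31)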

    Analyzing and Developing Aspects of the Artist Pipeline for Clemson University Art

    Major digital production facilities such as Sony Pictures Imageworks, Pixar Animation Studios, Walt Disney Animation Studios, and Epic Games use a production system called a pipeline. The term “pipeline” refers to the structure and process of data flow between the various phases of production, from story to final edit. This paper examines current production pipeline practices in the Digital Production Arts program at Clemson University and proposes updates and modifications to the workflow. Additionally, this thesis suggests tools intended to improve the pipeline with artist-friendly interfaces and customizable integration between software and remote-production capabilities.