
    Cut + Paste | An Aesthetic Exploration

    Photography can provide references for collage compositions. In my work, each step of this transformation, moving from one medium to another, resulted in new discoveries and culminated in dynamic time-lapse videos. This thesis follows my studio habits and derives implications for my classroom practice.

    Santa Clara Magazine, Volume 58 Number 1, Spring 2017

    24 - BIG WIN FOR A TINY HOUSE: Turning heads and changing the housing game. By Matt Morgan.
    28 - $100 MILLION GIFT TO BUILD: John A. ’60 and Susan Sobrato make the largest gift in SCU history. Now see the Sobrato Campus for Discovery and Innovation that will take shape and redefine the University. Illustration by Tavis Coburn.
    36 - CUT & PASTE CONSERVATION: We can alter wild species to save them. So should we? By Emma Marris. Illustrations by Jason Holley.
    44 - INFO OFFICER IN CHIEF: From his office overlooking the White House, Tony Scott J.D. ’92 set out to bring the federal government into the digital age. By Steven Boyd Saum.
    48 - FOR THE RECORD: Deepwater Horizon. Volkswagen. The Exxon Valdez. Blockbuster cases and the career of John C. Cruden J.D. ’74, civil servant and defender of the environment extraordinaire. By Justin Gerdes. Photography by Robert Clark.
    54 - WHERE THERE’S SMOKE … there might just be mirrors. On “fake news,” the Internet, and everyday ethics. By Irina Raicu. Illustrations by Lincoln Agnew.

    #cut/paste+bleed: Entangling Feminist Affect, Action and Production On and Offline

    I consider my media praxis project to be labs, encounters, theory-making and scholarly output, where doing and thinking in community (often the classroom and its linked spaces) with the sites or technologies under consideration is the “scholarly” product. That is to say, the doing and the process are the product, and what remains can also be shared and/or evaluated as needed. This sharing of process is what I model now. I describe my most recent project, Ev-Ent-Anglement, which engages again critically with social media networks from inside them, share some of my lessons learned about production and action-based New Media/DH research, and conclude with why I think these methods (as much as my findings) matter.

    Scrape, Cut, Paste and Learn: Automated Dataset Generation Applied to Parcel Logistics

    State-of-the-art approaches in computer vision rely heavily on sufficiently large training datasets. For real-world applications, obtaining such a dataset is usually a tedious task. In this paper, we present a fully automated pipeline to generate a synthetic dataset for instance segmentation in four steps. In contrast to existing work, our pipeline covers every step from data acquisition to the final dataset. We first scrape images for the objects of interest from popular image search engines and, since we rely only on text-based queries, the resulting data comprises a wide variety of images. Hence, image selection is necessary as a second step. This approach of image scraping and selection relaxes the need for a real-world domain-specific dataset that must be either publicly available or created for this purpose. We employ an object-agnostic background removal model and compare three different methods for image selection: object-agnostic pre-processing, manual image selection and CNN-based image selection. In the third step, we generate random arrangements of the objects of interest and distractors on arbitrary backgrounds. Finally, the composition of the images is done by pasting the objects using four different blending methods. We present a case study of our dataset generation approach by considering parcel segmentation. For the evaluation we created a dataset of parcel photos that were annotated automatically. We find that (1) our dataset generation pipeline allows a successful transfer to real test images (Mask AP 86.2), (2) a very accurate image selection process, in contrast to human intuition, is not crucial, and a broader category definition can help to bridge the domain gap, and (3) the usage of blending methods is beneficial compared to simple copy-and-paste. We have made our full code for scraping, image composition and training publicly available at https://a-nau.github.io/parcel2d. Comment: Accepted at ICMLA 2022.
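
    The composition step described above (pasting cut-out objects onto arbitrary backgrounds with different blending methods) can be illustrated with a short Python sketch. This is not the authors' released code: the file names, the RGBA cut-out format, and the Gaussian-blurred alpha mask standing in for one of the smoother blending modes are assumptions made purely for illustration.

        # Minimal sketch of the paste-and-blend step (step four of the pipeline).
        # Not the authors' released code; paths, sizes and the blur-based
        # blending stand-in are assumptions.
        import cv2
        import numpy as np

        def paste_object(background, obj_rgba, x, y, blur_ksize=0):
            """Paste an RGBA cut-out onto a BGR background at (x, y).

            blur_ksize=0 gives naive copy-paste; an odd kernel size blurs the
            alpha mask, which approximates a smoother blending mode.
            """
            h, w = obj_rgba.shape[:2]
            roi = background[y:y + h, x:x + w]

            alpha = obj_rgba[:, :, 3].astype(np.float32) / 255.0
            if blur_ksize > 0:
                alpha = cv2.GaussianBlur(alpha, (blur_ksize, blur_ksize), 0)
            alpha = alpha[:, :, None]  # broadcast over the three color channels

            blended = alpha * obj_rgba[:, :, :3] + (1.0 - alpha) * roi
            background[y:y + h, x:x + w] = blended.astype(np.uint8)
            return background

        background = cv2.imread("background.jpg")                       # assumed path
        parcel = cv2.imread("parcel_cutout.png", cv2.IMREAD_UNCHANGED)  # RGBA cut-out
        composite = paste_object(background, parcel, x=50, y=80, blur_ksize=21)
        cv2.imwrite("composite.jpg", composite)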

    SVIM: Structural Variant Identification using Mapped Long Reads

    Motivation: Structural variants are defined as genomic variants larger than 50 bp. They have been shown to affect more bases in any given genome than SNPs or small indels. Additionally, they have a great impact on human phenotype and diversity and have been linked to numerous diseases. Due to their size and association with repeats, they are difficult to detect by shotgun sequencing, especially when based on short reads. Long-read, single-molecule sequencing technologies like those offered by Pacific Biosciences or Oxford Nanopore Technologies produce reads with a length of several thousand base pairs. Despite the higher error rate and sequencing cost, long-read sequencing offers many advantages for the detection of structural variants, yet available software tools still do not fully exploit the possibilities. Results: We present SVIM, a tool for the sensitive detection and precise characterization of structural variants from long-read data. SVIM consists of three components for the collection, clustering and combination of structural variant signatures from read alignments. It discriminates five different variant classes, including similar types such as tandem and interspersed duplications and novel element insertions. SVIM is unique in its capability of extracting both the genomic origin and destination of duplications. It compares favorably with existing tools in evaluations on simulated data and real datasets from PacBio and Nanopore sequencing machines. Availability and implementation: The source code and executables of SVIM are available on GitHub: github.com/eldariont/svim. SVIM has been implemented in Python 3 and published on Bioconda and the Python Package Index. Supplementary information: Supplementary data are available at Bioinformatics online.
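
    For intuition, SVIM's middle component (clustering of signatures) can be approximated by grouping signatures of the same variant type that lie close together on the reference. The toy sketch below is not SVIM's actual clustering, which uses a more elaborate distance measure; the signature tuple format and the 1 kb threshold are assumptions made only to show the idea.

        # Toy illustration of grouping SV signatures by type and proximity.
        # NOT SVIM's actual clustering; the tuple format and the 1 kb
        # threshold are assumptions for illustration.
        from collections import namedtuple

        Signature = namedtuple("Signature", ["contig", "start", "end", "svtype", "read"])

        def cluster_signatures(signatures, max_distance=1000):
            """Group same-type signatures whose start positions lie close together."""
            clusters = []
            for sig in sorted(signatures, key=lambda s: (s.svtype, s.contig, s.start)):
                last = clusters[-1][-1] if clusters else None
                if (last is not None
                        and last.svtype == sig.svtype
                        and last.contig == sig.contig
                        and sig.start - last.start <= max_distance):
                    clusters[-1].append(sig)
                else:
                    clusters.append([sig])
            return clusters

        sigs = [
            Signature("chr1", 10050, 10900, "DEL", "read_a"),
            Signature("chr1", 10120, 10870, "DEL", "read_b"),
            Signature("chr1", 55000, 55400, "INS", "read_c"),
        ]
        for cluster in cluster_signatures(sigs):
            print(cluster[0].svtype, len(cluster), "supporting read(s)")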

    Keeping an eye on the UI design of Translation Memory: How do translators use the 'concordance' feature?

    Motivation – To investigate the usefulness of sub-segment matching (the Concordance feature) in a Translation Memory interface and translators' attitudes to new UI developments around such matching. Research approach – An explorative work-in-progress study using eye tracking of translation conducted by professional translators, followed by an opinion survey. Findings/Design – The results suggest that the Concordance window is useful for checking terminology and context, but there is some evidence that the translators do not wish to have this feature turned on constantly. Research limitations/Implications – This is an initial work-in-progress study with a limited number of participants. Quantitative and qualitative results are presented. Originality/Value – This is the first empirical research of its kind. Translators are rarely, if ever, consulted about the UI of the tools they have to use. Takeaway message – The potential productivity and quality gain from sub-segment matches in Translation Memory is not fully realised and may be enhanced with improved UI design derived from focused research on user experience. Keywords – Translation technology, Translation Memory (TM), user interface, sub-segment matching, concordance, eye tracking, user experience.

    Search for Scaling Dimensions for Random Surfaces with c=1

    We study numerically the fractal structure of the intrinsic geometry of random surfaces coupled to matter fields with $c=1$. Using baby universe surgery it was possible to simulate randomly triangulated surfaces made of 260,000 triangles. Our results are consistent with the theoretical prediction $d_H = 2+\sqrt{2}$ for the intrinsic Hausdorff dimension. Comment: 10 pages, (csh will uudecode and uncompress ps-file), NBI-HE-94-3

    On the benchmarking of ResNet forgery image model using different datasets

    This paper presents the benchmarking and improvement of the ResNet image forgery model using three different datasets (CASIA, Columbia, and LSBU). The model is based on classification, where forgery images have been edited using a cut-paste modification technique. The images are categorized to check whether the algorithm can successfully distinguish between the original and the forged image. All images have been pre-processed with Gray-Edge detectors to obtain better classification results. Experimental results have shown that the Gray-Edge technique has improved the accuracy across all image datasets.
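
    For context, the classification side of such a benchmark can be sketched as a standard binary fine-tune of a ResNet. The snippet below uses torchvision's ResNet-50 as a stand-in (the abstract does not name the exact variant), and the gray_edge_normalize() hook is only a placeholder for the Gray-Edge pre-processing described above; none of this is the paper's actual code.

        # Hedged sketch: fine-tuning a ResNet to label images as original vs.
        # cut-paste forgery. The ResNet-50 variant, image size and the
        # gray_edge_normalize() hook are assumptions, not the paper's setup.
        import torch
        import torch.nn as nn
        from torchvision import models, transforms

        def gray_edge_normalize(img_tensor):
            # Placeholder for the Gray-Edge pre-processing step; a real
            # implementation would estimate the illuminant from image
            # derivatives and rescale the color channels accordingly.
            return img_tensor

        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Lambda(gray_edge_normalize),
        ])

        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = nn.Linear(model.fc.in_features, 2)  # original vs. forged

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        def training_step(images, labels):
            """One optimization step over a batch of pre-processed image tensors."""
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()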