
    Visualizing networked writing activity

    In conjunction with the Honors Fellow program and two faculty advisors, one from the English department and one from Computer Science, another student and I wrote software to visualize how participants collaborate on networked writing projects. Using Google Docs to let students interact with a document in real time, the software captures data from Google's cloud service and displays it in a pair of visualizations. We used agile software development methods to implement our advisors' ideas in an appealing way. This document gives detailed instructions on where the latest iteration of the software is located. It also details the process of making the system operational on a new machine, explaining how the software works and where it should be placed in the file system, and describes how one can use the system to visualize writing collaboration. Finally, many failed iterations of the software led to meaningful reflections on software development practices. The document serves as a technical report for the software, but it also elaborates on the hardships of development and provides insight into how this software may evolve toward richer experiences. Also included is an Author's Statement, which reveals many of the learning experiences that arose throughout the development of this project. Honors College Thesis (B.?.

    Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source-to-source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into OpenCL-accelerated programs with parallelized kernels. The main contributions of our work are: (1) whole-source refactoring to allow any subroutine in the code to be offloaded to an accelerator; (2) minimization of the data transfer between the host and the accelerator by eliminating redundant transfers; and (3) pragmatic auto-parallelization of the code to be offloaded to the accelerator by identification of parallelizable maps and reductions. We have validated the code transformation performance of the compiler on the NIST FORTRAN 78 test suite and several real-world codes: the Large Eddy Simulator for Urban Flows, a high-resolution turbulent flow model; the shallow water component of the ocean model Gmodel; the Linear Baroclinic Model, an atmospheric climate model; and Flexpart-WRF, a particle dispersion simulator. The automatic parallelization component has been tested on a 2-D Shallow Water model (2DSW) and on the Large Eddy Simulator for Urban Flows (UFLES) and produces a complete OpenCL-enabled code base. The fully OpenCL-accelerated versions of the 2DSW and the UFLES are respectively 9x and 20x faster on GPU than the original code on CPU; in both cases this matches the performance of manually ported code. Comment: 12 pages, 5 figures, submitted to "Computers and Fluids" as full paper from ParCFD conference entry
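    The map/reduction distinction that drives the auto-parallelization can be illustrated outside FORTRAN. The Python sketch below is an assumption for illustration only, not the compiler's analysis: it shows the two loop shapes that the approach identifies as parallelizable.

```python
from functools import reduce

# A "map" loop: each iteration writes its own output element and reads no
# other iteration's result, so iterations can run in parallel (e.g. on a GPU).
def scale_field(u, dt):
    return [x * dt for x in u]

# A "reduction" loop: iterations combine into a single value with an
# associative operator, so partial results can be computed in parallel
# on sub-ranges and then merged.
def total_energy(u):
    return reduce(lambda acc, x: acc + x * x, u, 0.0)
```

    In the FORTRAN setting, the corresponding DO loops are recognized statically from their array access patterns, which is what allows safe offloading to OpenCL kernels.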

    Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing

    In NLP, incremental processors produce output in instalments, based on incoming prefixes of the linguistic input. Some tokens trigger revisions, causing edits to the output hypothesis, but little is known about why models revise when they revise. A policy that detects the time steps where revisions should happen can improve efficiency. Still, retrieving a suitable signal to train a revision policy is an open problem, since it is not naturally available in datasets. In this work, we investigate the appropriateness of regressions and skips in human reading eye-tracking data as signals to inform revision policies in incremental sequence labelling. Using generalised mixed-effects models, we find that the probability of regressions and skips by humans can potentially serve as useful predictors for revisions in BiLSTMs and Transformer models, with consistent results for various languages. Comment: Accepted to CoNLL 202
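    The revision phenomenon being modelled can be made concrete with a toy incremental labeller. The example below is a hedged illustration: the disambiguation rule and the labels are invented, not taken from the paper. It shows how a later token can force edits to already-emitted output, which is exactly the event a revision policy would try to predict.

```python
def label_prefix(tokens):
    # Invented rule: "bank" is NATURE once "river" has appeared, else FINANCE.
    topic = "NATURE" if "river" in tokens else "FINANCE"
    return [topic if t == "bank" else "O" for t in tokens]

def incremental_run(stream):
    """Process the stream prefix by prefix, counting how often the new
    hypothesis edits labels that were already emitted (a revision)."""
    hypothesis, revisions = [], 0
    for i in range(1, len(stream) + 1):
        new = label_prefix(stream[:i])
        if new[:i - 1] != hypothesis:  # an earlier label changed
            revisions += 1
        hypothesis = new
    return hypothesis, revisions
```

    Running this on "the bank by the river" produces one revision, at the step where "river" arrives and forces the earlier FINANCE label to flip.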

    Modeling, scaling and sequencing writing phases of Swiss television journalists

    Writing phases – defined as identifiable temporal procedural units with typical dominant writing actions such as formulating or source reading – have long been conceived as fundamental to the success of writing processes. However, a methodology for an objectively verifiable analysis of the nature and interplay of writing phases has not yet been developed, and most current scientific concepts of writing phases are based on introspection, single case studies or experimental research designs. This thesis drew on one of the most extensive data collections of writing processes in natural settings: over 120 multimodal writing processes of Swiss television journalists were recorded, annotated and merged into one dataset. Since the data was collected in an ethnographic research framework, writing activities such as insertions or deletions could be related to background conditions such as the writing environment, the writing task and the experience of the writers. In a first methodological step, the writing processes were coded qualitatively, and writing phases on different scales and timeframes were identified. Based on the time-series format of the data, statistical models of scalable writing phases were developed in a second step, which enabled automated detection of writing phases in the corpus in a third step. In a fourth step, the effect of sequences of writing phases on writing processes and products was investigated. As a result, phases and their sequence in natural writing processes were described and explained, which contributes to both the theoretical and practical endeavors of applied linguistics. From a theoretical perspective, the concept of the writing phase and its relation to writing practices were clarified and refined on a strong empirical basis. From a practical perspective, the thesis provides tools for the process-oriented, domain-specific teaching of writing.
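    The idea of detecting phases from coded writing actions can be sketched with a simplistic windowed heuristic; this is an assumption for illustration, not the statistical time-series models the thesis develops. Each fixed window of logged actions is labelled by its dominant action, and adjacent windows with the same label are merged into one phase.

```python
from collections import Counter

def dominant_phases(actions, window=3):
    """Label each fixed window by its dominant writing action, then merge
    adjacent windows sharing a label into a single phase."""
    labels = []
    for i in range(0, len(actions), window):
        chunk = actions[i:i + window]
        labels.append(Counter(chunk).most_common(1)[0][0])
    phases = [labels[0]]
    for lab in labels[1:]:
        if lab != phases[-1]:
            phases.append(lab)
    return phases
```

    A real model would also handle overlapping timeframes and phase scales, which is where the scalable, statistical approach of the thesis goes beyond a heuristic like this.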

    Rethinking Generation-Skipping Transfers


    Real-time reconfiguration of programmable logic controller communication paths

    This thesis explores topics related to the reconfiguration of Programmable Logic Controllers' (PLCs') communication paths as it relates to network security and reliability. These paths are normally fixed, which creates a single point of failure that can easily be disrupted by network failure or a network-based attack. With the ability to reconfigure communication paths autonomously, these disruptions can be avoided or bypassed. Building on these principles, a series of PLC programs was developed to facilitate several things: scanning of the three network types most common in PLC-to-PLC communications; a comprehensive network scan routine for locating multiple communication paths to available network-enabled modules and devices; add-on functions for verifying and using these found communication paths; and MS Excel macros for documenting the found modules and devices along with their communication paths from the host processor --Abstract, page iii
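    The path-failover idea can be sketched in a few lines of Python. The PLC routines themselves are ladder-logic and add-on instructions, so this is only an assumed analogy, with a hypothetical `is_reachable` probe standing in for the thesis's network scan routines.

```python
def choose_path(paths, is_reachable):
    """Return the first communication path that responds to a probe,
    so a failed or attacked path is bypassed automatically."""
    for path in paths:
        if is_reachable(path):
            return path
    raise ConnectionError("no communication path to the device is available")
```

    For example, with candidate paths ordered by preference (say, EtherNet/IP first, then ControlNet, then DeviceNet), a host processor would fall through to the first path whose probe succeeds.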

    Who Killed the Rule Against Perpetuities?

    During the last two decades more than half the states have either abolished or substantially weakened the traditional rule against perpetuities. The increased demand for perpetual trusts is widely attributed to the ability of such trusts to avoid federal wealth transfer taxes. Furthermore, recent empirical studies confirm a correlation between repeal of the rule against perpetuities (coupled with favorable state income tax treatment) and increased personal trust assets and average account size. This symposium article discusses the asymmetric benefits and drawbacks of perpetual trusts and concludes that the decline of the rule against perpetuities cannot be explained solely in terms of rational tax planning.

    SMIX: Self-managing indexes for dynamic workloads

    As databases accumulate growing amounts of data at an increasing rate, adaptive indexing becomes more and more important. At the same time, applications and their use become more agile and flexible, resulting in less steady and less predictable workload characteristics. Being inert and coarse-grained, state-of-the-art index tuning techniques become less useful in such environments. In particular, the full-column indexing paradigm results in many indexed but never-queried records and prohibitively high storage and maintenance costs. In this paper, we present Self-Managing Indexes, a novel, adaptive, fine-grained, autonomous indexing infrastructure. At its core, our approach builds on a novel access path that automatically collects useful index information, discards useless index information, and competes with its kind for resources to host its index information. Compared to existing technologies for adaptive indexing, we are able to dynamically grow and shrink our indexes, instead of incrementally enhancing the index granularity.
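    A toy version of query-driven, fine-grained indexing conveys the growing-and-shrinking idea. The class name, capacity limit, and hit-count eviction policy below are assumptions for illustration, not SMIX's actual design: keys are indexed only when queried, and cold entries are discarded when space runs out.

```python
class AdaptiveIndex:
    """Index entries are admitted lazily on lookup and evicted by coldness,
    so only queried keys consume index storage."""

    def __init__(self, table, capacity=2):
        self.table = table        # list of (key, row) pairs, the base data
        self.capacity = capacity  # maximum number of indexed keys
        self.index = {}           # key -> row, built lazily from queries
        self.hits = {}            # key -> access count, drives eviction

    def lookup(self, key):
        if key in self.index:          # fast path: already indexed
            self.hits[key] += 1
            return self.index[key]
        # slow path: scan the base table, then admit the key to the index
        row = next((r for k, r in self.table if k == key), None)
        if row is not None:
            self._admit(key, row)
        return row

    def _admit(self, key, row):
        if len(self.index) >= self.capacity:
            cold = min(self.hits, key=self.hits.get)  # evict coldest key
            del self.index[cold], self.hits[cold]
        self.index[key] = row
        self.hits[key] = 1
```

    The design choice mirrored here is that index maintenance cost is paid only for data the workload actually touches, which is what lets such an index shrink when the workload shifts.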

    Revamping the Right to Be Informed: Protecting Consumers Under New Jersey's Truth-In-Consumer Contract, Warranty, and Notice Act*

    Prior to the 1960s, “courts were notorious for their insensitivity to consumer interests, while legislatures did little in the way of offering the consumer comprehensive protection against business fraud.”1 However, the tide of legislation began to turn in the 1960s as a movement for greater consumer protections finally reached the ears of an individual with a powerful voice: President John F. Kennedy.