    Creativity and Culture in Copyright Theory

    Creativity is universally agreed to be a good that copyright law should seek to promote, yet copyright scholarship and policymaking have proceeded largely on the basis of assumptions about what it actually is. When asked to discuss the source of their inspiration, individual artists describe a process that is intrinsically ineffable. Rights theorists of all varieties have generally subscribed to this understanding, describing creativity in terms of an individual liberty whose form remains largely unspecified. Economic theorists of copyright work from the opposite end of the creative process, seeking to divine the optimal rules for promoting creativity by measuring its marketable byproducts. But these theorists offer no particular reason to think that marketable byproducts are either an appropriate proxy or an effective stimulus for creativity (as opposed to production), and more typically decline to engage the question. The upshot is that the more we talk about creativity, the more it disappears from view. At the same time, the mainstream of intellectual property scholarship has persistently overlooked a broad array of social science methodologies that provide both descriptive tools for constructing ethnographies of creative processes and theoretical tools for modeling them.

    A decision-theoretic approach to the display of information for time-critical decisions: The Vista project

    We describe a collaborative research and development effort between the Palo Alto Laboratory of the Rockwell Science Center, Rockwell Space Operations Company, and the Propulsion Systems Section of NASA JSC to design computational tools that can manage the complexity of information displayed to human operators in high-stakes, time-critical decision contexts. We review an application from NASA Mission Control and describe how we integrated a probabilistic diagnostic model and a time-dependent utility model with techniques for managing the complexity of computer displays. We then describe the behavior of VPROP, a system constructed to demonstrate promising display-management techniques. Finally, we describe our current research directions on the Vista 2 follow-on project.
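
    The core move described above is to score each candidate display item by its decision-theoretic value to the operator. The sketch below is a minimal, hypothetical Python illustration of that idea; all names, weights, and numbers are invented and are not taken from the Vista system.

        # Hypothetical sketch: rank telemetry items by expected decision value
        # under time pressure. Names and numbers are illustrative only.

        def expected_value_of_display(p_fault, action_value, clutter_cost, urgency):
            # p_fault: probability the diagnostic model assigns to the fault
            # action_value: utility gained if the operator sees the item and acts
            # clutter_cost: penalty for adding one more item to the screen
            # urgency: time-dependent weight in [0, 1], rising as the deadline nears
            return urgency * p_fault * action_value - clutter_cost

        def select_displays(candidates, urgency, budget=5):
            # Show only the few items whose expected value exceeds their cost.
            scored = sorted(((expected_value_of_display(p, v, c, urgency), name)
                             for name, p, v, c in candidates), reverse=True)
            return [name for score, name in scored[:budget] if score > 0]

        candidates = [  # (item, fault probability, action value, clutter cost)
            ("oxidizer_pressure", 0.30, 10.0, 0.5),
            ("fuel_temp",         0.05, 10.0, 0.5),
            ("valve_state",       0.60,  4.0, 0.5),
        ]
        print(select_displays(candidates, urgency=0.8))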

    Knowledge data discovery and data mining in a design environment

    Designers, in the process of satisfying design requirements, generally encounter difficulties in, firstly, understanding the problem and, secondly, finding a solution [Cross 1998]. Often the understanding of the problem and a feasible solution are developed simultaneously, by proposing a solution to gauge the extent to which it satisfies the specific requirements. Support for future design activities has long been recognised to exist in the form of past design cases; however, the varying degrees of similarity and dissimilarity found between previous and current design requirements and solutions have restrained the effectiveness of utilising past design solutions. The knowledge embedded within past designs provides a source of experience with the potential to be utilised in future developments, provided that the ability to structure and manipulate that knowledge can be made a reality. The ability to manipulate past design knowledge allows the varying viewpoints experienced by a designer during a design process to be reflected and supported. Data Mining systems are gaining acceptance in several domains but to date remain largely unrecognised in terms of their potential to support design activities. The focus of this paper is to introduce the functionality possessed within the realm of Data Mining tools, and to evaluate the level of support that may be achieved in manipulating and utilising experiential knowledge to satisfy designers' varying perspectives throughout a product's development.
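
    As a concrete illustration of reusing past design cases, the sketch below retrieves the archived cases whose requirements most resemble a new specification, a typical retrieval step in such a system. It is a minimal Python sketch under invented attribute names and cases, not code from the paper.

        # Minimal sketch: retrieve past design cases similar to new requirements.
        # Attribute names and cases are invented for illustration.
        import math

        def similarity(reqs_a, reqs_b):
            # Cosine similarity over the shared numeric requirement attributes.
            shared = set(reqs_a) & set(reqs_b)
            dot = sum(reqs_a[k] * reqs_b[k] for k in shared)
            na = math.sqrt(sum(v * v for v in reqs_a.values()))
            nb = math.sqrt(sum(v * v for v in reqs_b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def retrieve(past_cases, new_reqs, k=3):
            # Rank archived cases by similarity to the new specification.
            ranked = sorted(past_cases, reverse=True,
                            key=lambda case: similarity(case["requirements"], new_reqs))
            return ranked[:k]

        past_cases = [
            {"name": "bracket_v1", "requirements": {"load_kN": 5, "mass_kg": 2, "cost": 40}},
            {"name": "bracket_v2", "requirements": {"load_kN": 8, "mass_kg": 3, "cost": 70}},
        ]
        print([c["name"] for c in retrieve(past_cases, {"load_kN": 6, "mass_kg": 2, "cost": 50})])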

    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java's reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class's method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
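
    To make the idea of introspective, bounded protocol exploration concrete, here is a rough Python analogue (JWalk itself is a Java tool operating via reflection on compiled classes; this is not its code): discover a class's public methods by introspection, then run every method sequence up to a bounded depth, recording each result as a candidate oracle for the tester to confirm.

        # Python analogue of bounded exhaustive protocol exploration.
        import inspect
        from itertools import product

        class Stack:  # example class under test
            def __init__(self): self.items = []
            def push(self): self.items.append(1)
            def pop(self): return self.items.pop() if self.items else None
            def size(self): return len(self.items)

        def public_methods(cls):
            # Introspection: discover the public method protocol.
            return [name for name, fn in inspect.getmembers(cls, inspect.isfunction)
                    if not name.startswith("_")]

        def explore(cls, depth=2):
            # Bounded exhaustive exploration: run every method sequence up to
            # the given depth on a fresh instance, saving results as oracles.
            oracles = {}
            for seq in product(public_methods(cls), repeat=depth):
                obj = cls()
                oracles[seq] = [getattr(obj, m)() for m in seq]
            return oracles  # the tester confirms these; replaying them detects regressions

        for seq, results in explore(Stack).items():
            print(seq, "->", results)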

    Retrenching the Purse: Finite Exception Logs, and Validating the Small

    The Mondex Electronic Purse is an outstanding example of industrial-scale formal refinement, and was the first verification to achieve ITSEC level E6 certification. A formal abstract model and a formal concrete model were developed, and a formal refinement was hand-proved between them. Nevertheless, certain requirements issues were set beyond the scope of the formal development, or handled in an unnatural manner. The retrenchment Tower Pattern is used to address one such issue in detail: the finiteness of the purse log (which records unsuccessful transactions). A retrenchment is constructed from the lowest-level model of the purse system to a model in which logs are finite, and is then lifted to create two refinement developments of the purse, working at different levels of detail and connected via retrenchments, forming the tower. The tower development is appropriately validated, vindicating the design used.
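
    The requirements gap the tower addresses can be stated very simply, as in the sketch below (an illustrative Python miniature, not the formal models from the development): an idealised purse logs every failed transaction without limit, while a real purse's log is finite, so its behaviour must eventually diverge, and the retrenchment documents exactly that concession.

        # Illustrative sketch of the abstract/concrete gap, not the Mondex models.

        class AbstractPurse:
            def __init__(self): self.log = []          # idealised: unbounded log
            def record_failure(self, txn): self.log.append(txn)

        class ConcretePurse:
            def __init__(self, capacity=4): self.log, self.capacity = [], capacity
            def record_failure(self, txn):
                if len(self.log) >= self.capacity:
                    # Concession: here behaviour deviates from the abstract model;
                    # a retrenchment makes this deviation explicit and provable.
                    raise RuntimeError("log full: transaction refused")
                self.log.append(txn)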

    A Massive Data Parallel Computational Framework for Petascale/Exascale Hybrid Computer Systems

    Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even using tools like CUDA and OpenCL, it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge (a library-based framework for heterogeneous multi-core systems), Zippy (a framework for parallel execution of codes on multiple GPUs), BSGP (a new programming language for general-purpose computation on the GPU) and CUDA-lite (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs and for automatic translation of OpenMP-parallelized codes to CUDA. In this paper we present an alternative approach: a new computational framework for the development of massively data parallel scientific applications suitable for use on such petascale/exascale hybrid systems, built upon the highly scalable Cactus framework. As the first non-trivial demonstration of its usefulness, we successfully developed a new 3D CFD code that achieves improved performance.
    Comment: Parallel Computing 2011 (ParCo2011), 30 August – 2 September 2011, Ghent, Belgium
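
    For a sense of the kind of computation such a framework targets, the sketch below shows a data-parallel 3D stencil update of the sort a CFD kernel performs on each device. It is a plain numpy illustration under invented grid sizes, not the framework's API; the framework itself would partition the grid across GPUs and exchange halo zones between them.

        # Illustrative numpy sketch of a data-parallel 3D stencil (Jacobi) update.
        import numpy as np

        def jacobi_step(u):
            # Average the six face neighbours of every interior cell in one
            # vectorised, data-parallel operation.
            v = u.copy()
            v[1:-1, 1:-1, 1:-1] = (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
                                   u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
                                   u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 6.0
            return v

        u = np.zeros((32, 32, 32))
        u[0, :, :] = 1.0                      # fixed boundary condition
        for _ in range(100):                  # relax toward the steady state
            u = jacobi_step(u)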

    Advanced Cyberinfrastructure for Science, Engineering, and Public Policy

    Progress in many domains increasingly benefits from our ability to view the systems through a computational lens, i.e., using computational abstractions of the domains, and from our ability to acquire, share, integrate, and analyze disparate types of data. These advances would not be possible without the advanced data and computational cyberinfrastructure and tools for data capture, integration, analysis, modeling, and simulation. However, despite, and perhaps because of, advances in "big data" technologies for data acquisition, management and analytics, the other largely manual and labor-intensive aspects of the decision-making process, e.g., formulating questions, designing studies, organizing, curating, connecting, correlating and integrating cross-domain data, drawing inferences and interpreting results, have become the rate-limiting steps to progress. Advancing the capability and capacity for evidence-based improvements in science, engineering, and public policy requires support for (1) computational abstractions of the relevant domains, coupled with computational methods and tools for their analysis, synthesis, simulation, visualization, sharing, and integration; (2) cognitive tools that leverage and extend the reach of human intellect, and partner with humans on all aspects of the activity; (3) nimble and trustworthy data cyberinfrastructures that connect and manage a variety of instruments, multiple interrelated data types and associated metadata, data representations, processes, protocols and workflows, and that enforce applicable security and data access and use policies; and (4) organizational and social structures and processes for collaborative and coordinated activity across disciplinary and institutional boundaries.
    Comment: A Computing Community Consortium (CCC) white paper, 9 pages. arXiv admin note: text overlap with arXiv:1604.0200

    mRUBiS: An Exemplar for Model-Based Architectural Self-Healing and Self-Optimization

    Self-adaptive software systems are often structured into an adaptation engine that manages an adaptable software by operating on a runtime model that represents the architecture of the software (model-based architectural self-adaptation). Despite the popularity of such approaches, existing exemplars provide application programming interfaces but no runtime model for developing adaptation engines. Consequently, no exemplar exists that supports developing, evaluating, and comparing model-based self-adaptation off the shelf. Therefore, we present mRUBiS, an extensible exemplar for model-based architectural self-healing and self-optimization. mRUBiS simulates the adaptable software and accordingly provides and maintains an architectural runtime model of the software, which can be used directly by adaptation engines to realize and perform self-adaptation. In particular, mRUBiS supports injecting issues into the model, which should be handled by self-adaptation, and validating the model to assess the self-adaptation. Finally, mRUBiS allows developers to explore variants of adaptation engines (e.g., event-driven self-adaptation) and to evaluate the effectiveness, efficiency, and scalability of the engines.
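
    The workflow the exemplar enables (inject an issue into the runtime model, let an adaptation engine repair it on the model, then validate the result) can be sketched roughly as follows. This is a hypothetical Python miniature with invented names, not mRUBiS's actual Java interfaces.

        # Hypothetical miniature of the inject/adapt/validate cycle;
        # mRUBiS itself provides its own Java interfaces and runtime model.

        class Component:
            def __init__(self, name): self.name, self.state = name, "OK"

        class RuntimeModel:
            def __init__(self, names): self.components = [Component(n) for n in names]
            def inject_issue(self, name):  # simulate a failure in the model
                next(c for c in self.components if c.name == name).state = "FAILED"
            def validate(self):            # assess whether self-adaptation worked
                return all(c.state == "OK" for c in self.components)

        def self_healing_engine(model):
            # A trivial adaptation rule, executed directly on the runtime model:
            # restart (reset) any component found in a failed state.
            for c in model.components:
                if c.state == "FAILED":
                    c.state = "OK"

        model = RuntimeModel(["auth", "catalog", "payment"])
        model.inject_issue("catalog")
        self_healing_engine(model)
        print("adaptation successful:", model.validate())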