243,003 research outputs found

    Measuring the Pro-Activity of Software Agents

    Get PDF
    Despite having well-defined characteristics, software agents do not yet have a developed set of measures defining their quality. Attempts at evaluating software agent quality have focused on particular aspects, such as the development process, while those treating the agent as a software product have largely adopted measures associated with other software paradigms, such as procedural and object-oriented concepts. Here we propose a set of measures for evaluating software agent pro-activity: the software agent's goal-driven behavioral ability to take the initiative and satisfy its goal.
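
    The abstract does not spell out the proposed measures. As a purely hypothetical illustration of the kind of measure it describes, a pro-activity ratio might count the fraction of an agent's actions taken on its own initiative rather than as reactions to external events; the Python sketch below uses invented names (Action, proactivity_ratio) and is not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Action:
            # True if the agent initiated this action to pursue a goal,
            # False if it merely reacted to an external event.
            proactive: bool

        def proactivity_ratio(actions):
            """Fraction of logged actions taken on the agent's own initiative."""
            if not actions:
                return 0.0
            return sum(a.proactive for a in actions) / len(actions)

        # Example: 3 of 4 logged actions were goal-driven initiatives.
        log = [Action(True), Action(False), Action(True), Action(True)]
        print(proactivity_ratio(log))  # 0.75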

    Simulation modelling and visualisation: toolkits for building artificial worlds

    Get PDF
    Simulation users at all levels make heavy use of compute resources to drive computational simulations across widely varying application areas of research, using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. There are many different software libraries and methods available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present a break-down of the main simulation paradigms, and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.
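
    The survey does not single out one toolkit, so as a generic, assumed illustration of the coupling pattern it discusses (a simulation loop feeding a visualisation layer), here is a minimal Python sketch using NumPy and Matplotlib: a 2-D random-walk simulation whose state a scatter plot redraws each step.

        import numpy as np
        import matplotlib.pyplot as plt

        # Minimal random-walk simulation with a Matplotlib visualisation layer.
        rng = np.random.default_rng(0)
        pos = np.zeros((50, 2))            # 50 walkers in 2-D

        plt.ion()                          # interactive mode: redraw each step
        fig, ax = plt.subplots()
        scat = ax.scatter(pos[:, 0], pos[:, 1])
        ax.set_xlim(-20, 20)
        ax.set_ylim(-20, 20)

        for step in range(100):
            pos += rng.normal(scale=0.5, size=pos.shape)  # simulation update
            scat.set_offsets(pos)                         # visualisation update
            plt.pause(0.01)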

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Full text link
    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigms. We propose a basis, common terminology and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches of these paradigms, shed light upon the reasons for their current "architectures", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
    Comment: 8 pages, 2 figures
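
    The paper's semi-quantitative benchmark Ogre is K-means clustering; its actual implementations on HPC and Hadoop platforms are not reproduced here. For orientation only, a minimal Lloyd's-algorithm sketch in Python/NumPy, with all parameter choices assumed:

        import numpy as np

        def kmeans(points, k, iters=20, seed=0):
            """Plain Lloyd's-algorithm K-means on an (n, d) array."""
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                # Assign each point to its nearest center.
                dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
                labels = dists.argmin(axis=1)
                # Move each center to the mean of its assigned points.
                for j in range(k):
                    if (labels == j).any():
                        centers[j] = points[labels == j].mean(axis=0)
            return centers, labels

        points = np.random.default_rng(1).normal(size=(1000, 2))
        centers, labels = kmeans(points, k=3)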

    Paradigms of Music Software Development

    Get PDF
    On the way to a more comprehensive and integrative historiography of music software, this paper proposes a survey of the main paradigms of music software development from the 1950s to the present. Concentrating on applications for music composition, production and performance, the analysis focusses on the concept and design of the human-computer-interaction as well as the implicit user

    Potential Errors and Test Assessment in Software Product Line Engineering

    Full text link
    Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to an increasing number of possible products. Different approaches exist for testing SPLs, but there is little research on assessing the quality of these tests in terms of their error-detection capability. Such test assessment is based on injecting errors into a correct version of the system under test. However, to our knowledge, potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. To assess test quality, we adapt mutation testing techniques to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and how the quality of SPL tests can be improved.
    Comment: In Proceedings MBT 2015, arXiv:1504.0192
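
    The article's mutation operators target SPL specification paradigms and are not reproduced here. As a hypothetical miniature of the general idea, the Python sketch below hand-writes one product with a variability point, applies a single mutation operator (negating a feature condition), and checks that a test suite "kills" the resulting mutant; every name in it is invented for illustration.

        def price(base, features):
            total = base
            if "discount" in features:      # variability point
                total *= 0.9
            if "tax" in features:
                total *= 1.2
            return total

        def price_mutant(base, features):
            total = base
            if "discount" not in features:  # mutation: negated feature condition
                total *= 0.9
            if "tax" in features:
                total *= 1.2
            return total

        def test_suite(impl):
            """Return True if all tests pass for the given implementation."""
            return (abs(impl(100, {"discount"}) - 90.0) < 1e-9 and
                    abs(impl(100, set()) - 100.0) < 1e-9)

        assert test_suite(price)              # correct product passes
        assert not test_suite(price_mutant)   # a good suite kills the mutant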

    Perspectives about paradigms in software engineering

    Get PDF
    There is a broad use of the term “paradigm” in Software Engineering. Concepts such as the structured paradigm, cascade paradigm or agent-oriented paradigm are very frequent in software engineering research proposals. In this essay we distinguish between the functional and the scientific paradigm, and we show that the common use of “paradigm” in Software Engineering refers to the functional or engineering paradigm rather than the scientific paradigm. We distinguish among four possible perspectives and, in this context, we sustain that the scientific perspective is intrinsic and hence very difficult to properly identify and describe. We argue that a discussion about the scientific paradigm in Software Engineering could help us to evaluate and improve the research practice in the discipline.

    Information technologies for astrophysics circa 2001

    Get PDF
    It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is also easy to see the limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.