
    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. These new and emerging challenges in scientific imaging place ImageJ at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained, so the new functionality integrates seamlessly with the classic ImageJ interface, allowing users and developers to migrate to the new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as to accommodate future needs.
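    The extensible plugin framework and the decoupled, N-dimensional data model are the core architectural changes. As a rough illustration, the sketch below shows what a minimal ImageJ2 command looks like when written against the SciJava plugin mechanism; the annotations and the Dataset type belong to the ImageJ2/SciJava APIs, while the menu path, class name, and reporting logic are invented here for illustration.

        // Minimal sketch of an ImageJ2 command plugin (SciJava plugin framework).
        // The framework discovers the class via the @Plugin annotation and
        // injects the @Parameter fields before calling run().
        import net.imagej.Dataset;
        import org.scijava.command.Command;
        import org.scijava.log.LogService;
        import org.scijava.plugin.Parameter;
        import org.scijava.plugin.Plugin;

        @Plugin(type = Command.class, menuPath = "Plugins>Report Dimensions")
        public class ReportDimensions implements Command {

            @Parameter
            private Dataset image;   // resolved from the active dataset or a dialog

            @Parameter
            private LogService log;  // injected service, independent of any GUI

            @Override
            public void run() {
                // Datasets are N-dimensional: iterate over every axis, not just X and Y.
                for (int d = 0; d < image.numDimensions(); d++) {
                    log.info("Axis " + d + " (" + image.axis(d).type().getLabel()
                            + "): " + image.dimension(d) + " samples");
                }
            }
        }

    Because the command declares its inputs as annotated parameters rather than reading them from a fixed user interface, the same class can in principle be invoked from the GUI, from scripts, or headlessly, which is the decoupling of data model and user interface that the abstract describes.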

    Infrastructure-as-a-Service Usage Determinants in Enterprises

    The thesis addresses the research question of what determines Infrastructure-as-a-Service (IaaS) usage in enterprises. A wide range of IaaS determinants is collected into an enterprise IaaS adoption model, which is evaluated in a Web survey. Because the economic determinants are especially important, they are investigated separately using a cost-optimizing decision support model. This decision support model is then applied to a potential IaaS use case of a large automobile manufacturer.
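    To make the economic trade-off concrete, the following sketch compares the three-year cost of an on-premises server against pay-per-use IaaS capacity at different utilization levels. All prices and the linear cost model are invented assumptions for illustration, not figures from the thesis.

        // Hypothetical break-even comparison: on-premises server vs. pay-per-use IaaS.
        // All cost figures are assumptions chosen for illustration only.
        public class IaasCostSketch {

            public static void main(String[] args) {
                final double serverPurchase = 12_000.0; // one-off hardware cost (assumed)
                final double serverMonthly  = 300.0;    // power, space, admin per month (assumed)
                final double iaasHourly     = 1.00;     // comparable instance per hour (assumed)
                final int    months         = 36;       // planning horizon

                for (double utilization : new double[] { 0.25, 0.50, 0.75, 1.00 }) {
                    double hours  = 730 * utilization * months; // ~730 hours per month
                    double iaas   = iaasHourly * hours;
                    double onPrem = serverPurchase + serverMonthly * months;
                    System.out.printf("utilization %3.0f%%: IaaS %8.0f vs. on-prem %8.0f -> %s%n",
                            utilization * 100, iaas, onPrem,
                            iaas < onPrem ? "IaaS cheaper" : "on-premises cheaper");
                }
            }
        }

    Under these assumed prices, IaaS wins at low utilization and the owned server wins near full utilization, with the break-even point between 75% and 100%; resolving this kind of trade-off across many more cost factors is what a cost-optimizing decision support model is for.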

    On the Efficient Design and Testing of Dependable Systems Software

    Modern computing systems that enable increasingly smart and complex applications permeate our daily lives. We strive for a fully connected and automated world to simplify our lives and increase comfort by offloading tasks to smart devices and systems, and we have become dependent on the complex and ever-growing ecosystem of software that drives these innovations. With this dependence on complex software systems arises the question whether these systems are dependable, i.e., whether we can actually trust them to perform their intended functions. As software is developed by human beings, it must be expected to contain faults, and we need strategies and techniques that minimize both their number and the severity of their impact, and that scale with the increase in software complexity. Common approaches to achieving dependable operation include fault acceptance and fault avoidance strategies. The former gracefully handle faults when they occur during operation, e.g., by isolating and restarting faulty components, whereas the latter try to remove faults before system deployment, e.g., by applying correctness testing and software fault injection (SFI) techniques. Against this background, this thesis aims at improving the efficiency of fault isolation for operating system kernel components, which are especially critical for dependable operation, as well as the efficiency of dynamic testing activities, to cope with the increasing complexity of software. Using the widely deployed Linux kernel, we demonstrate that partial fault isolation techniques for kernel software components can be enhanced with dynamic runtime profiles to strike a balance, according to user requirements, between the overheads imposed by the isolation mechanism and the achieved degree of isolation. As software complexity increases, comprehensive correctness and robustness assessments using testing and SFI require a substantially increasing number of individual tests whose execution takes considerable time. We study, at different levels of the software stack, whether modern parallel hardware can be employed to mitigate this increase. In particular, we demonstrate that SFI tests can benefit from parallel execution if they are carefully designed and conducted, and we introduce a novel SFI framework to conduct such experiments efficiently. Moreover, we investigate whether existing test suites for correctness testing can already benefit from parallel execution, and we provide an approach that offers a migration path for test suites that were not originally designed for parallel execution.
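    As a rough sketch of the parallel-execution idea, the code below distributes independent fault-injection experiments over a thread pool and collects their outcomes. The harness, the experiment function, and the outcome model are entirely hypothetical; a real SFI campaign would isolate each experiment in a sandbox such as a virtual machine.

        // Hypothetical sketch of a parallel SFI campaign: independent experiments
        // are distributed over a thread pool and their outcomes collected.
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelSfiSketch {

            // Stand-in for one experiment: inject fault #faultId, run the workload,
            // and report whether the system tolerated it. A real campaign would run
            // each experiment in an isolated sandbox (e.g., a virtual machine).
            static boolean runExperiment(int faultId) {
                return faultId % 7 != 0; // dummy outcome for illustration
            }

            public static void main(String[] args) throws Exception {
                int workers = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(workers);

                // Experiments must not share mutable state, so that the parallel
                // campaign yields the same results as a sequential one.
                List<Future<Boolean>> results = new ArrayList<>();
                for (int fault = 0; fault < 100; fault++) {
                    final int id = fault;
                    results.add(pool.submit(() -> runExperiment(id)));
                }

                int tolerated = 0;
                for (Future<Boolean> r : results) {
                    if (r.get()) tolerated++;
                }
                pool.shutdown();
                System.out.println(tolerated + "/100 injected faults tolerated");
            }
        }

    The precondition the abstract points to is that the individual tests be carefully designed for independence; once that holds, the campaign's wall-clock time can shrink roughly with the number of available cores.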