188,014 research outputs found

    Software engineering (Encyclopedia entry)


    Impliance: A Next Generation Information Management Appliance

    "…ably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely the ideas of: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massive parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises.
    Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.
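
    As a rough, hypothetical illustration of idea (c), the sketch below fans one query out over data partitions and merges the partial answers, so that capacity grows by adding partitions. The names (Partition, run_query) and the thread-based parallelism are assumptions made for illustration, not APIs or mechanisms from the Impliance design.

        # A minimal sketch of scale-out by simple massive parallelism: the same
        # predicate is evaluated on every partition independently, and the
        # partial results are concatenated. Hypothetical names throughout.
        from concurrent.futures import ThreadPoolExecutor
        from typing import Callable, List

        class Partition:
            """One slice of the uniformly managed data, structured or not."""
            def __init__(self, records: List[dict]):
                self.records = records

            def scan(self, predicate: Callable[[dict], bool]) -> List[dict]:
                # Each partition answers the query over only its own records.
                return [r for r in self.records if predicate(r)]

        def run_query(partitions: List[Partition],
                      predicate: Callable[[dict], bool]) -> List[dict]:
            # Threads stand in for the appliance's worker nodes; adding
            # partitions (nodes) adds capacity without changing the query.
            with ThreadPoolExecutor() as pool:
                chunks = list(pool.map(lambda p: p.scan(predicate), partitions))
            return [r for chunk in chunks for r in chunk]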

    Effects of sustained communication time on reliability of JXTA-Overlay P2P platform: a comparison study for two fuzzy-based systems

    (c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    In P2P systems, each peer has to obtain information about other peers and propagate it to other peers through its neighbors. It is therefore important for each peer to have a sufficient number of neighbor peers, and even more important that those neighbors be reliable: in reality, a peer might be faulty, or might send obsolete or even incorrect information to other peers. We have implemented a P2P platform called JXTA-Overlay, which defines a set of protocols that standardize how different devices may communicate and collaborate with one another. JXTA-Overlay provides a set of basic functionalities, or primitives, intended to be as complete as possible to satisfy the needs of most JXTA-based applications. In this paper, we present two fuzzy-based systems (called FPRS1 and FPRS2) to improve the reliability of the JXTA-Overlay P2P platform, and we compare the two. FPRS2 is more complex than FPRS1; however, it also considers the sustained communication time, which makes the platform more reliable.
    Peer Reviewed. Postprint (author's final draft).
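
    As a toy illustration of how a fuzzy-based reliability system along the lines of FPRS2 might combine its inputs, the sketch below fuzzifies an assumed peer activity score together with the sustained communication time, fires two rules, and defuzzifies with a weighted average. The inputs, membership functions, and rules are illustrative assumptions; they are not the rule bases from the paper.

        def tri(x, a, b, c):
            # Triangular membership function rising from a, peaking at b, falling to c.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def peer_reliability(activity, sct):
            # Fuzzify both inputs (on a 0..1 scale) into "low"/"high" degrees.
            act_low, act_high = tri(activity, -1, 0, 1), tri(activity, 0, 1, 2)
            sct_low, sct_high = tri(sct, -1, 0, 1), tri(sct, 0, 1, 2)
            # Rule 1: activity high AND sustained communication high -> reliable (1.0).
            # Rule 2: either input low -> unreliable (0.0).
            rules = [(min(act_high, sct_high), 1.0),
                     (max(act_low, sct_low), 0.0)]
            # Weighted-average defuzzification over the fired rules.
            num = sum(w * out for w, out in rules)
            den = sum(w for w, _ in rules)
            return num / den if den else 0.0

        print(peer_reliability(activity=0.8, sct=0.9))  # -> 0.8, leaning reliable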

    Control, Process Facilitation, and Requirements Change in Offshore Requirements Analysis: The Provider Perspective

    Process, technology, and project factors have increasingly driven organizations to offshore early software development phases, such as requirements analysis. This emerging trend necessitates greater control and process facilitation between client and vendor sites. The effectiveness of control and facilitation has, however, not been examined within the context of requirements analysis and change. In this study, we examine the role of control and facilitation in managing changing requirements, and their effect on the success of requirements gathering, in the Indian offshore software development environment. The study found that control by client-site coordinators had a positive impact on requirements analysis success, while vendor-site coordinators did not have a similar influence. Process facilitation by client-site coordinators affected requirements-phase success indirectly, through control. The study concludes with recommendations for research and practice.

    Towards a Framework for Developing Mobile Agents for Managing Distributed Information Resources

    Distributed information management tools allow users to author, disseminate, discover, and manage information within large-scale networked environments, such as the Internet. Agent technology provides the flexibility and scalability necessary to develop such distributed information management applications. We present a layered organisation that is shared by the specific applications we build. Within this organisation we describe an architecture in which mobile agents can move across distributed environments, integrate with local resources and other mobile agents, and communicate their results back to the user.
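
    As a compact sketch of this organisation, the code below shows a mobile agent that carries its task from host to host, runs it against each host's local resources, and reports the accumulated results back to the user. The names (Host, MobileAgent, migrate, report) are hypothetical stand-ins, not the framework's actual API.

        from typing import Callable, Dict, List

        class Host:
            """A networked environment exposing local information resources."""
            def __init__(self, name: str, resources: Dict[str, str]):
                self.name, self.resources = name, resources

        class MobileAgent:
            def __init__(self, task: Callable[[Dict[str, str]], List[str]]):
                self.task, self.findings = task, []

            def migrate(self, host: Host) -> None:
                # On arrival the agent integrates with the local resources,
                # accumulating results as it moves from host to host.
                self.findings.extend(self.task(host.resources))

            def report(self) -> List[str]:
                # Only the findings travel back to the user, not the raw data.
                return self.findings

        # Usage: discover resources whose description mentions "index".
        agent = MobileAgent(lambda res: [k for k, v in res.items() if "index" in v])
        for host in [Host("a", {"cat.db": "site index"}), Host("b", {"log": "raw"})]:
            agent.migrate(host)
        print(agent.report())  # -> ['cat.db']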

    Automatic instantiation of abstract tests on specific configurations for large critical control systems

    Computer-based control systems have grown in size, complexity, distribution, and criticality. In this paper, a methodology is presented for abstract testing of such large control systems in an efficient way: an abstract test is specified directly from the system's functional requirements and is then instantiated into multiple test runs covering a specific configuration, comprising any number of control entities (sensors, actuators, and logic processes). This process is usually performed by hand for each installation of the control system, requiring considerable time and effort and constituting an error-prone verification activity. To automate a safe passage from abstract tests, related to the so-called generic software application, to any specific installation, an algorithm is provided, starting from a reference architecture and a state-based behavioural model of the control software. The presented approach has been applied to a railway interlocking system, demonstrating its feasibility and effectiveness over several years of testing experience.
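
    The sketch below illustrates the instantiation step in miniature: an abstract test written against roles (e.g. "sensor", "logic") is expanded into one concrete test run per binding of configuration entities to those roles. The data shapes and the plain cartesian-product expansion are illustrative assumptions, not the paper's algorithm over the reference architecture and behavioural model.

        from itertools import product
        from typing import Dict, List, Tuple

        AbstractStep = Tuple[str, str]         # (role, action)
        ConcreteRun = List[Tuple[str, str]]    # (entity, action)

        def instantiate(abstract_test: List[AbstractStep],
                        config: Dict[str, List[str]]) -> List[ConcreteRun]:
            roles = sorted({role for role, _ in abstract_test})
            runs = []
            # One concrete run per combination of entities filling the roles.
            for binding in product(*(config[r] for r in roles)):
                env = dict(zip(roles, binding))
                runs.append([(env[role], action) for role, action in abstract_test])
            return runs

        # Usage: a two-step abstract test on a configuration with two sensors.
        test = [("sensor", "raise_alarm"), ("logic", "check_interlock")]
        cfg = {"sensor": ["S1", "S2"], "logic": ["L1"]}
        for run in instantiate(test, cfg):
            print(run)
        # [('S1', 'raise_alarm'), ('L1', 'check_interlock')]
        # [('S2', 'raise_alarm'), ('L1', 'check_interlock')]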