    Unification of Transactions and Replication in Three-Tier Architectures Based on CORBA

    In this paper, we describe a software infrastructure that unifies transactions and replication in three-tier architectures and provides data consistency and high availability for enterprise applications. The infrastructure uses transactions based on the CORBA object transaction service to protect the application data in databases on stable storage, using a roll-backward recovery strategy, and replication based on the fault tolerant CORBA standard to protect the middle-tier servers, using a roll-forward recovery strategy. The infrastructure replicates the middle-tier servers to protect the application business logic processing. In addition, it replicates the transaction coordinator, which renders the two-phase commit protocol nonblocking and, thus, avoids potentially long service disruptions caused by failure of the coordinator. The infrastructure handles the interactions between the replicated middle-tier servers and the database servers through replicated gateways that prevent duplicate requests from reaching the database servers. It implements automatic client-side failover mechanisms, which guarantee that clients know the outcome of the requests that they have made, and retries aborted transactions automatically on behalf of the clients.
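
    The gateways' duplicate-suppression role lends itself to a short sketch. The class below is a minimal, hypothetical illustration (names and interfaces are ours, not the paper's): each middle-tier replica forwards the same logical request under a shared request id, and the gateway executes it against the database only once, replaying the cached result to any later duplicate.

        # Minimal sketch of a deduplicating gateway, assuming every middle-tier
        # replica tags each logical request with the same request_id; all names
        # here are illustrative, not taken from the paper's implementation.
        class ReplicatedGateway:
            def __init__(self, database):
                self.database = database   # any object exposing execute(sql, params)
                self.completed = {}        # request_id -> cached result

            def forward(self, request_id, sql, params=()):
                # A duplicate from another middle-tier replica: replay the
                # cached result instead of hitting the database a second time.
                if request_id in self.completed:
                    return self.completed[request_id]
                result = self.database.execute(sql, params)
                self.completed[request_id] = result
                return result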

    Replicated Invocation

    In today's systems, applications are composed of various components that may be located on different machines. The components may have to collaborate in order to service a client request. More specifically, a client request to one component may trigger a request to another component. Moreover, to ensure fault tolerance, components are generally replicated. This poses the problem of a replicated server invoking another replicated server, which we call the problem of replicated invocation. Replicated invocation has been considered in the context of deterministic servers. However, the problem is more difficult to address when servers are non-deterministic. In this context, work has been done to enforce deterministic execution. In this paper, we consider a different approach: instead of preventing non-deterministic execution of servers, we discuss how to handle it. The paper first discusses the problem of non-deterministic replicated invocation and then proposes a solution to it.
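
    To make the handling of duplicates concrete, here is a minimal sketch of one common building block (our own illustration, not necessarily the paper's mechanism): the invoked server executes each logical invocation exactly once, keyed by an id that all caller replicas share, and returns the cached reply to the other replicas' duplicates, so a non-deterministic handler cannot produce divergent results.

        import threading

        # Sketch of an invoked server that executes each logical invocation
        # once; invocation_id is assumed to be agreed upon by all replicas of
        # the caller. Names are illustrative.
        class InvokedServer:
            def __init__(self, handler):
                self.handler = handler        # possibly non-deterministic operation
                self.replies = {}             # invocation_id -> cached reply
                self.lock = threading.Lock()  # duplicates may arrive concurrently

            def invoke(self, invocation_id, payload):
                with self.lock:
                    if invocation_id not in self.replies:
                        # First copy of this invocation: run the handler once,
                        # so its non-determinism cannot diverge across duplicates.
                        self.replies[invocation_id] = self.handler(payload)
                    return self.replies[invocation_id]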

    The next database revolution

    Database system architectures are undergoing revolutionary changes. Most importantly, algorithms and data are being unified by integrating programming languages with the database system. This gives an extensible object-relational system where nonprocedural relational operators manipulate object sets. Coupled with this, each DBMS is now a web service. This has huge implications for how we structure applications: DBMSs are now object containers. Queues are the first objects to be added. These queues are the basis for transaction processing and workflow applications, and future workflow systems are likely to be built on this core. Data cubes and online analytic processing are now baked into most DBMSs. Beyond that, DBMSs have a framework for data mining and machine learning algorithms: decision trees, Bayes nets, clustering, and time series analysis are built in, and new algorithms can be added. There is a rebirth of column stores for sparse tables and to optimize bandwidth. Text, temporal, and spatial data access methods, along with their probabilistic reasoning, have been added to database systems. Allowing approximate and probabilistic answers is essential for many applications. Many believe that XML and XQuery will be the main data structure and access pattern; database systems must accommodate that perspective. External data increasingly arrives as streams to be compared with historical data, so stream-processing operators are being added to the DBMS. Publish-subscribe systems invert the data-query ratios: incoming data is compared against millions of queries rather than queries searching millions of records. Meanwhile, disk and memory capacities are growing much faster than their bandwidth and latency, so database systems increasingly use huge main memories and sequential disk access. These changes mandate a much more dynamic query optimization strategy, one that adapts to current conditions and selectivities rather than having a static plan. Intelligence is moving to the periphery of the network: each disk and each sensor will be a competent database machine, and relational algebra is a convenient way to program these systems. Database systems are now expected to be self-managing, self-healing, and always-up. We researchers and developers have our work cut out for us in delivering all these features.
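
    The inverted data-query ratio of publish-subscribe systems can be shown with a toy sketch (structure and names are ours, not from the paper): each arriving record is matched against the standing subscription predicates, which are indexed by attribute so a record probes only the predicates that could apply to it.

        from collections import defaultdict

        # Toy subscription index: queries are stored, data streams past them.
        # Structure and names are illustrative only.
        class SubscriptionIndex:
            def __init__(self):
                # attribute -> list of (subscriber, predicate over that attribute)
                self.by_attribute = defaultdict(list)

            def subscribe(self, subscriber, attribute, predicate):
                self.by_attribute[attribute].append((subscriber, predicate))

            def match(self, record):
                # Probe only predicates registered on attributes the record has.
                hits = []
                for attribute, value in record.items():
                    for subscriber, predicate in self.by_attribute[attribute]:
                        if predicate(value):
                            hits.append(subscriber)
                return hits

        index = SubscriptionIndex()
        index.subscribe("alerts", "price", lambda p: p > 100.0)
        print(index.match({"symbol": "XYZ", "price": 101.5}))  # ['alerts']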

    Data center resilience assessment: storage, networking and security

    Data centers (DC) are the core of the national cyber infrastructure. With the incredible growth of critical data volumes in financial institutions, government organizations, and global companies, data centers are becoming larger and more distributed, posing more challenges for operational continuity in the presence of experienced cyber attackers and occasional natural disasters. The main objective of this research work is to present a new methodology for data center resilience assessment. This methodology consists of:
    • Defining data center resilience requirements.
    • Devising a high-level metric for data center resilience.
    • Designing and developing a tool to validate the metric.
    Since computer networks are an important component of the data center architecture, this research work was extended to investigate computer network resilience enhancement opportunities in the areas of routing protocols, redundancy, and server load, with the aim of minimizing network downtime and increasing the time period over which attacks can be resisted. Data center resilience assessment is a complex process, as it involves several aspects such as policies for emergencies, recovery plans, variation in data center operational roles, hosted/processed data types, and data center architectures; this dissertation, however, emphasizes storage, networking, and security. The need for resilience assessment emerged from the gap in existing reliability, availability, and serviceability (RAS) measures. Resilience as an evaluation metric leads to a more proactive perspective in system design and management. The proposed Data Center Resilience Assessment Portal (DC-RAP) is designed to easily integrate various operational scenarios. DC-RAP features a user-friendly interface to assess resilience in terms of performance analysis and recovery speed by collecting the following information: time to detect attacks, time to resist, time to fail, and recovery time. Several sets of experiments were performed. Results obtained from investigating the impact of routing protocols and server load-balancing algorithms on network resilience showed that using a particular routing protocol or load-balancing algorithm can enhance the network resilience level by minimizing downtime and ensuring speedy recovery. Experimental results on the use of social network analysis (SNA) for identifying important routers in a computer network showed that SNA was successful in identifying them; this list of important routers can be used to add redundancy for those routers and so ensure a high level of resilience. Finally, experimental results for testing and validating the data center resilience assessment methodology using DC-RAP showed the ability of the methodology to quantify data center resilience in terms of providing steady performance, minimal recovery time, and maximum attack-resistance time. The main contributions of this work can be summarized as follows:
    • A methodology for evaluating data center resilience has been developed.
    • A Data Center Resilience Assessment Portal (DC-RAP) was implemented for resilience evaluations.
    • The use of social network analysis to improve computer network resilience was investigated.
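
    The abstract names the four intervals DC-RAP collects (time to detect, time to resist, time to fail, recovery time) but not the formula that combines them, so the sketch below is only an illustrative placeholder score, not the dissertation's metric: it treats resilience as the fraction of an incident spent detecting, resisting, and failing rather than recovering.

        # Hypothetical resilience score from the four intervals DC-RAP records.
        # The dissertation's actual formula is not given in the abstract; this
        # placeholder just rewards long resistance relative to total incident time.
        def resilience_score(time_to_detect, time_to_resist, time_to_fail, recovery_time):
            """Toy score in [0, 1]; all arguments are durations in seconds."""
            uptime_under_attack = time_to_detect + time_to_resist + time_to_fail
            total = uptime_under_attack + recovery_time
            return uptime_under_attack / total if total > 0 else 1.0

        # Example: detected after 30 s, resisted for 600 s, failed in 10 s,
        # recovered in 120 s -> 640 / 760 ~= 0.842.
        print(round(resilience_score(30, 600, 10, 120), 3))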

    Towards an Enhanced Protocol for Improving Transactional Support in Interoperable Service Oriented Application-Based (SOA-Based) Systems

    When using a shared database for distributed transactions, it is often difficult to connect business processes and software components running on disparate platforms into a single transaction. For instance, one platform may add or update data, and then another platform may later access the changed or added data. This severely limits transactional capabilities across platforms. The situation becomes more acute when concurrent transactions with interleaving operations span different applications and resources. Addressing this problem in an open, dynamic and distributed environment of web services poses special challenges, and it still remains an open issue. Following the broad adoption and use of the standard Web Services Transaction Protocols, requirements have grown for the addition of extended protocols to handle problems that exist within the context of interoperable service-oriented applications. Most extensions to the current standard WS-Transaction Protocols still lack proper mechanisms for error handling, concurrency control, transaction recovery, consolidation of multiple transaction calls into a single call, and secure reporting and tracing of suspicious activities. In this research, we will first extend the current standard WS-Transaction Framework, and then propose an enhanced protocol (that can be deployed within the extended framework) to improve transactional and security support for asynchronous applications in a distributed environment. A hybrid methodology which incorporates service-oriented engineering and rapid application development will be used to develop a procurement system (which represents an interoperable service-oriented application) that integrates our proposed protocol. We will empirically evaluate and compare the performance of the enhanced protocol with conventional distributed protocols (such as 2PL) in terms of QoS parameters (throughput, response time, and resource utilization), availability of the application, database consistency, and the effect of locking on latency, among other factors.
    Keywords: database, interoperability, security, concurrent transaction, web services, protocol, service-oriented
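
    Since the evaluation compares against two-phase locking (2PL), a minimal strict-2PL sketch may help as background for that baseline (this is the textbook technique, not the paper's proposed protocol; all names are illustrative): a transaction acquires a lock the first time it touches an item (growing phase) and releases nothing until commit (shrinking phase).

        import threading

        # Minimal strict two-phase locking (2PL) sketch; illustrative only.
        class LockManager:
            def __init__(self):
                self.locks = {}  # item -> threading.Lock, created on first use

            def lock_for(self, item):
                return self.locks.setdefault(item, threading.Lock())

        class Transaction:
            def __init__(self, lock_manager):
                self.lm = lock_manager
                self.held = {}  # item -> lock taken during the growing phase

            def _acquire(self, item):
                if item not in self.held:
                    lock = self.lm.lock_for(item)
                    lock.acquire()  # blocks while another transaction holds it
                    self.held[item] = lock

            def read(self, store, item):
                self._acquire(item)
                return store.get(item)

            def write(self, store, item, value):
                self._acquire(item)
                store[item] = value

            def commit(self):
                # Strict 2PL: every lock is released only here, at the end,
                # which guarantees serializable interleavings.
                for lock in self.held.values():
                    lock.release()
                self.held.clear()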

    Computing and Information Science

    Cornell University Courses of Study Vol. 102 2010/201