3,242 research outputs found

    Next generation software environments : principles, problems, and research directions

    The past decade has seen a burgeoning of research and development in software environments. Conferences have been devoted to the topic of practical environments, journal papers produced, and commercial systems sold. Given all the activity, one might expect a great deal of consensus on issues, approaches, and techniques. This is not the case, however. Indeed, the term "environment" is still used in a variety of conflicting ways. Nevertheless, substantial progress has been made, and we are at least nearing consensus on many critical issues. The purpose of this paper is to characterize environments, describe several important principles that have emerged in the last decade or so, note current open problems, and describe some approaches to these problems, with particular emphasis on the activities of one large-scale research program, the Arcadia project. Consideration is also given to two related topics: empirical evaluation and technology transition. That is, how can environments and their constituents be evaluated, and how can new developments be moved effectively into the production sector?

    DBKnot: A Transparent and Seamless, Pluggable Tamper Evident Database

    Database integrity is crucial to organizations that rely on databases of important data, yet such databases are vulnerable to internal fraud. Tampering by malicious insiders who hold high technical authorization over the infrastructure, or by employees whose access has been compromised by external attackers, is an important attack vector. This thesis addresses this challenge for a class of problems where data is append-only and immutable. Examples of operations where data does not change are a) financial institutions (banks, accounting systems, stock markets, etc.), b) registries and notary systems where important data is kept but is never subject to change, and c) system logs that must be kept intact for performance and forensic inspection if needed. The target of the approach is implementation seamlessness, with little or no change required in existing systems. Transaction tracking for tamper detection is done by serially and cumulatively hashing transactions together (a hash chain) while using an external time-stamper and signer to sign the linkages. This allows transactions to be tracked without any of the organization’s data leaving its premises for a third party, which also reduces the performance impact of tracking. This is done by adding a tracking layer and embedding it inside the data workflow while keeping it as non-invasive as possible. DBKnot implements these features a) natively inside databases and b) embedded inside Object Relational Mapping (ORM) frameworks, and finally c) outlines a direction for implementing it as a stand-alone microservice reverse proxy. A prototype ORM and database layer has been developed and tested for seamlessness of integration and ease of use. Additionally, different optimizations that introduce pipelining parallelism into the hashing/signing process were tested to check their impact on performance. Stock-market information was used for experimentation with DBKnot, and the initial results showed slightly less than a 100% increase in transaction time for the most basic, sequential, synchronous version of DBKnot. The signing and hashing overhead per record does not increase significantly with the amount of data. A number of alternative optimizations to the design, validated by testing, resulted in a significant increase in performance.
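
    To make the chaining scheme concrete, the following is a minimal Python sketch of serial, cumulative transaction hashing as the abstract describes it; the function and record names are illustrative, not DBKnot's actual API, and the external time-stamping/signing step is only indicated in a comment.

    ```python
    import hashlib

    def chain_records(records, prev_digest=b""):
        """Serially and cumulatively hash append-only records.

        Each digest covers all prior records, so altering any earlier
        row invalidates every later link in the chain.
        """
        digests = []
        for record in records:
            h = hashlib.sha256()
            h.update(prev_digest)        # link this record to the whole history
            h.update(record.encode())
            prev_digest = h.digest()
            digests.append(prev_digest.hex())
        return digests

    rows = ["txn1: credit 100", "txn2: debit 40"]
    print(chain_records(rows))
    # Periodically, only the head digest would be sent to an external
    # time-stamper/signer, so no organizational data leaves the premises.
    ```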

    A Framework for Data Sharing in Computer Supported Cooperative Environments

    Concurrency control is an indispensable part of any information sharing system. Cooperative work introduces new requirements for concurrency control which cannot be met by existing applications and database management systems developed for non-cooperative environments. The emphasis of concurrency control in conventional database management systems is to keep users and their applications from inadvertently corrupting data, rather than to support a workgroup developing a product together. This insular approach is necessary because applications that access the database have been built with the assumptions that they have exclusive access to the data they manipulate and that users of these applications are generally oblivious of one another. These assumptions, however, run counter to the premise of cooperative work, in which human-human interaction is emphasized among a group of users utilizing multiple applications to jointly accomplish a common goal. Consequently, applying conventional approaches to concurrency control is not only inappropriate for cooperative data sharing but can actually hinder group work. Computer support for cooperative work must therefore adopt a fresh approach to concurrency control, one that promotes group work as much as possible without sacrificing the ability to guarantee system consistency. This research presents a new framework to support data sharing in computer supported cooperative environments; in particular, product development environments where computer support for cooperation among distributed and diverse product developers is essential to boost productivity. The framework is based on an extensible object-oriented data model, where data are represented as a collection of interrelated objects with ancillary attributes used to facilitate cooperation. The framework offers a flexible model of concurrency control and provides support for various levels of cooperation among product developers and their applications. In addition, the framework enhances group activity by providing the functionality to implement user-mediated consistency and to track the progress of group work. In this dissertation, we present the architecture of the framework; we describe the components of the architecture, their operation, and how they interact to support cooperative data sharing.
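
    As a rough illustration of non-exclusive sharing, here is a minimal Python sketch of a shared object whose writes never lock collaborators out but instead notify them, in the spirit of the ancillary attributes and user-mediated consistency described above; all class and method names are hypothetical, not the framework's actual interface.

    ```python
    import threading
    from typing import Callable

    class CooperativeObject:
        """Toy shared object: writes are serialized internally but never
        block other collaborators, who are notified instead."""

        def __init__(self):
            self.data = {}
            self.version = 0       # ancillary attribute: progress/change tracking
            self.watchers = []     # ancillary attribute: group awareness
            self._lock = threading.Lock()

        def subscribe(self, on_change: Callable) -> None:
            # Register a collaborator callback for awareness notifications.
            self.watchers.append(on_change)

        def update(self, key, value, author) -> None:
            # Non-exclusive write: apply the change, bump the version, then
            # notify the group so any conflict can be resolved by the users
            # themselves (user-mediated consistency).
            with self._lock:
                self.data[key] = value
                self.version += 1
                snapshot = self.version
            for notify in self.watchers:
                notify(author, key, value, snapshot)

    doc = CooperativeObject()
    doc.subscribe(lambda who, k, v, ver: print(f"[v{ver}] {who} set {k} = {v}"))
    doc.update("spec.section3", "draft text", author="alice")
    ```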

    Semantically Resolving Type Mismatches in Scientific Workflows

    Scientists are increasingly utilizing Grids to manage large data sets and execute scientific experiments on distributed resources. Scientific workflows are used as a means of modeling and enacting scientific experiments. Windows Workflow Foundation (WF) is a major component of Microsoft’s .NET technology which offers lightweight support for long-running workflows. It provides a comfortable graphical and programmatic environment for the development of extended BPEL-style workflows. WF’s visual features ease the syntactic composition of Web services into scientific workflows but do nothing to assure that information passed between services has consistent semantic types or representations, or that deviant flows, errors, and compensations are handled meaningfully. In this paper we introduce SAWSDL-compliant annotations for WF and use them with a semantic reasoner to guarantee semantic type correctness in scientific workflows. Examples from bioinformatics are presented.
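
    The semantic type check itself can be pictured with a small Python sketch; a toy is-a hierarchy stands in here for the SAWSDL annotations and the semantic reasoner, and the type names are invented bioinformatics examples rather than the paper's ontology.

    ```python
    # Toy is-a ontology: child type -> parent type.
    ONTOLOGY = {
        "FastaProteinSequence": "ProteinSequence",
        "ProteinSequence": "Sequence",
        "DnaSequence": "Sequence",
    }

    def is_subtype(candidate: str, required: str) -> bool:
        # Walk up the hierarchy; a type satisfies itself or any ancestor.
        while candidate is not None:
            if candidate == required:
                return True
            candidate = ONTOLOGY.get(candidate)
        return False

    def check_connection(output_type: str, input_type: str) -> None:
        # Allow a service output to feed an input only when semantically compatible.
        if not is_subtype(output_type, input_type):
            raise TypeError(f"semantic mismatch: {output_type} -> {input_type}")

    check_connection("FastaProteinSequence", "Sequence")   # passes
    # check_connection("DnaSequence", "ProteinSequence")   # would raise TypeError
    ```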

    Comparative Analysis of Fullstack Development Technologies: Frontend, Backend and Database

    Accessing websites from a variety of devices has brought changes to the field of application development, and the choice of cross-platform, reusable frameworks is crucial in this era. This thesis evaluates front-end, back-end, and database technologies to address the status quo. Study A explores front-end development, focusing on Angular.js and React.js. Comparable web applications were created with each framework and evaluated locally, and important insights were obtained through benchmark tests, Lighthouse metrics, and architectural evaluations. React.js proves to be the performance leader in spite of the possible influence of a virtual machine, opening the door for additional research. Study B delves into backend scripting by contrasting Node.js with PHP. The efficiency of sorting algorithms (binary, bubble, quick, and heap) is the main subject of this study; the performance measurement tool is Apache JMeter, and the most important indicator is latency. Study C sheds light on database systems by comparing the performance of NoSQL and SQL, with a particular emphasis on MongoDB for NoSQL. In a time of enormous data volumes, reliable technologies are necessary for data management. The five basic database operations examined with Apache JMeter are insert, select, update, delete, and aggregate, with elapsed time as the performance indicator. The results showed that the elapsed time for insert operations was significantly lower in NoSQL than in SQL, and the p-value for each operation was less than 0.05, indicating that the performance difference is statistically significant. The results also showed that the elapsed times of update, delete, select, and aggregate operations are lower in NoSQL than in SQL, again suggesting a significant performance difference between SQL and NoSQL. These studies are combined in this thesis to provide a comprehensive understanding of database management, backend programming, and development frameworks. The results give developers and organisations the information they need to make wise decisions in this constantly changing environment and to satisfy the expectations of a dynamic and diverse technology landscape. INDEX WORDS: Framework, JavaScript, Frontend, React.js, Angular.js, Node.js, PHP, Backend, Technology, Algorithms, Performance, Apache JMeter, T-test, SQL, NoSQL, Database management systems, Performance comparison, Data operations, Decision-making
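
    For a sense of how such comparisons are judged, here is a hedged Python sketch of the two-sample t-test underlying the reported p-values; the latency samples are synthetic stand-ins for figures JMeter might export, not the thesis's measurements.

    ```python
    from scipy import stats

    # Hypothetical elapsed times (ms) for one database operation.
    sql_ms = [12.1, 11.8, 12.4, 13.0, 12.6, 11.9]
    nosql_ms = [7.2, 6.9, 7.5, 7.1, 7.4, 6.8]

    # Welch's two-sample t-test: do the mean latencies differ?
    t_stat, p_value = stats.ttest_ind(sql_ms, nosql_ms, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("The latency difference is statistically significant.")
    ```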