
    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing and information systems is ubiquitous computing. The technology-oriented issues of ubiquitous computing systems have led researchers to focus on feasibility studies of the technologies rather than on building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted, and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model structured in three levels: characteristics, sub-characteristics, and metrics. However, this scheme is unlikely to be directly applicable to ubiquitous computing environments, which differ considerably from conventional software, so considerable attention is being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, and in some cases with measures. In addition, the quality model has been applied in an industrial ubiquitous computing setting. This application revealed that, while the approach is sound, some parts require further development in future work.
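
    To make the three-level structure concrete, below is a minimal Python sketch of a characteristic / sub-characteristic / metric hierarchy with a simple weighted aggregation into a score. The names, weights, and aggregation rule are illustrative assumptions, not the quality model actually proposed in the paper.

```python
# Hypothetical sketch of a three-level quality model (characteristic ->
# sub-characteristic -> metric), loosely following the ISO/IEC 9126 layering
# described above. Names, weights, and the aggregation rule are assumptions
# chosen only for illustration.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float          # normalized measurement in [0, 1]
    weight: float = 1.0

@dataclass
class SubCharacteristic:
    name: str
    metrics: list = field(default_factory=list)

    def score(self) -> float:
        total = sum(m.weight for m in self.metrics)
        return sum(m.value * m.weight for m in self.metrics) / total if total else 0.0

@dataclass
class Characteristic:
    name: str
    sub_characteristics: list = field(default_factory=list)

    def score(self) -> float:
        subs = self.sub_characteristics
        return sum(s.score() for s in subs) / len(subs) if subs else 0.0

# Example: evaluating a ubiquitous-computing product against one characteristic.
adaptability = Characteristic("context adaptability", [
    SubCharacteristic("sensing accuracy", [Metric("location error rate", 0.8)]),
    SubCharacteristic("response time",    [Metric("adaptation latency",  0.6)]),
])
print(f"{adaptability.name}: {adaptability.score():.2f}")
```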

    Distributed Concurrent Persistent Languages: An Experimental Design and Implementation

    A universal persistent object store is a logical space of persistent objects whose localities span machines reachable over networks. It provides a conceptual framework in which, on the one hand, the distribution of data is transparent to application programmers and, on the other, the store semantics of conventional languages is preserved. This means that the manipulation of persistent objects on remote machines is both syntactically and semantically the same as for local data. Consequently, many aspects of distributed programming, in which computation tasks cooperate across different processors and different stores, can be addressed within the confines of persistent programming. The work reported in this thesis is a logical generalization of the notion of persistence to the context of distribution. The concept of a universal persistent store is founded upon a universal addressing mechanism which augments existing addressing mechanisms. The universal addressing mechanism is realized through remote pointers which, although they contain more locality information than ordinary pointers, do not require architectural changes. Moreover, these remote pointers are transparent to programmers. A language, Distributed PS-algol, is designed to experiment with this idea. The novel features of the language include: lightweight processes with a flavour of distribution, mutexes as the store-based synchronization primitive, and a remote procedure call mechanism as the message-based interprocess communication mechanism. Furthermore, the advantages of shared-store programming and of the network architecture are obtained by introducing the programming concept of locality in an unobtrusive manner. A characteristic of the underlying addressing mechanism is that data are never copied to satisfy remote demands, except where efficiency can be gained without compromising the semantics of the data. A remote store operation model is described to effect remote updates. It is argued that this choice is the most natural, given that remote store operations resemble remote procedure calls.
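
    The remote-pointer idea can be illustrated with a small Python sketch: a pointer that carries locality information (an owning host) alongside a local object identifier, and a remote store operation that ships updates to the owning node rather than copying the object. All names and the in-memory "network" are assumptions made for illustration; the thesis's PS-algol implementation differs in detail.

```python
# Minimal sketch of remote pointers and a remote store operation, assuming a
# toy in-memory network of nodes. This is not the Distributed PS-algol design,
# only an illustration of the addressing idea described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class RemotePointer:
    host: str      # locality information absent from an ordinary pointer
    oid: int       # object identifier within the owning store

class Node:
    def __init__(self, host: str):
        self.host = host
        self.store = {}          # oid -> object state (the local persistent store)

    def dereference(self, ptr: RemotePointer, network):
        if ptr.host == self.host:                 # local case: ordinary lookup
            return self.store[ptr.oid]
        return network[ptr.host].store[ptr.oid]   # remote case: same syntax for the caller

    def remote_store(self, ptr: RemotePointer, value, network):
        # The update is performed at the owning node, like a remote procedure
        # call, so the object is never copied to satisfy the write.
        network[ptr.host].store[ptr.oid] = value

# Two nodes sharing one logical ("universal") store of persistent objects.
network = {"a": Node("a"), "b": Node("b")}
network["b"].store[1] = "initial"
p = RemotePointer(host="b", oid=1)
network["a"].remote_store(p, "updated", network)
print(network["a"].dereference(p, network))   # -> "updated"
```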

    Realizing Adaptive Process-aware Information Systems with ADEPT2

    In dynamic environments it must be possible to quickly implement new business processes, to enable ad-hoc deviations from the defined business processes on demand (e.g., by dynamically adding, deleting, or moving process activities), and to support dynamic process evolution (i.e., to propagate process schema changes to already running process instances). These fundamental requirements must be met without compromising the consistency and robustness of the process-aware information system. In this paper we describe how these challenges have been addressed in the ADEPT2 process management system. Our overall vision is to provide a next-generation technology for the support of dynamic processes, which enables full process lifecycle management and which can be applied to a variety of application domains.
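
    The kind of schema evolution described here can be sketched with a toy model in Python: a process schema as an ordered list of activities, an insertion change, and a propagation step that migrates only those running instances that have not yet passed the change region. The model, migration criterion, and names are illustrative assumptions, not ADEPT2's actual change framework or correctness criteria.

```python
# Toy illustration of dynamic process evolution: apply a schema change to
# already-running instances only if they have not yet executed past the
# change point. Assumed names and a deliberately simplified migration rule.

class ProcessInstance:
    def __init__(self, schema):
        self.schema = list(schema)
        self.position = 0                 # index of the next activity to execute

def insert_activity(schema, new_activity, after):
    """Schema change: add an activity right after an existing one."""
    idx = schema.index(after) + 1
    return schema[:idx] + [new_activity] + schema[idx:]

def propagate(instances, old_schema, new_schema, after):
    """Migrate only instances that have not yet passed the change region,
    so already-executed work is never invalidated."""
    change_point = old_schema.index(after) + 1
    for inst in instances:
        if inst.position <= change_point:
            inst.schema = list(new_schema)

# Example: add "credit check" after "receive order" while instances are running.
schema = ["receive order", "pack goods", "ship"]
running = [ProcessInstance(schema), ProcessInstance(schema)]
running[1].position = 2                   # this instance is already past the change region
new_schema = insert_activity(schema, "credit check", after="receive order")
propagate(running, schema, new_schema, after="receive order")
print(running[0].schema)  # migrated: includes "credit check"
print(running[1].schema)  # unchanged: keeps the old schema
```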

    The Vadalog System: Datalog-based Reasoning for Knowledge Graphs

    Over the past years, there has been a resurgence of Datalog-based systems in the database community as well as in industry. In this context, it has been recognized that to handle the complex knowledge-based scenarios encountered today, such as reasoning over large knowledge graphs, Datalog has to be extended with features such as existential quantification. Yet, Datalog-based reasoning in the presence of existential quantification is in general undecidable. Many efforts have been made to define decidable fragments. Warded Datalog+/- is a very promising one, as it captures PTIME complexity while allowing ontological reasoning. Yet so far, no implementation of Warded Datalog+/- was available. In this paper we present the Vadalog system, a Datalog-based system for performing complex logic reasoning tasks, such as those required in advanced knowledge graphs. The Vadalog system is Oxford's contribution to the VADA research programme, a joint effort of the universities of Oxford, Manchester and Edinburgh and around 20 industrial partners. As the main contribution of this paper, we illustrate the first implementation of Warded Datalog+/-, a high-performance Datalog+/- system utilizing an aggressive termination control strategy. We also provide a comprehensive experimental evaluation. Comment: Extended version of VLDB paper <https://doi.org/10.14778/3213880.3213888>
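
    For readers unfamiliar with Datalog, the sketch below shows plain bottom-up (fixpoint) evaluation of a single recursive rule in Python. It illustrates only the baseline that Warded Datalog+/- extends with existential quantification; it is not the Vadalog system's evaluation strategy or its termination control, and the relation name is a made-up example.

```python
# Naive bottom-up fixpoint for one plain Datalog rule (transitive closure over
# a hypothetical "controls" relation in an ownership knowledge graph):
#   controls(X, Z) :- controls(X, Y), controls(Y, Z).
# Included only as a baseline illustration; Vadalog's Warded Datalog+/-
# evaluation is far more sophisticated.

def transitive_closure(facts):
    """Repeatedly apply the rule until no new facts are derived (fixpoint)."""
    derived = set(facts)
    while True:
        new = {(x, z)
               for (x, y1) in derived
               for (y2, z) in derived
               if y1 == y2} - derived
        if not new:
            return derived
        derived |= new

controls = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(controls)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```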

    Category theory for scientists (Old version)

    There are many books designed to introduce category theory to either a mathematical audience or a computer science audience. In this book, our audience is the broader scientific community. We attempt to show that category theory can be applied throughout the sciences as a framework for modeling phenomena and communicating results. In order to target the scientific audience, this book is example-based rather than proof-based. For example, monoids are framed in terms of agents acting on objects, sheaves are introduced with primary examples coming from geography, and colored operads are discussed in terms of their ability to model self-similarity. A new version with solutions to exercises will be available through MIT Press. Comment: 267 pages, 5 chapters, 280 exercises, an index. This book was written as course notes for a special subjects Math class at MIT called "18-S996: Category Theory for scientists", taught in Spring 2013. The class had a diverse enrollment: at the end, the number of registered students was 18 = 7 undergrad + 11 grad = 5 math + 4 EECS + 3 physics + 3 engineering + 3 other.
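
    As a concrete instance of the "agents acting on objects" framing of monoids mentioned above, here is a tiny Python example of a monoid action: the non-negative integers under addition acting on a 12-hour clock state. The example is our own choice, made only to illustrate the phrasing, and is not taken from the book.

```python
# Toy monoid action: advancing a clock by h hours. Advancing by h1 then h2 is
# the same as advancing by h1 + h2, and advancing by 0 (the identity) does
# nothing, which is exactly the compatibility a monoid action requires.
def act(hours: int, clock_state: int) -> int:
    return (clock_state + hours) % 12

assert act(3, act(4, 9)) == act(3 + 4, 9)   # compatible with the monoid operation
assert act(0, 9) == 9                       # the identity element acts trivially
print(act(5, 11))                           # -> 4
```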

    Recover Data about Detected Defects of Underground Metal Elements of Constructions in Amazon Elasticsearch Service

    This paper examines data manipulation in terms of data recovery using cloud computing and a search engine. Accidental deletion or problems with the remote service can cause information loss, with unpredictable consequences, since the data must be re-collected; in some cases this is not possible due to system features. The primary purpose of this work is to offer solutions for recovering the collected data on detected defects of underground metal structural elements using modern information technologies. The main factors that affect the durability of underground metal structural elements are the external action of the soil environment and constant maintenance-free use. Defects can occur in several places, so control must be carried out along the entire length of the underground network. To avoid the loss of essential data, approaches for recovery using Amazon Web Services and a developed web service based on the REST architecture are considered. The paper proposes a general algorithm for the system that collects and monitors data on defects of underground metal structural elements. The study demonstrates the possibility of data recovery for the developed system using automatic snapshots or backup data duplication.
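
    A minimal sketch of the snapshot-based recovery path mentioned above is given below in Python, using the standard Elasticsearch snapshot REST API to list automated snapshots and restore an index. The domain endpoint, index name, the automated repository name ("cs-automated"), and the absence of request signing are all assumptions for illustration; production Amazon Elasticsearch Service domains typically require SigV4-signed requests, and this is not necessarily the workflow implemented in the paper's system.

```python
# Illustrative recovery sketch: list snapshots in an automated repository and
# restore a defect-data index via the Elasticsearch snapshot REST API.
# ENDPOINT, REPO, and the index name are hypothetical placeholders.
import requests

ENDPOINT = "https://my-domain.example.com"     # hypothetical domain endpoint
REPO = "cs-automated"                          # assumed automated snapshot repository name

# 1. List available snapshots in the repository.
snapshots = requests.get(f"{ENDPOINT}/_snapshot/{REPO}/_all").json()

# 2. Restore the defect-data index from the most recent snapshot.
latest = snapshots["snapshots"][-1]["snapshot"]
restore = requests.post(
    f"{ENDPOINT}/_snapshot/{REPO}/{latest}/_restore",
    json={"indices": "underground-defects", "include_global_state": False},
)
print(restore.status_code, restore.json())
```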