
    Reporting methodological search filter performance comparisons: a literature review


    Maintaining consistency in distributed systems

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs and is controlled using mutual exclusion constructs such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications using virtual synchrony to interoperate with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
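    As a minimal, hypothetical sketch (not taken from the paper) of the first two styles mentioned above, the fragment below makes a shared counter linearizable by guarding every operation with a mutual-exclusion lock, so concurrent increments from several threads never interleave mid-update:

```python
import threading

class LinearizableCounter:
    """A shared counter whose operations appear atomic (linearizable)
    because every access is serialized by a mutual-exclusion lock."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()   # monitor-style mutual exclusion

    def increment(self):
        with self._lock:                # only one thread may enter at a time
            self._value += 1

    def read(self):
        with self._lock:
            return self._value

counter = LinearizableCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.read())  # always 4000: increments never interleave
```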

    Rigorous Design of Distributed Transactions

    Database replication is traditionally envisaged as a way of increasing fault tolerance and availability. It is advantageous to replicate data when the transaction workload is predominantly read-only. However, updating replicated data within a transactional framework is a complex affair due to failures and race conditions among conflicting transactions. This thesis investigates various mechanisms for the management of replicas in a large distributed system, formalizing and reasoning about the behavior of such systems using Event-B. We begin by studying current approaches to the management of replicated data and explore the use of broadcast primitives for processing transactions. Subsequently, we outline how a refinement-based approach can be used to develop a reliable replicated database system that ensures atomic commitment of distributed transactions using ordered broadcasts. Event-B is a formal technique that consists of describing the problem rigorously in an abstract model, introducing solutions or design details in refinement steps to obtain more concrete specifications, and verifying that the proposed solutions are correct. This technique requires the discharge of proof obligations for consistency checking and refinement checking. The B tools provide significant automated support for generating and discharging the proof obligations; the majority are proved by the automatic prover, while some complex proof obligations require the interactive prover. These proof obligations also help discover new system invariants. The proof obligations and the invariants help us understand the complexity of the problem and the correctness of the solutions; they also provide clear insight into the system and enhance our understanding of why a design decision should work. The objective of the research is to demonstrate a technique for the incremental construction of formal models of distributed systems and for reasoning about them, to develop a technique for discovering gluing invariants when the prover fails to discharge a proof obligation automatically, and to develop guidelines for the verification of distributed algorithms using abstraction and refinement.
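    The abstract does not reproduce the Event-B models themselves; purely as a hypothetical sketch of the idea behind ordered broadcasts (replica structure and transaction format invented here), the fragment below simulates replicas that apply transactions in the single total order fixed by an ordered broadcast, so every replica makes the same commit decisions and reaches the same state:

```python
# Hypothetical sketch (not the thesis's Event-B models): replicas apply
# transactions in the total order fixed by an ordered (atomic) broadcast,
# so all replicas commit the same transactions in the same order.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}          # replicated key/value store
        self.committed = []     # transaction ids committed, in delivery order

    def deliver(self, txn):
        """Called in the same order on every replica (total-order delivery)."""
        tid, updates, precondition = txn
        if precondition(self.data):   # deterministic check -> same decision everywhere
            self.data.update(updates)
            self.committed.append(tid)

# One global delivery sequence stands in for the ordered-broadcast primitive.
broadcast_order = [
    ("t1", {"x": 1}, lambda db: True),
    ("t2", {"x": 2}, lambda db: db.get("x") == 1),   # conflicts resolved identically
    ("t3", {"y": 7}, lambda db: "x" in db),
]

replicas = [Replica("r1"), Replica("r2"), Replica("r3")]
for txn in broadcast_order:
    for r in replicas:      # every replica sees the same sequence
        r.deliver(txn)

assert all(r.committed == ["t1", "t2", "t3"] for r in replicas)
assert all(r.data == {"x": 2, "y": 7} for r in replicas)
```

    Because every replica evaluates the same deterministic precondition against the same prior state, conflicting transactions are resolved identically everywhere, which is the property the ordered broadcast provides.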

    A platform for discovering and sharing confidential ballistic crime data.

    Criminal investigations generate large volumes of complex data that detectives have to analyse and understand. This data tends to be "siloed" within individual jurisdictions, and re-using it in other investigations can be difficult. Investigations into trans-national crimes are hampered by the problem of discovering relevant data held by agencies in other countries and of sharing those data. Gun crimes are one major type of incident that showcases this: guns are easily moved across borders and used in multiple crimes, but finding that a weapon was used elsewhere in Europe is difficult. In this paper we report on the Odyssey Project, an EU-funded initiative to mine, manipulate and share data about weapons and crimes. The project demonstrates the automatic combining of data from disparate repositories for cross-correlation and automated analysis. The data arrive from different cultural domains, with multiple reference models, via real-time data feeds and historical databases.
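    The abstract does not give implementation details; purely as a hypothetical illustration of the kind of cross-correlation described (field names, records and matching rule are invented, not taken from the Odyssey Project), the snippet below links incident records from two national repositories on a normalised weapon identifier:

```python
# Hypothetical illustration only: field names and matching rule are invented.

def normalise_serial(serial: str) -> str:
    """Normalise weapon serial numbers so records from different agencies compare equal."""
    return "".join(ch for ch in serial.upper() if ch.isalnum())

uk_incidents = [
    {"case": "UK-2041", "weapon_serial": "ab-1234-x", "city": "Manchester"},
]
nl_incidents = [
    {"case": "NL-0077", "weapon_serial": "AB1234X", "city": "Rotterdam"},
]

# Index one repository by normalised serial, then probe it with the other.
index = {}
for rec in nl_incidents:
    index.setdefault(normalise_serial(rec["weapon_serial"]), []).append(rec)

for rec in uk_incidents:
    for match in index.get(normalise_serial(rec["weapon_serial"]), []):
        print(f'{rec["case"]} and {match["case"]} involve the same weapon')
```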

    WARP Business Intelligence System

    Continuous delivery (CD) facilitates the software release process: continuous integration and deployment pipelines allow software to be tested several times before going into production. In Business Intelligence (BI), software releases tend to be manual and deprived of pipelines, and version control is often deficient because of the nature of the projects, which involve data that cannot be versioned. How can CD concepts be applied to an existing BI project with extensive legacy code and no version control over project objects? Only a few organizations have an automated release process for their BI projects, because the nature of the projects makes it difficult to implement CD to the full extent. The problem was therefore tackled in stages: first, the implementation of version control that works for the organization; then the establishment of the environments necessary for the pipelines; and finally the creation of a test pipeline for one of the BI projects, demonstrating the success of this approach. To evaluate the solution, the main beneficiaries (stakeholders and engineers) were asked to answer questionnaires about their experience with the data warehouse before and after the use of CD. Because each release is tested before going into production, the use of CD will improve software quality in the long run and allow software to be released more frequently.
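    The abstract does not list the pipeline steps; as a hedged sketch of the kind of automated check such a test pipeline could run before promoting a BI release (table names, columns and data are invented, and an in-memory SQLite database stands in for the real warehouse), the script below verifies two simple data-quality invariants:

```python
# Hypothetical pipeline test: a CI job could run checks like these against a
# staging copy of the data warehouse before promoting a BI release to production.
import sqlite3

def data_quality_failures(conn: sqlite3.Connection) -> list[str]:
    failures = []
    # 1. The fact table must not be empty after the nightly load.
    (rows,) = conn.execute("SELECT COUNT(*) FROM fact_sales").fetchone()
    if rows == 0:
        failures.append("fact_sales is empty")
    # 2. Every fact row must reference an existing customer dimension row.
    (orphans,) = conn.execute(
        "SELECT COUNT(*) FROM fact_sales f "
        "LEFT JOIN dim_customer c ON f.customer_id = c.customer_id "
        "WHERE c.customer_id IS NULL"
    ).fetchone()
    if orphans:
        failures.append(f"{orphans} fact row(s) reference unknown customers")
    return failures

# Seed a toy staging warehouse so the check can be demonstrated end to end.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'Acme');
    INSERT INTO fact_sales VALUES (10, 1, 99.5), (11, 2, 12.0);  -- customer 2 is missing
""")

problems = data_quality_failures(conn)
if problems:
    print("release blocked:", "; ".join(problems))   # prints the orphan-row failure
else:
    print("data-quality checks passed; release can proceed")
```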

    Knowledge management : a learning mix perspective

    How can an organization define policy for managing its knowledge? In this article, an integrative model is proposed: the Learning Mix. It consists of 4 interacting facets: Information Technology, Learning Structure, Knowledge Portfolio and Learning Identity. Keywords: knowledge management; learning; learning model

    Knowledge Transfer Needs and Methods

    INE/AUTC 12.3

    Log file analysis for disengagement detection in e-Learning environments

