
    Statistical structures for internet-scale data management

    Efficient query processing in traditional database management systems relies on statistics over the base data. For centralized systems there is a rich body of research on such statistics, from simple aggregates to more elaborate synopses such as sketches and histograms. For Internet-scale distributed systems, on the other hand, statistics management still poses major challenges. With this work we aim to endow peer-to-peer data management over structured overlays with the power of such statistical information, with emphasis on meeting the scalability challenge. To this end, we first contribute efficient, accurate, and decentralized algorithms that compute key aggregates such as Count, CountDistinct, Sum, and Average. We then show how to construct several types of histograms, including simple Equi-Width, Average-Shifted Equi-Width, and Equi-Depth histograms. Finally, we present a full-fledged open-source implementation of these tools for distributed statistical synopses and report on a comprehensive experimental evaluation of our contributions in terms of efficiency, accuracy, and scalability.
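The paper's own algorithms run over structured overlays and are not reproduced here; as a rough illustration of how an aggregate such as Average can be computed with no central coordinator, the following is a minimal push-sum gossip sketch (node values, round count, and the fully connected peer choice are illustrative assumptions):

```python
import random

def push_sum(values, rounds=300):
    """Gossip-based (push-sum) estimate of the network-wide average.

    Each node keeps a pair (s, w). Every round it halves its pair,
    keeps one half, and sends the other half to a random peer. All
    ratios s/w converge to the true average of the initial values.
    """
    n = len(values)
    s = list(values)           # running sums
    w = [1.0] * n              # running weights
    for _ in range(rounds):
        ns, nw = [0.0] * n, [0.0] * n
        for i in range(n):
            half_s, half_w = s[i] / 2, w[i] / 2
            j = random.randrange(n)        # random peer (may be self)
            ns[i] += half_s; nw[i] += half_w
            ns[j] += half_s; nw[j] += half_w
        s, w = ns, nw
    return [s[i] / w[i] for i in range(n)]
```

Because the total "mass" (sum of all s and all w) is conserved each round, every node's local ratio drifts toward the same global average without any node ever seeing the full data.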

    Why audit?

    Decision making in surgery is based on contemporary hard data describing outcomes in a particular patient population. As a professional body with powers of self-regulation and peer review, we need to be cognisant of the expected norm of practice. This can only be derived from information that is shared among our colleagues both locally and abroad. We have a responsibility to contribute to this database by way of audit, in a rigorous and honest fashion, and to utilise it routinely in the management of our patients.

    A secure data outsourcing scheme based on Asmuth – Bloom secret sharing

    Data outsourcing is an emerging paradigm for data management in which a database is provided as a service by third-party service providers. One of the major benefits of offering database as a service is to provide organisations that are unable to purchase expensive hardware and software to host their databases with efficient data storage accessible online at low cost. Nevertheless, several issues of data confidentiality, integrity, availability, and efficient indexing of users' queries at the server side have to be addressed in the data outsourcing paradigm. Service providers have to guarantee that their clients' data are secured against internal (insider) and external attacks. This paper briefly analyses the existing indexing schemes in data outsourcing and highlights their advantages and disadvantages. It then proposes a secure data outsourcing scheme based on Asmuth–Bloom secret sharing that addresses data-outsourcing issues such as data confidentiality, availability, and order preservation for efficient indexing.
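The paper's scheme itself is not reproduced here, but the underlying Asmuth–Bloom construction can be sketched with toy parameters. It hides a secret inside a value y = s + a·m0 and hands out only residues of y modulo pairwise-coprime moduli; any t residues recover y via the Chinese Remainder Theorem, and y mod m0 yields the secret. The moduli below are arbitrary small primes chosen only to satisfy the Asmuth–Bloom condition:

```python
from math import prod
import random

def crt(residues, moduli):
    """Chinese Remainder Theorem: solve x = r_i (mod m_i) for all i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

# Toy parameters (illustrative only): threshold t = 2 of n = 3 shares.
# Asmuth-Bloom requires m1*m2 > m0*m3, i.e. 11*13 = 143 > 5*17 = 85.
m0, moduli, t = 5, [11, 13, 17], 2

def share(secret):
    assert 0 <= secret < m0
    bound = prod(sorted(moduli)[:t])          # y must stay below m1*m2
    a = random.randrange(0, (bound - secret) // m0)
    y = secret + a * m0                       # blinded secret
    return [(y % m, m) for m in moduli]

def reconstruct(shares):
    residues, mods = zip(*shares[:t])         # any t shares suffice
    return crt(residues, mods) % m0
```

Fewer than t shares leave y ranging over essentially all residues mod m0, which is what gives the scheme its confidentiality; production parameters would use much larger moduli.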

    The Institutional Repository route to Open Access: implications for its evolution

    Open access to peer reviewed journal articles is one of the key messages of the current global movement that is changing the paradigm of scholarly communication. Creating open access journals is one such route, and creating institutional repositories containing author-generated electronic text is a complementary alternative. In the UK, the FAIR (Focus on Access to Institutional Resources) programme of research is based on the vision of open access. Experiments in setting up an institutional repository for academic research output at the University of Southampton have emphasized that the institutional repository agenda is broader, and that academic needs may dictate a more expanded database model than the pioneering discipline-based e-prints archive known as 'arXiv'. The institution is represented by a broad range of publication types including, but not exclusively, peer reviewed journal articles, and the different disciplines have evolved different recording practices. Full text deposits may provide the opportunity for added-value elements, e.g. enhanced diagrams, additional data or presentations, if the database provides the capability. The repository may provide the building blocks for effective management of collaborative e-research. Academic institutions that impose research reporting in an institutional repository require full recording of publications, including those where obtaining full text is difficult or inappropriate. A practical route is, therefore, to develop an institutional repository which is 'hybrid': containing both records and full text where achievable. In this scenario, the technical and management issues, e.g. authentication and quality assurance of the metadata generation, may become more complex. However, the full text element can grow as the practice becomes more natural within the recording process and as copyright restrictions ease.
In the UK, several factors, including the Research Assessment Exercise and citation impact measures based on increasing open access, could also help encourage this change. The goal of providing open access to peer reviewed research items may, therefore, come about by a more circuitous but, in the end, more effective route. The 'hybrid' library will have evolved into the ideal digital library.

    DELIVERING OF DATA SERVICES BY EMPLOYING CLOUD BASED PLATFORM FOR CORPORATE NETWORKS

    To improve on traditional peer-to-peer systems, several works have proposed peer-to-peer database management systems, formed by combining modern database techniques with peer-to-peer systems. We present a cloud-based data sharing method for corporate networks, built by combining cloud computing with peer-to-peer techniques. The proposed system is a promising approach for commercial network applications and offers several significant features. It is deployed as a cloud service, hosted primarily on a cloud platform, and additionally uses a peer-to-peer technique for data retrieval among business associates.

    AMC Native WebRTC Client

    Traditional call-center and telecommunication hardware is being replaced by thin, browser-based, cloud-enabled web services, and industry standards for web-based communication protocols, such as WebRTC, are being established. AMC needed to address this new technology while maintaining a hybrid approach of server-based capabilities: taking advantage of the web-based communication channel while broadcasting events to the Contact Canvas Server. Contact Canvas Agent Palette is the editing platform of the AMC adapter for Salesforce.com, allowing agents to communicate with customers through the AMC adapter/softphone. Using Agent Palette, the task was to integrate video chat using WebRTC into the AMC toolbar. Two agents use a peer-to-peer connection to establish communication with one another, and the two connected agents can communicate through video chat that supports screen pop. The components provided and used were the AMC adapter for Salesforce.com, the Agent Palette, and the Salesforce.com Customer Relationship Management (CRM) database. The AMC adapter is an HTML softphone that can be used to voice-enable Salesforce.com, while Socket.io and Node.js were used to communicate with the server side. Eventually this video chat will advance to the point where communication can be established between agents and their customers.

    Secure Distributed Dynamic State Estimation in Wide-Area Smart Grids

    Smart grid is a large, complex network with a myriad of vulnerabilities, usually operated in adversarial settings and regulated based on estimated system states. In this study, we propose a novel, highly secure distributed dynamic state estimation mechanism for wide-area (multi-area) smart grids composed of geographically separated subregions, each supervised by a local control center. We first propose a distributed state estimator that, assuming regular system operation, achieves near-optimal performance based on local Kalman filters and the exchange of necessary information between local centers. To enhance security, we further propose to (i) protect the network database and the network communication channels against attacks and data manipulation via a blockchain (BC)-based system design, where the BC operates on the peer-to-peer network of local centers, (ii) locally detect measurement anomalies in real time to eliminate their effects on the state estimation process, and (iii) detect misbehaving (hacked or faulty) local centers in real time via a distributed trust management scheme over the network. We provide theoretical guarantees regarding the false alarm rates of the proposed detection schemes, where the false alarms can be easily controlled. Numerical studies illustrate that the proposed mechanism offers reliable state estimation under regular system operation, timely and accurate detection of anomalies, and good state recovery performance in case of anomalies.
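The paper's multi-area estimator and blockchain layer are not reproduced here; as a minimal sketch of the core idea of Kalman filtering with real-time anomaly rejection, the following scalar example gates each measurement by its innovation variance. All model parameters (A, Q, H, R, and the gate threshold) are illustrative assumptions:

```python
# Scalar Kalman filter with an innovation (residual) test: a measurement
# whose normalized innovation exceeds the gate is flagged as anomalous
# and excluded from the state update.

def kalman_step(x, P, z, A=1.0, Q=0.01, H=1.0, R=0.1, gate=9.0):
    # Predict the state and its variance.
    x_pred = A * x
    P_pred = A * P * A + Q
    # Innovation (measurement residual) and its variance.
    nu = z - H * x_pred
    S = H * P_pred * H + R
    if nu * nu / S > gate:            # roughly a 3-sigma gate
        return x_pred, P_pred, True   # anomaly: skip the update
    # Standard Kalman update.
    K = P_pred * H / S
    return x_pred + K * nu, (1 - K * H) * P_pred, False
```

Running this on a roughly constant state with one injected bad reading (e.g. measurements 1.02, 0.98, 5.0, 1.01) flags only the outlier and leaves the estimate near the true value, which mirrors, in miniature, the local anomaly detection described above.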

    Current State of JUMMP

    In computational biology there is a strong need to exchange quantitative models of biological processes in a standardized way. The BioModels Database has been available for several years for storing and retrieving peer-reviewed models, but at the moment there is no tool that brings a model from development in the lab directly to the peer-reviewed online resource.

The JUMMP (JUst a Model Management Platform) project aims to provide a generic model management platform for any standardized model file. Through a well-elaborated security layer, models can be developed and shared privately and later made available to the curation process and the broader community. Each change to a model is recorded in a version control system, making it easy to track changes during development.
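JUMMP's actual versioning backend is not specified here; the idea of recording every model change in a version-controlled history can be sketched as a content-addressed revision chain, where each revision hashes its content, author, and parent. All class and field names below are illustrative:

```python
import hashlib
import json

class ModelHistory:
    """Toy append-only revision history: each revision stores the model
    content, its author, and the id of its parent revision, so every
    change to the model remains traceable."""

    def __init__(self):
        self.revisions = []            # ordered list of revision dicts

    def commit(self, content, author):
        parent = self.revisions[-1]["id"] if self.revisions else None
        rev = {"content": content, "author": author, "parent": parent}
        # Hash the canonical JSON form to get a stable revision id.
        rev["id"] = hashlib.sha256(
            json.dumps(rev, sort_keys=True).encode()).hexdigest()
        self.revisions.append(rev)
        return rev["id"]

    def log(self):
        return [(r["id"][:8], r["author"]) for r in self.revisions]
```

Because each id covers the parent id, tampering with any earlier revision would invalidate every later one, which is the property that makes such a history trustworthy for curation.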

In this presentation the current state of development of the JUMMP project is discussed, as well as the future steps needed to bring JUMMP into a state in which it can serve as the infrastructure for the BioModels Database.