
    On-line monitoring for operational control of water distribution networks

    This work concerns the concept of on-line monitoring and control for water distribution networks. The problem is simple to state. It is to produce a robust scheme that can continuously provide reliable information about the state of a water network in real time and over extended periods with the minimum of operator interaction. This thesis begins by proposing a relational database schema to store 'asset data' for a water distribution network and asserts that asset data should be used as a basis for network modelling. It presents a topology determination algorithm and a demand allocation algorithm so that a mathematical model can be maintained on-line, with operator intervention only necessary to record the change of state of non-telemetered plant items such as switch valves. In order to provide a reliable on-line model of a distribution system, an investigation has been carried out into the methods available for modelling water networks and, in particular, the inherent assumptions in these practices. As a result, new methods have been produced for network element combination and for demand allocation. These methods both support the database approach and enhance the robustness of the system by increasing the range of conditions for which the resulting model is applicable. For operational control, a new technique for state estimation is proposed which combines the advantages of weighted least squares estimation with those of weighted least absolute values estimation. The proposed method is tolerant to transducer noise and to the presence of large measurement outliers. However, the method is not limited in its application to water networks and could be applied to a wide range of measurement processing problems. Lastly, a new topology-based method for processing suspect data is proposed which can determine the likely causes using identifying templates. Thus a new approach to water network monitoring is proposed via an overall framework into which the various tasks of on-line operational control can be integrated. The exercise has resulted in the production of a core software package which could realistically be used in a control room to facilitate reliable operational control of water distribution systems.
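
    The abstract does not give the estimator's exact formulation; one common way to blend weighted least squares behaviour for small residuals with weighted least absolute values behaviour for outliers is a Huber-type objective. The following is only an illustrative sketch: the threshold gamma, the weights sigma_i and the measurement functions h_i are assumptions, not the thesis' own definitions.

        \min_x \; J(x) = \sum_i \rho\!\left(\frac{z_i - h_i(x)}{\sigma_i}\right),
        \qquad
        \rho(u) =
        \begin{cases}
          \tfrac{1}{2}\,u^2                    & \text{if } |u| \le \gamma \quad \text{(quadratic, WLS-like)} \\
          \gamma\,|u| - \tfrac{1}{2}\,\gamma^2 & \text{if } |u| > \gamma   \quad \text{(linear, WLAV-like)}
        \end{cases}

    Here z_i are the telemetered measurements, h_i(x) the network model equations evaluated at the state x, sigma_i the measurement weights, and gamma the residual level at which the cost switches from quadratic to linear growth, which is what limits the influence of large measurement outliers.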

    Verifying and Enforcing Application Constraints in Antidote SQL

    Geo-replicated storage systems are currently a fundamental piece in the development of large-scale applications where users are distributed across the world. To meet the high requirements regarding latency and availability of these applications, these database systems are forced to use weak consistency mechanisms. However, under these consistency models, there is no guarantee that the invariants are preserved, which can jeopardise the correctness of applications. The most obvious alternative to solve this problem would be to use strong consistency, but this would place a large burden on the system. Since neither of these options is viable, many systems have been developed to preserve the invariants of the applications without sacrificing low latency and high availability. These systems, based on the analysis of operations, make it possible to strengthen the guarantees of weak consistency by introducing stronger consistency only for the operations that are potentially dangerous to the invariants. Antidote SQL is a database system that, by combining strong with weak consistency mechanisms, attempts to guarantee the preservation of invariants at the data level. In this way, and after defining the concurrency semantics for the application, any operation can be performed without coordination and without the risk of violating the invariants. However, this approach has some limitations, namely the fact that it is not trivial for developers to define appropriate concurrency semantics. In this document, we propose a methodology for the verification and validation of defined properties, such as invariants, for applications using Antidote SQL. The proposed methodology uses VeriFx, a high-level programming language with automatic verification features, and provides guidelines for programmers who wish to implement and verify their own systems and specifications using this tool.
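
    To make concrete the kind of invariant that weak consistency can silently break, here is a minimal, self-contained sketch. It does not use Antidote SQL's or VeriFx's actual APIs; the Replica class, the non-negative stock example and the merge step are illustrative assumptions only.

        # Illustrative sketch: a non-negative stock counter replicated without coordination.
        # Each replica's local check passes, yet the merged state violates the invariant.

        class Replica:
            """A toy replica holding a counter with a non-negativity invariant."""
            def __init__(self, value):
                self.value = value
                self.pending = []          # operations applied locally, not yet merged

            def decrement(self, amount):
                # The invariant looks safe from this replica's local view.
                assert self.value - amount >= 0, "local invariant check failed"
                self.value -= amount
                self.pending.append(amount)

        def merge(initial, replicas):
            """Apply every replica's pending operations to the shared state, with no coordination."""
            return initial - sum(a for r in replicas for a in r.pending)

        if __name__ == "__main__":
            # Two replicas each see stock = 1 and concurrently sell the last item.
            r1, r2 = Replica(1), Replica(1)
            r1.decrement(1)
            r2.decrement(1)
            print("merged stock:", merge(1, [r1, r2]))   # -1: the global invariant is violated

    Declaring the decrement as an operation that requires coordination, which is the kind of concurrency semantics Antidote SQL lets the developer choose, would force the two sales to serialise, so one of them would be rejected and the invariant preserved; verifying that the chosen semantics are in fact sufficient is the role of the proposed VeriFx-based methodology.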

    Trustworthy repositories: Audit and certification (TRAC) Cline Library internal audit, spring 2014

    Audit and Certification of Trustworthy Digital Repositories (TRAC) is a recommended practice developed by the Consultative Committee for Space Data Systems. The TRAC international standard (ISO 16363:2012) provides institutions with guidelines for performing internal audits to evaluate the trustworthiness of digital repositories, and creates a structure to support external certification of repositories. TRAC establishes criteria, evidence, best practices and controls that digital repositories can use to assess their activities in the areas of organizational infrastructure, digital object management, and technical infrastructure and risk management. The Cline Library at Northern Arizona University has undertaken an internal audit based on TRAC in order to evaluate the policies, procedures and workflows of the existing digital archives and to prepare for the development and implementation of the proposed institutional repository. The following document provides an overview of the results and recommendations produced by this internal audit

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government and industry contributed their innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles, including an editorial that explains the current challenges, innovative solutions and real-world experiences that include critical infrastructure and 15 original papers that present state-of-the-art innovative solutions to attacks on critical systems

    Evolving a secure grid-enabled, distributed data warehouse : a standards-based perspective

    As digital data collection has increased in scale and number, it has become an important type of resource serving a wide community of researchers. Cross-institutional data sharing and collaboration offer a suitable approach to support research institutions that lack data and the related IT infrastructure. Grid computing has become a widely adopted approach to enable cross-institutional resource sharing and collaboration. It integrates a distributed and heterogeneous collection of locally managed users and resources. This project proposes a distributed data warehouse system which uses Grid technology to enable data access, integration and collaborative operations across multiple distributed institutions in the context of HIV/AIDS research. This study is based on wider research into an OGSA-based Grid services architecture, comprising a data-analysis system which utilizes a data warehouse, data marts, and a near-line operational database hosted by distributed institutions. Within this framework, specific patterns for collaboration, interoperability, resource virtualization and security are included. The heterogeneous and dynamic nature of the Grid environment introduces a number of security challenges. This study also addresses a set of particular security aspects, including PKI-based authentication, single sign-on, dynamic delegation, and attribute-based authorization. These mechanisms, as supported by the Globus Toolkit’s Grid Security Infrastructure, are used to enable interoperability and establish trust relationships between the various security mechanisms and policies of different institutions, to manage credentials, and to ensure secure interactions.
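
    The PKI, single sign-on and delegation mechanisms named above are those of the Globus Toolkit, but the attribute-based authorization idea can be illustrated independently of it. The sketch below is a hypothetical policy check, not the project's actual policy or the Globus API; the role names, data classifications and actions are assumptions.

        # Hypothetical attribute-based authorization sketch: a policy maps the attributes
        # asserted for an authenticated identity to the actions it may perform.

        POLICY = {
            # (role, data_classification) -> permitted actions
            ("researcher",    "anonymised"):   {"query", "export"},
            ("researcher",    "identifiable"): {"query"},
            ("administrator", "anonymised"):   {"query", "export", "load"},
            ("administrator", "identifiable"): {"query", "export", "load"},
        }

        def authorize(attributes, action, data_classification):
            """Grant the action only if the subject's role permits it for this data class."""
            allowed = POLICY.get((attributes.get("role"), data_classification), set())
            return action in allowed

        if __name__ == "__main__":
            subject = {"subject_dn": "/O=Grid/CN=Jane Doe", "role": "researcher"}
            print(authorize(subject, "export", "anonymised"))     # True
            print(authorize(subject, "export", "identifiable"))   # False

    In the Grid setting the role attribute would be asserted by the subject's home institution and trusted through the PKI-based trust relationships described above, rather than hard-coded as it is in this sketch.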

    Utilizing AI/ML methods for measuring data quality

    High-quality data is crucial for trusted data-based decisions. A considerable part of current data quality measuring approaches is associated with expensive, expert and time-consuming work that includes manual effort to achieve adequate results. Furthermore, these approaches are prone to error and do not take full advantage of the potential of AI. A possible solution is to explore state-of-the-art ML-based methods that use the potential of AI to overcome these issues. A significant part of the thesis deals with data quality theory, which provides a comprehensive insight into the field of data quality. Four state-of-the-art ML-based methods were identified in the existing literature, and one novel method based on autoencoders (AE) was proposed. Experiments were conducted with the AE method and with association rule mining supported by natural language processing. The proposed AE-based method proved able to detect potential data quality defects in real-world datasets. The association rule mining approach was able to extract business rules for a given business question, but required significant data preprocessing effort. Alternative non-AI methods were also analyzed, but they rely on expert and domain knowledge.
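
    The abstract does not specify the autoencoder architecture, so the following is only a minimal sketch of the general technique it relies on: rows that the autoencoder reconstructs poorly are flagged as potential data quality defects. The layer sizes, training settings and the percentile threshold are illustrative assumptions.

        # Minimal autoencoder-based data-quality screening sketch (PyTorch).
        import torch
        import torch.nn as nn

        class Autoencoder(nn.Module):
            def __init__(self, n_features, n_hidden=8):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
                self.decoder = nn.Linear(n_hidden, n_features)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def flag_suspect_rows(data, epochs=200, quantile=0.99):
            """Train on the (mostly clean) data, then flag rows with unusually high reconstruction error."""
            model = Autoencoder(data.shape[1])
            optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                optimizer.zero_grad()
                loss = loss_fn(model(data), data)
                loss.backward()
                optimizer.step()
            with torch.no_grad():
                errors = ((model(data) - data) ** 2).mean(dim=1)
            threshold = torch.quantile(errors, quantile)
            return (errors > threshold).nonzero(as_tuple=True)[0]

        if __name__ == "__main__":
            torch.manual_seed(0)
            data = torch.randn(500, 10)
            data[:5] += 10.0   # inject a few gross errors; these rows tend to rank among the highest errors
            print("suspect rows:", flag_suspect_rows(data).tolist())

    The reported rows are candidates for inspection rather than confirmed defects, which matches the abstract's framing of the method as detecting potential data quality problems.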

    A semantic and agent-based approach to support information retrieval, interoperability and multi-lateral viewpoints for heterogeneous environmental databases

    Data stored in individual autonomous databases often needs to be combined and interrelated. For example, in the Inland Water (IW) environment monitoring domain, the spatial and temporal variation of measurements of different water quality indicators stored in different databases are of interest. Data from multiple data sources is more complex to combine when there is a lack of metadata in a computational form and when the syntax and semantics of the stored data models are heterogeneous. The main types of information retrieval (IR) requirements are query transparency and data harmonisation for data interoperability, and support for multiple user views. A combined Semantic Web-based and agent-based distributed system framework has been developed to support the above IR requirements. It has been implemented using the Jena ontology and JADE agent toolkits. The semantic part supports the interoperability of autonomous data sources by merging their intensional data, using a Global-As-View (GAV) approach, into a global semantic model, represented in DAML+OIL and in OWL. This is used to mediate between different local database views. The agent part provides the semantic services to import, align and parse semantic metadata instances, to support data mediation and to reason about data mappings during alignment. The framework has been applied to support information retrieval, interoperability and multi-lateral viewpoints for four European environmental agency databases. An extended GAV approach has been developed and applied to handle queries that can be reformulated over multiple user views of the stored data. This allows users to retrieve data in a conceptualisation that is better suited to them, rather than having to understand the entire detailed global view conceptualisation. User viewpoints are derived from the global ontology or from existing viewpoints of it. This has the advantage that it reduces the number of potential conceptualisations and their associated mappings to a more computationally manageable level. Whereas an ad hoc framework based upon a conventional distributed programming language and a rule framework could be used to support user views and adaptation to user views, a more formal framework has the benefit that it can support reasoning about consistency, equivalence, containment and conflict resolution when traversing data models. A preliminary formulation of the formal model has been undertaken, based upon extending a Datalog-type algebra with hierarchical, attribute and instance value operators. These operators can be applied to support compositional mapping and consistency checking of data views. The multiple viewpoint system was implemented as a Java-based application consisting of two sub-systems, one for viewpoint adaptation and management, the other for query processing and query result adjustment.
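
    In the GAV approach described above, each global relation is defined as a view over the local sources, and a query over the global schema is answered by unfolding those view definitions into source-level queries. The sketch below illustrates only this unfolding step; the relation names, source names and column mappings are hypothetical, not those of the four agency databases.

        # Hypothetical Global-As-View (GAV) unfolding sketch.
        # GAV mapping: global relation -> list of (source table, global-to-source column map)
        GAV_MAPPING = {
            "WaterQualityMeasurement": [
                ("agency_a.samples",  {"site": "station_id", "determinand": "param",     "value": "reading"}),
                ("agency_b.readings", {"site": "loc_code",   "determinand": "substance", "value": "val"}),
            ],
        }

        def unfold(global_relation, wanted_columns):
            """Rewrite a projection over a global relation into one query per contributing source."""
            queries = []
            for source, column_map in GAV_MAPPING[global_relation]:
                source_columns = ", ".join(column_map[c] for c in wanted_columns)
                queries.append(f"SELECT {source_columns} FROM {source}")
            return queries

        if __name__ == "__main__":
            for q in unfold("WaterQualityMeasurement", ["site", "value"]):
                print(q)
            # SELECT station_id, reading FROM agency_a.samples
            # SELECT loc_code, val FROM agency_b.readings

    The extended GAV approach in the thesis goes further than this sketch by also reformulating queries posed against derived user viewpoints, but the underlying mechanism is still the unfolding of view definitions over the sources.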

    Breakdown of category-specific word representations in a brain-constrained neurocomputational model of semantic dementia

    The neurobiological nature of semantic knowledge, i.e., the encoding and storage of conceptual information in the human brain, remains a poorly understood and hotly debated subject. Clinical data on semantic deficits and neuroimaging evidence from healthy individuals have suggested multiple cortical regions to be involved in the processing of meaning. These include semantic hubs (most notably, anterior temporal lobe, ATL) that take part in semantic processing in general as well as sensorimotor areas that process specific aspects/categories according to their modality. Biologically inspired neurocomputational models can help elucidate the exact roles of these regions in the functioning of the semantic system and, importantly, in its breakdown in neurological deficits. We used a neuroanatomically constrained computational model of frontotemporal cortices implicated in word acquisition and processing, and adapted it to simulate and explain the effects of semantic dementia (SD) on word processing abilities. SD is a devastating, yet insufficiently understood progressive neurodegenerative disease, characterised by semantic knowledge deterioration that is hypothesised to be specifically related to neural damage in the ATL. The behaviour of our brain-based model is in full accordance with clinical data—namely, word comprehension performance decreases as SD lesions in ATL progress, whereas word repetition abilities remain less affected. Furthermore, our model makes predictions about lesion- and category-specific effects of SD: our simulation results indicate that word processing should be more impaired for object- than for action-related words, and that degradation of white matter should produce more severe consequences than the same proportion of grey matter decay. In sum, the present results provide a neuromechanistic explanatory account of cortical-level language impairments observed during the onset and progress of semantic dementia

    Knowledge elicitation, semantics and inference
