11 research outputs found

    Privacy and data protection in Australia : a critical overview (extended abstract)

    This research is funded by the Data to Decisions Cooperative Research Centre (D2D CRC), Project C, with participation of the Spanish Project DER2016-78108-P. This extended abstract describes the regulation of privacy under Australian laws and policies. In the D2D CRC programme, we will develop a strategy to model legal requirements in a situation that is far from clear: law enforcement agencies face massive flows of data that must be acquired, stored, assessed and used. In the final paper we will propose a linked data regulatory model to organise and set out the legal and policy requirements for modelling privacy in this unstructured context.
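    The extended abstract does not specify the model itself, so as a purely illustrative sketch, the snippet below shows how a single legal requirement might be expressed as linked data with rdflib; the reg vocabulary and all property names are assumptions, not the authors' model.

```python
from rdflib import Graph, Namespace, Literal

# Hypothetical vocabulary; the abstract does not fix concrete terms.
REG = Namespace("http://example.org/regulatory#")

g = Graph()
g.bind("reg", REG)

# One privacy requirement from the Australian Privacy Act, expressed as
# linked data so it can later be related to agency policies and data flows.
req = REG.APP11  # Australian Privacy Principle 11: security of personal information
g.add((req, REG.source, Literal("Privacy Act 1988 (Cth), Schedule 1, APP 11")))
g.add((req, REG.obligation, Literal("Take reasonable steps to protect personal information")))
g.add((req, REG.appliesTo, REG.LawEnforcementAgency))

print(g.serialize(format="turtle"))
```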

    Legal compliance through design : preliminary results of a literature survey

    Proceedings of the 2nd Workshop on Technologies for Regulatory Compliance (TERECOM 2018), Groningen, The Netherlands, December 12, 2018. In this paper we present the preliminary results of a literature survey conducted in the context of a larger research project on legal compliance by design (LCbD) and legal compliance through design (LCtD). Even though a rich set of approaches and frameworks is available, our analysis shows that there is little focus on legal compliance in general, and on LCbD and LCtD in particular. The technical literature on compliance has concentrated on specific aspects of the law, mainly those related to corporate and administrative management (including that of law firms and government). Other legal dimensions, such as public law, case law, constitutional law, and virtue ethics, have been set aside.

    Linked democracy : foundations, tools, and applications

    Chapter 1: Introduction to Linked Data. This chapter presents Linked Data, a new form of distributed data on the web which is especially suitable to be manipulated by machines and to share knowledge. By adopting the linked data publication paradigm, anybody can publish data on the web, relate it to data resources published by others, and run artificial intelligence algorithms in a smooth manner. Open linked data resources may democratize future access to knowledge for the mass of internet users, either directly or mediated through algorithms. Governments have enthusiastically adopted these ideas, which is in harmony with the broader open data movement.
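    To make the publication paradigm concrete, here is a minimal rdflib sketch (not taken from the book) of publishing a local resource and relating it to a resource published by others, in this case DBpedia; the example.org namespace and the chosen resource are illustrative assumptions.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/data/")  # assumed local namespace

g = Graph()
g.bind("ex", EX)

# Publish a local resource ...
city = EX.Barcelona
g.add((city, RDF.type, EX.City))
g.add((city, RDFS.label, Literal("Barcelona")))

# ... and relate it to a resource published by someone else (DBpedia),
# which is the essence of the linked data paradigm.
g.add((city, OWL.sameAs, URIRef("http://dbpedia.org/resource/Barcelona")))

print(g.serialize(format="turtle"))
```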

    Linked Democracy

    This open access book shows the factors linking information flow, social intelligence, rights management and modelling with epistemic democracy, offering licensed linked data along with information about the rights involved. This model of democracy for the web of data brings new challenges for the social organisation of knowledge, collective innovation, and the coordination of actions. Licensed linked data, licensed linguistic linked data, rights expression languages, semantic web regulatory models, electronic institutions and artificial socio-cognitive systems are examples of regulatory and institutional design (regulations by design). The web has been massively populated with both data and services, and semantically structured data, the linked data cloud, facilitates and fosters human-machine interaction. Linked data aims to create ecosystems that make it possible to browse, discover, exploit and reuse data sets for applications. Rights Expression Languages semi-automatically regulate the use and reuse of content.
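    As an illustration of a rights expression language at work (not an example from the book), the following sketch builds a minimal ODRL 2.2 policy in JSON-LD permitting reuse of a data set subject to attribution and prohibiting resale; all URIs are placeholders.

```python
import json

# A minimal ODRL 2.2 policy (the W3C rights expression language) permitting
# reuse of a data set with an attribution duty and prohibiting resale.
# All URIs are illustrative placeholders, not identifiers from the book.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Set",
    "uid": "http://example.org/policy/42",
    "permission": [{
        "target": "http://example.org/dataset/linked-democracy",
        "action": "use",
        "duty": [{"action": "attribute"}],
    }],
    "prohibition": [{
        "target": "http://example.org/dataset/linked-democracy",
        "action": "sell",
    }],
}

print(json.dumps(policy, indent=2))
```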

    JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights

    In this paper we present the web platform JURI SAYS, which automatically predicts decisions of the European Court of Human Rights based on communicated cases, which are published by the court early in the proceedings and are often available many years before the final decision is made. Our system therefore predicts future judgements of the court. The platform is available at jurisays.com and shows the predictions compared to the actual decisions of the court. It is automatically updated every month with predictions for new cases. Additionally, the system highlights the sentences and paragraphs that are most important for the prediction (i.e. violation vs. no violation of human rights).
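    The abstract does not describe the model internals, so the sketch below only illustrates the general idea with a toy bag-of-words classifier: predict violation vs. no violation from case text and score sentences by the weight of their terms. The data, features and highlighting heuristic are all assumptions, not the JURI SAYS pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for communicated cases (the real system uses far richer data).
cases = [
    "applicant detained without judicial review for years",
    "dispute about a tax assessment decided at the domestic level",
]
labels = [1, 0]  # 1 = violation, 0 = no violation

vec = TfidfVectorizer()
X = vec.fit_transform(cases)
clf = LogisticRegression().fit(X, labels)

# Predict a future judgement from a newly communicated case.
new_case = "the applicant was held in custody for years without judicial review"
print(clf.predict(vec.transform([new_case])))  # e.g. [1] -> violation predicted

# Crude importance highlighting: score each sentence by summing the learned
# weights of its terms, mirroring the idea of marking decisive passages.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
for sent in new_case.split("."):
    score = sum(weights.get(w, 0.0) for w in sent.lower().split())
    print(round(score, 3), sent.strip())
```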

    Methods and tools for analysis and management of risks and regulatory compliance in the healthcare sector: the Hospital at Home – HaH

    Changing or creating a new organization means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural risks and legal risks. The former are related to the risks of errors that may occur during processes, while the latter are related to the compliance of processes with regulations. Therefore, managing the risks implies proposing changes to the processes that allow the desired result: an optimized process. In order to manage a company and optimize it in the best possible way, the organizational aspect, risk management and legal compliance should not only be taken into account but analyzed simultaneously, with the aim of finding the right balance that satisfies them all. This is exactly the aim of this thesis: to provide methods and tools to balance these three characteristics, and ICT support is used to enable this type of optimization. This work is not intended to be a computer science or law thesis but an interdisciplinary one. Most of the work done so far is vertical, within a specific domain. The particularity and aim of this thesis is not so much to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects that are normally analyzed separately yet have an impact on and influence each other. To carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved, and the combination and collaboration of different experts in the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, a particular use case was chosen to show its application. The case study considered is a new type of healthcare service that allows patients with an acute disease to be hospitalized at home. This provides the possibility to perform experiments using a real hospital database.
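    As a toy illustration of the balancing idea (not the thesis's actual method or tooling), the snippet below scores alternative process designs on the three dimensions discussed and selects the best weighted trade-off; all variant names, scores and weights are invented.

```python
# Illustrative sketch: compare process variants on the three dimensions the
# thesis balances and pick the variant with the best weighted trade-off.
variants = {
    "current process":   {"organisation": 0.9, "procedural_risk": 0.4, "legal_compliance": 0.5},
    "optimised process": {"organisation": 0.7, "procedural_risk": 0.8, "legal_compliance": 0.9},
}
# Assumed weights; in practice these would come from experts in each field.
weights = {"organisation": 0.3, "procedural_risk": 0.35, "legal_compliance": 0.35}

def score(v):
    """Weighted sum over the three dimensions (higher is better)."""
    return sum(weights[k] * v[k] for k in weights)

best = max(variants, key=lambda name: score(variants[name]))
print(best, round(score(variants[best]), 2))
```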

    Artificial Intelligence-enabled Automation for Compliance Checking against GDPR

    Requirements engineering (RE) is concerned with eliciting legal requirements from applicable regulations to enable the development of legally compliant software. Current software systems rely heavily on data, some of which can be confidential, personal, or sensitive. To address the growing concerns about data protection and privacy, the General Data Protection Regulation (GDPR) has been introduced in the European Union (EU). Organizations, whether based in the EU or not, must comply with GDPR as long as they collect or process personal data of EU residents. Breaching GDPR can incur large fines, reaching up to billions of euros. Privacy policies (PPs) and data processing agreements (DPAs) are documents regulated by GDPR to ensure, among other things, the secure collection and processing of personal data. Such regulated documents can be used to elicit legal requirements that are in line with the organizations' data protection policies. As a prerequisite for eliciting a complete set of legal requirements, however, these documents must be compliant with GDPR. Checking the compliance of regulated documents entirely manually is a laborious and error-prone task. As we elaborate below, this dissertation investigates utilizing artificial intelligence (AI) technologies to provide automated support for compliance checking against GDPR.
    • AI-enabled Automation for Compliance Checking of PPs: PPs are technical documents stating the multiple privacy-related requirements that a system should satisfy in order to help individuals make informed decisions about sharing their personal data. We devise an automated solution that leverages natural language processing (NLP) and machine learning (ML), two sub-fields of AI, for checking the compliance of PPs against the applicable provisions in GDPR. Specifically, we create a comprehensive conceptual model capturing all information types pertinent to PPs, and we further define a set of compliance criteria for the automated compliance checking of PPs.
    • NLP-based Automation for Compliance Checking of DPAs: DPAs are legally binding agreements between different organizations involved in the collection and processing of personal data, intended to ensure that personal data remains protected. Using NLP semantic analysis technologies, we develop an automated solution that checks the compliance of DPAs against GDPR at the phrasal level. Our solution provides not only a compliance assessment but also detailed recommendations on avoiding GDPR violations.
    • ML-enabled Automation for Compliance Checking of DPAs: To understand how different representations of GDPR requirements and different enabling technologies fare against one another, we develop an automated solution that utilizes a combination of conceptual modeling and ML. We further empirically compare the resulting solution with our previously proposed solution, which uses natural language to represent GDPR requirements and leverages rules alongside NLP semantic analysis for the automated support.
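    As a rough, hypothetical illustration of automated compliance checking (the dissertation's solutions use conceptual models, ML and NLP semantic analysis rather than keyword patterns), the sketch below tests a DPA text against a few criteria derived from GDPR Article 28(3):

```python
import re

# Heavily simplified stand-in for a compliance-checking pipeline: each
# GDPR-derived criterion is a pattern that a compliant DPA should satisfy.
# Criteria names track real Art. 28(3) items; the patterns are invented.
criteria = {
    "Art. 28(3)(a) documented instructions":
        r"process(es)? personal data only on documented instructions",
    "Art. 28(3)(b) confidentiality":
        r"(confidentiality|committed themselves to confidentiality)",
    "Art. 28(3)(c) security measures":
        r"(technical and organisational measures|article 32)",
}

def check_dpa(text):
    """Return a per-criterion verdict for the given DPA text."""
    text = text.lower()
    return {
        name: "ok" if re.search(pattern, text) else "possible violation"
        for name, pattern in criteria.items()
    }

dpa = "The processor shall process personal data only on documented instructions..."
for criterion, status in check_dpa(dpa).items():
    print(f"{criterion}: {status}")
```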

    Machine learning for managing structured and semi-structured data

    Get PDF
    As the digitalization of the private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g. whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance; these can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs. Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs. In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how to reduce the number of required seed alignments while maintaining performance through novel active learning techniques. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for entities unknown at training time by exploiting additional metadata on individual statements, often available in modern graphs. Supported by results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g. the interaction function or the loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields such as the life sciences. Finally, we propose a novel metric for evaluating ranking results, as used for both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de facto standard evaluation protocols for both tasks.
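    To make the link prediction task concrete, here is a small self-contained sketch (not the thesis's code or its library) that scores candidate triples with the DistMult interaction function and computes the rank of the true tail entity, the quantity underlying metrics such as mean rank and hits@k:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 100, 5, 16

# Randomly initialised embeddings stand in for trained ones.
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def distmult(h, r, t):
    """DistMult interaction: sum_i e_h[i] * w_r[i] * e_t[i]."""
    return float(np.sum(E[h] * R[r] * E[t]))

# Link prediction: rank all candidate tails for the query (head, relation, ?).
head, rel, true_tail = 3, 1, 42
scores = np.array([distmult(head, rel, t) for t in range(n_entities)])
rank = int((scores > scores[true_tail]).sum()) + 1  # 1-based rank of the true tail
print("rank of true tail:", rank)

# Ranks like this feed the usual metrics (mean rank, hits@k) and, per the
# abstract, the thesis's metric that adjusts for the number of candidates.
```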