9 research outputs found
GRADUATION: a GDPR-based mutation methodology
Adopting the General Data Protection Regulation (GDPR) opens up business and research opportunities that call for appropriate solutions supporting the specification, processing, testing, and assessment of overall (personal) data management. This paper proposes GRADUATION (GdpR-bAseD mUtATION), a methodology for mutation analysis of test cases for data protection policies. The methodology provides generic mutation operators derived from the currently applicable EU data protection regulation. A preliminary implementation of the steps involved in GDPR-based mutant derivation is also described.
Hybrid Refining Approach of PrOnto Ontology
This paper presents a refinement of the PrOnto ontology using a validation test based on legal experts’ annotation of privacy policies combined with an Open Knowledge Extraction (OKE) algorithm. To ensure robustness of the results while preserving an interdisciplinary approach, the integration of legal and technical knowledge was carried out as follows. The set of privacy policies was first analysed by the legal experts to discover legal concepts and map the text into PrOnto. The mapping was then provided to computer scientists to perform the OKE analysis. Results were validated by the legal experts, who provided feedback and refinements (i.e. new classes and modules) of the ontology according to the MeLOn methodology. Three iterations were performed on a set of (development) policies, followed by a final test on a new set of privacy policies. The results show a 75.43% detection rate of concepts in the policy texts and an accuracy gain of roughly 33% on the test set, using the new refined version of PrOnto enriched with SKOS-XL lexicon terms and definitions.
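The evaluation idea, stripped to its essentials, is measuring what fraction of expert-annotated concepts a lexicon-based matcher finds in a policy text. The lexicon entries and policy below are invented for illustration; the paper works with PrOnto classes and a SKOS-XL lexicon.

```python
# Hedged sketch of lexicon-based concept detection and its recall metric.
# Concept names and surface terms here are made up, not PrOnto's.
lexicon = {
    "controller": ["data controller", "controller"],
    "consent": ["consent", "agree to"],
    "retention": ["retention period", "stored for"],
}

def detected_concepts(text, lexicon):
    """Return the concepts for which at least one lexicon term occurs."""
    text = text.lower()
    return {concept for concept, terms in lexicon.items()
            if any(term in text for term in terms)}

policy = "The data controller stores data only after you consent."
gold = {"controller", "consent", "retention"}   # expert annotation
found = detected_concepts(policy, lexicon)
recall = len(found & gold) / len(gold)
print(sorted(found))     # ['consent', 'controller']
print(round(recall, 2))  # 0.67
```

Enriching the lexicon (as the paper does with SKOS-XL labels) widens the set of surface terms per concept and thus raises the detection rate.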
Towards an Enforceable GDPR Specification
While Privacy by Design (PbD) is prescribed by modern privacy regulations
such as the EU's GDPR, achieving PbD in real software systems is a notoriously
difficult task. One emerging technique to realize PbD is Runtime enforcement
(RE), in which an enforcer, loaded with a specification of a system's privacy
requirements, observes the actions performed by the system and instructs it to
perform actions that will ensure compliance with these requirements at all
times. To be able to use RE techniques for PbD, privacy regulations first need
to be translated into an enforceable specification. In this paper, we report on
our ongoing work in formalizing the GDPR. We first present a set of
requirements and an iterative methodology for creating enforceable formal
specifications of legal provisions. Then, we report on a preliminary case study
in which we used our methodology to derive an enforceable specification of part
of the GDPR. Our case study suggests that our methodology can be effectively
used to develop accurate enforceable specifications.
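The runtime-enforcement loop described above can be sketched compactly. The predicate below is a toy stand-in for a formal specification of a single provision (consent before processing), not the paper's actual formalism or notation.

```python
# Minimal sketch of runtime enforcement (RE): an enforcer loaded with a
# specification observes system actions and blocks or redirects those
# that would violate it. The provision modelled here is illustrative.
def spec_allows(state, action):
    """Toy provision: personal data may be processed only after consent."""
    if action == "process_personal_data":
        return state.get("consent_given", False)
    return True

class Enforcer:
    def __init__(self, spec):
        self.spec = spec
        self.state = {}

    def observe(self, action):
        """Let compliant actions through; block violating ones and
        instruct the system which action must come first."""
        if not self.spec(self.state, action):
            return "blocked: obtain_consent first"
        if action == "obtain_consent":
            self.state["consent_given"] = True
        return f"allowed: {action}"

enf = Enforcer(spec_allows)
print(enf.observe("process_personal_data"))  # blocked: obtain_consent first
print(enf.observe("obtain_consent"))         # allowed: obtain_consent
print(enf.observe("process_personal_data"))  # allowed: process_personal_data
```

The hard part, which the paper addresses, is deriving the `spec` predicate faithfully from legal text rather than hand-coding it per provision.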
No More Trade-Offs. GPT and Fully Informative Privacy Policies
The paper reports the results of an experiment testing to what extent ChatGPT versions 3.5 and 4 are able to answer questions about privacy policies designed in the new format that we propose. In a world of human-only interpreters, there was a trade-off between the comprehensiveness and the comprehensibility of privacy policies, with the result that actual policies did not contain enough information for users to learn anything meaningful. Having shown that GPT performs relatively well with the new format, we provide experimental evidence supporting our policy suggestion, namely that the law should require fully comprehensive privacy policies, even if this means they become less concise.
A Data-Driven Analysis of Blockchain Systems' Public Online Communications on GDPR
After the European Union's new General Data Protection Regulation (GDPR) became applicable in May 2018, concerns emerged about the legal compliance of public blockchain systems with rights guaranteed by the GDPR, e.g., the "right to be forgotten". In order to better understand how the blockchain sector sees the challenges raised by the GDPR and how its communications could influence users, this paper reports our data-driven analysis of GDPR-related public online communications of blockchain developers and service providers. Our analysis covers 314 public blockchain systems and two different online communication channels: legal documents, including privacy policies, T&C (Terms and Conditions) documents and other similar legal documents published on systems’ official websites, and public tweets of their official Twitter accounts. Our analysis revealed that only a minority (86/314 ≈ 27.5%) of the investigated blockchain systems had covered the GDPR at least once using one or both communication channels. Among the 86 systems, only 27 (8.6%) had at least one legal document that actually discusses the GDPR for the corresponding blockchain system. We noticed a systematic lack of detail about why and how the GDPR compliance issue was addressed, and most systems made questionable statements about GDPR compliance. These results are surprising considering that the GDPR was enacted in 2016 and has been in effect since May 2018.
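A screening step of the kind such a study might use can be sketched as keyword matching over the collected documents, followed by the coverage ratio the abstract reports. The pattern and document names below are invented for illustration; the abstract does not describe the actual pipeline.

```python
# Sketch of keyword-based GDPR-mention screening over a document corpus
# and computation of a coverage ratio (illustrative data only).
import re

GDPR_PATTERN = re.compile(r"\b(GDPR|General Data Protection Regulation)\b", re.I)

docs = {
    "sys_a_privacy_policy": "We comply with the GDPR ...",
    "sys_b_terms": "Use of this service is at your own risk.",
    "sys_c_tweet": "Our roadmap for General Data Protection Regulation readiness",
}

mentions = {name for name, text in docs.items() if GDPR_PATTERN.search(text)}
coverage = len(mentions) / len(docs)
print(sorted(mentions))    # ['sys_a_privacy_policy', 'sys_c_tweet']
print(round(coverage, 3))  # 0.667
```

In the study's corpus the analogous ratio is 86/314 ≈ 27.5% of systems mentioning the GDPR in at least one channel.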
Data Quality Barriers for Transparency in Public Procurement
Governments need to be accountable and transparent in their public spending decisions in order to prevent losses through fraud and corruption and to build healthy and sustainable economies. Open data act as a major instrument in this respect by enabling public administrations, service providers, data journalists, transparency activists, and regular citizens to identify fraud or uncompetitive markets by connecting related, heterogeneous, and originally unconnected data sources. To this end, in this article, we present our experience in the case of Slovenia, where we successfully applied a number of anomaly detection techniques over a set of open disparate data sets integrated into a Knowledge Graph, including procurement, company, and spending data, through a linked data-based platform called TheyBuyForYou. We then report a set of guidelines for publishing high quality procurement data for better procurement analytics, since our experience has shown that there are significant shortcomings in the quality of the data being published. This article contributes to enhanced policy making by guiding public administrations at local, regional, and national levels on how to improve the way they publish and use procurement-related data; by developing technologies and solutions that buyers in the public and private sectors can use and adapt to become more transparent, make markets more competitive, and reduce waste and fraud; and by providing a Knowledge Graph, a data resource designed to facilitate integration across multiple data silos, showing how it adds context and domain knowledge to machine-learning-based procurement analytics.
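One simple anomaly signal of the kind used in procurement analytics can be sketched as flagging awards whose price deviates strongly from the median of comparable contracts. This is purely illustrative: the article's techniques operate over an integrated Knowledge Graph, and the threshold and data below are invented.

```python
# Illustrative sketch only: flag procurement awards priced far above the
# median for their category, a crude single-signal anomaly detector.
from statistics import median

def flag_price_outliers(awards, threshold=3.0):
    """Return ids of awards whose price exceeds threshold x the median."""
    med = median(a["price"] for a in awards)
    return [a["id"] for a in awards if a["price"] > threshold * med]

awards = [
    {"id": "T1", "price": 100_000},
    {"id": "T2", "price": 120_000},
    {"id": "T3", "price": 110_000},
    {"id": "T4", "price": 900_000},  # suspiciously high for this category
]
print(flag_price_outliers(awards))  # ['T4']
```

Signals like this are only as good as the published data, which is why the article's data-quality guidelines matter for analytics downstream.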
Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering
intelligent decision-making and a wide range of Artificial Intelligence (AI)
services across major corporations such as Google, Walmart, and AirBnb. KGs
complement Machine Learning (ML) algorithms by providing data context and
semantics, thereby enabling further inference and question-answering
capabilities. The integration of KGs with neural learning (e.g., Large
Language Models (LLMs)) is currently a topic of active research, commonly named
neuro-symbolic AI. Despite the numerous benefits that can be accomplished with
KG-based AI, its growing ubiquity within online services may result in the loss
of self-determination for citizens as a fundamental societal issue. The more we
rely on these technologies, which are often centralised, the less citizens will
be able to determine their own destinies. To counter this threat, AI
regulation, such as the European Union (EU) AI Act, is being proposed in
certain regions. The regulation sets what technologists need to do, leading to
questions concerning: How can the output of AI systems be trusted? What is
needed to ensure that the data fuelling and the inner workings of these
artefacts are transparent? How can AI be made accountable for its
decision-making? This paper conceptualises the foundational topics and research
pillars to support KG-based AI for self-determination. Drawing upon this
conceptual framework, challenges and opportunities for citizen
self-determination are illustrated and analysed in a real-world scenario. As a
result, we propose a research agenda aimed at accomplishing the recommended
objectives.
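The claim that KGs add context and enable inference for question answering can be made concrete with a toy triple store and a transitive containment rule. The triples and rule below are invented and do not correspond to any particular KG engine's API.

```python
# Toy sketch of KG-based question answering: a few triples plus a
# transitive "locatedIn" inference rule give answers a bare ML model
# would have no structured context for.
triples = {
    ("listing_42", "locatedIn", "Lisbon"),
    ("Lisbon", "locatedIn", "Portugal"),
}

def located_in(entity):
    """Answer 'where is X?' including transitive containment."""
    places, frontier = set(), {entity}
    while frontier:
        step = {o for (s, p, o) in triples
                if s in frontier and p == "locatedIn"}
        frontier = step - places
        places |= step
    return places

print(sorted(located_in("listing_42")))  # ['Lisbon', 'Portugal']
```

Because every answer is derived from explicit triples, the provenance of each inference step can be shown to the user, which is the hook for the trust and accountability questions the paper raises.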
Challenges in using the actor model in software development: a systematic literature review
The actor model is a model of distributed and concurrent computation in which small parts of a program communicate with one another asynchronously, and the functionality visible to the user emerges from the cooperation of many parts. Today's software must withstand enormous numbers of users, and to do so it must be able to scale its capacity up quickly. Smaller program components are easier to add on demand, so the actor model appears to answer this need. However, using the actor model can involve challenges, which this study aims to identify and present. The study is conducted as a systematic literature review of research on the actor model.
Data were collected from the selected studies and used to answer the research questions. The results list and categorise the software development problems to which the actor model was applied, as well as various challenges encountered when using the actor model and their solutions. The study identified challenges in using the actor model and created a new categorisation for them. An analysis of the root causes showed that a large share of the actor model's challenges stem from the use of asynchronous messaging, and that programmers must constantly be vigilant about their assumptions concerning message ordering. The proposed solutions were categorised according to the location of the additional code they require.
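The message-ordering pitfall that the review identifies as a root cause can be demonstrated in a few lines: an actor's mailbox may deliver messages out of send order, so code that assumes ordering breaks. One common mitigation, sketched here with invented names, is sequence numbers plus a reordering buffer on the receiving side.

```python
# Sketch of a receiving actor that tolerates out-of-order delivery by
# buffering messages until their sequence number comes up.
class OrderedReceiver:
    def __init__(self):
        self.expected = 0
        self.buffer = {}      # seq -> payload, held until its turn
        self.delivered = []

    def on_message(self, seq, payload):
        self.buffer[seq] = payload
        # Flush the longest consecutive run starting at `expected`.
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

recv = OrderedReceiver()
# Messages arrive out of send order, as an asynchronous transport allows.
for seq, payload in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    recv.on_message(seq, payload)

print(recv.delivered)  # ['a', 'b', 'c', 'd']
```

This illustrates the review's categorisation by code location: the fix lives entirely in the receiver, leaving senders unchanged.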
The Estonian Electronic ID Card and Its Security Challenges
For more than 18 years, the Estonian electronic identity card (ID card) has provided a secure electronic identity for Estonian residents. Public-key cryptography and the private keys stored on the card enable Estonian ID card holders to access e-services, give legally binding digital signatures and even cast an i-vote in national elections.
This work provides a comprehensive study on the Estonian ID card and its security challenges. We introduce the Estonian ID card and its ecosystem by describing the involved parties and processes, the core electronic functionality of the ID card, related technical and legal concepts, and the related issues. We describe the ID card smart card chip platforms used over the years and the identity document types that have been issued using these platforms. We present a detailed analysis of the asymmetric cryptography functionality provided by each ID card platform and present a description and security analysis of the ID card remote update solutions that have been provided for each ID card platform. As yet another contribution of this work, we present a systematic study of security incidents and similar issues the Estonian ID card has experienced over the years. We describe the technical nature of the issue, mitigation measures applied and the reflections on the media. In the course of this research, several previously unknown security issues were discovered and reported to the involved parties.
The research has been based on publicly available documentation, a collected database of ID card certificates in circulation, information reflected in the media, information from the involved parties, and our own analysis and experiments performed in the field.
https://www.ester.ee/record=b541416