
    Enhancing Confidentiality and Privacy Preservation in e-Health for Enhanced Security

    Electronic health (e-health) systems are increasingly widely used; they have improved healthcare services significantly but have raised questions about the privacy and security of sensitive medical data. This research proposes a novel strategy to overcome these difficulties and strengthen the security of e-health systems, while maintaining the privacy and confidentiality of patient data, by utilising machine learning techniques. The comprehensive framework we propose strengthens the security layers of e-health systems by incorporating state-of-the-art machine learning algorithms. The suggested framework includes three main elements: data encryption, access control, and anomaly detection. First, patient data is secured using modern encryption technologies to prevent unauthorised access during transmission and storage. Second, access control mechanisms are strengthened using machine learning models that examine user behaviour patterns, ensuring that only authorised staff can access sensitive medical records. The most innovative feature of this research is its machine learning-based anomaly detection. By training models on past e-health data, the system can identify deviations from typical data access and usage patterns, quickly spotting potential security breaches or unauthorised activity. This proactive strategy improves the system's capacity to address new threats effectively. Extensive experiments were carried out on a broad dataset of real-world e-health scenarios to verify the efficacy of the suggested approach. The findings showed a marked improvement in the protection of confidentiality and privacy, along with a considerable decline in security breaches and unauthorised access events.
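    The abstract does not specify which models detect anomalous access. A minimal sketch of the idea, using a simple per-feature statistical profile as a stand-in for the paper's machine-learning detector (the session features and threshold are assumptions chosen for illustration):

    ```python
    # Hedged sketch: flag e-health record accesses that deviate from a learned
    # profile of typical user behaviour. A z-score test stands in here for the
    # unspecified machine-learning anomaly detector described in the abstract.
    from statistics import mean, stdev

    def build_profile(sessions):
        """Learn a per-feature (mean, stdev) profile from historical sessions."""
        cols = list(zip(*sessions))
        return [(mean(c), stdev(c)) for c in cols]

    def is_anomalous(profile, session, threshold=3.0):
        """Flag a session if any feature lies > threshold std. devs. from its mean."""
        return any(abs(x - m) / s > threshold for x, (m, s) in zip(session, profile))

    # Assumed historical behaviour: (hour of access, records viewed per session).
    history = [(9, 4), (10, 6), (11, 5), (14, 7), (15, 3), (16, 5), (10, 4), (13, 6)]
    profile = build_profile(history)

    print(is_anomalous(profile, (10, 5)))   # False: an ordinary daytime session
    print(is_anomalous(profile, (3, 500)))  # True: a 3 a.m. bulk access is flagged
    ```

    A production detector would use richer features (resource type, role, location) and a trained model rather than a fixed threshold, but the workflow of profiling past access and scoring new sessions is the same.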

    Practices, policies, and problems in the management of learning data: A survey of libraries’ use of digital learning objects and the data they create

    This study analyzed libraries’ management of the data generated by library digital learning objects (DLOs) such as forms, surveys, quizzes, and tutorials. A substantial proportion of respondents reported having a policy relevant to learning data, typically a campus-level policy, but most did not. Other problems included a lack of access to library learning data, concerns about student privacy, inadequate granularity or standardization, and a lack of knowledge about colleagues’ practices. We propose more dialogue on learning data within libraries, between libraries and administrators, and across the library profession.

    Algorithms that Remember: Model Inversion Attacks and Data Protection Law

    Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around 'model inversion' and 'membership inference' attacks, which indicates that the process of turning training data into machine learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
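    The core intuition behind membership inference can be sketched in a few lines: an overfit model behaves measurably differently on records it memorised during training. The toy nearest-neighbour "model" and confidence threshold below are assumptions for illustration, not the attacks surveyed in the paper:

    ```python
    # Hedged sketch of membership inference: an attacker guesses whether a record
    # was in the training set by observing the model's suspiciously high confidence
    # on memorised points. The 1-NN "model" below deliberately overfits.
    import math

    train = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]   # (feature, label) pairs

    def model_confidence(x, label):
        """Overfit classifier: confidence decays with distance to the closest
        training point of the queried label (exactly 1.0 on a memorised point)."""
        d = min(abs(x - xi) for xi, yi in train if yi == label)
        return math.exp(-d)

    def infer_membership(x, label, threshold=0.9):
        """Attack: claim 'member' when the model is suspiciously confident."""
        return model_confidence(x, label) > threshold

    print(infer_membership(2.0, 0))   # True: (2.0, 0) was in the training set
    print(infer_membership(5.0, 0))   # False: unseen point, lower confidence
    ```

    This leakage is why the paper argues the training-data-to-model transformation is not one-way: query access alone can reveal whether a specific individual's record was used.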

    ePortfolios: Mediating the minefield of inherent risks and tensions

    The ePortfolio Project at the Queensland University of Technology (QUT) exemplifies an innovative and flexible harnessing of current portfolio thinking and design that has achieved substantial buy-in across the institution, with over 23,000 active portfolios. Robust infrastructure support, curriculum integration and training have facilitated widespread take-up, while QUT’s early adoption of ePortfolio technology has enabled the concomitant development of a strong policy and systems approach to deal explicitly with legal and design responsibilities. In the light of that experience, this paper will highlight the risks and tensions inherent in ePortfolio policy, design and implementation. In many ways, both the strengths and weaknesses of ePortfolios lie in their ability to be accessed by a wider, less secure audience – either internally (e.g. other students and staff) or externally (e.g. potential employers and referees). How do we balance the obvious requirement to safeguard students from the potential for institutionally-facilitated cyber-harm and privacy breaches, with this generation’s instinctive personal and professional desires for reflections, private details, information and intellectual property to be available freely and with minimal restriction? How can we promote collaboration and freeform expression in the blog and wiki world but also manage the institutional risk that unauthorised use of student information and work so palpably carries with it? For ePortfolios to flourish and develop, and for students to remain engaged in current reflective processes, holistic guidelines and sensible boundaries are required to help safeguard personal details and journaling without overly restricting students’ emotional, collaborative and creative engagement with the ePortfolio experience. This paper will discuss such issues and suggest possible ways forward.

    Glimmers: Resolving the Privacy/Trust Quagmire

    Many successful services rely on trustworthy contributions from users. To establish that trust, such services often require access to privacy-sensitive information from users, thus creating a conflict between privacy and trust. Although it is likely impractical to expect both absolute privacy and trustworthiness at the same time, we argue that the current state of things, where individual privacy is usually sacrificed at the altar of trustworthy services, can be improved with a pragmatic Glimmer of Trust, which allows services to validate user contributions in a trustworthy way without forfeiting user privacy. We describe how trustworthy hardware such as Intel's SGX can be used client-side – in contrast to much recent work exploring SGX in cloud services – to realize the Glimmer architecture, and demonstrate how this realization is able to resolve the tension between privacy and trust in a variety of cases.
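    The architecture's key move is that the trusted component sees the raw, sensitive data but releases only an authenticated verdict to the service. The sketch below is an analogy only: it simulates the trusted component in ordinary Python and uses an HMAC over a shared key as a stand-in for genuine SGX remote attestation; the validity rule and data shape are assumptions:

    ```python
    # Hedged analogy for the Glimmer idea (NOT the paper's SGX implementation):
    # a client-side trusted component validates a privacy-sensitive contribution
    # and emits only a signed verdict; the service verifies the signature without
    # ever seeing the raw data. HMAC stands in for SGX attestation here.
    import hmac, hashlib, json

    ENCLAVE_KEY = b"provisioned-at-attestation"   # known only to enclave + service

    def enclave_validate(raw_location_trace):
        """Runs inside the trusted component: sees raw data, emits only a verdict."""
        plausible = len(raw_location_trace) >= 3           # toy validity rule
        verdict = json.dumps({"valid": plausible}).encode()
        tag = hmac.new(ENCLAVE_KEY, verdict, hashlib.sha256).hexdigest()
        return verdict, tag                                # raw trace never leaves

    def service_accept(verdict, tag):
        """Server side: trusts the verdict iff the tag verifies, raw data unseen."""
        expected = hmac.new(ENCLAVE_KEY, verdict, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected) and json.loads(verdict)["valid"]

    verdict, tag = enclave_validate([(51.5, -0.1), (51.6, -0.1), (51.7, -0.2)])
    print(service_accept(verdict, tag))        # True: trusted without raw data
    print(service_accept(verdict, "forged"))   # False: tampered tag rejected
    ```

    In the real architecture the enclave's key material and code identity are bound by SGX attestation rather than a pre-shared secret, which is what lets the service trust client-side computation it cannot inspect.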