A qualitative study of stakeholders' perspectives on the social network service environment
Over two billion people are using the Internet at present, assisted by the mediating activities of software agents, which deal with the diversity and complexity of information. There are, however, ethical issues arising from the monitoring-and-surveillance, data-mining and autonomous nature of software agents. In this context, this study aims to understand stakeholders' perspectives on the social network service environment in order to identify the main considerations for the design of software agents in social network services in the near future. Twenty-one stakeholders, belonging to three key stakeholder groups, were recruited using a purposive sampling strategy for unstandardised semi-structured e-mail interviews. The interview data were analysed using qualitative content analysis. Three main considerations for the design of software agents in social network services were identified: a comprehensive understanding of users' perception of privacy, user-type recognition algorithms for software-agent development, and enhancement of existing software agents.
Big data for monitoring educational systems
This report considers “how advances in big data are likely to transform the context and methodology of monitoring educational systems within a long-term perspective (10-30 years) and impact the evidence based policy development in the sector”. Big data are understood here as “large amounts of different types of data produced with high velocity from a high number of various types of sources.” Five independent experts were commissioned by Ecorys to respond to the themes of students' privacy, educational equity and efficiency, student tracking, and assessment and skills. The experts were asked to consider the “macro perspective on governance on educational systems at all levels from primary, secondary education and tertiary – the latter covering all aspects of tertiary from further, to higher, and to VET”, prioritising the primary and secondary levels of education.
Outsourced Analysis of Encrypted Graphs in the Cloud with Privacy Protection
Large graphs have unique value for organizations and research: examples include
user links in social networks and customer rating matrices in recommender
systems. Because such graphs are large and often keep growing, they require
substantial resources to maintain. Owners of large graphs may therefore wish
to exploit the broad availability of public cloud resources to gain storage
capacity and computational flexibility. However, the accountability of the
cloud and the confidentiality of the outsourced graphs have become significant
concerns. In this study, we consider privacy-preserving algorithms for a
fundamental graph-analysis practice: spectral analysis of graphs outsourced to
a cloud server. We develop privacy-preserving variants of two proposed
eigendecomposition algorithms, using two cryptographic primitives: additively
homomorphic encryption (AHE) schemes and somewhat homomorphic encryption (SHE)
schemes. For sparse matrices, we also design a privacy-preserving data-submission
protocol that allows a trade-off between confidentiality and data sparsity.
Both dense and sparse representations are investigated. Experimental results
show that computations with sparse encoding can drastically reduce data volume;
SHE-based methods reduce computing time, while AHE-based methods reduce storage
costs.
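The additive homomorphism that AHE-style schemes provide can be illustrated with a toy Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is what lets a cloud server aggregate encrypted matrix entries without decrypting them. This is an illustrative sketch only, not the paper's construction; the primes and randomness below are tiny and fixed, and entirely insecure.

```python
from math import gcd

# Toy Paillier cryptosystem demonstrating additive homomorphism.
# Parameters are deliberately tiny and insecure; a real deployment would
# use ~2048-bit primes and a vetted cryptographic library.

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification g = n + 1
    mu = pow(lam, -1, n)         # valid precisely because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m, r=17):
    # r must be coprime with n; fixed here only for reproducibility
    n, g = pub
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

pub, priv = keygen()
c1 = encrypt(pub, 12, r=17)
c2 = encrypt(pub, 30, r=23)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(pub, priv, c_sum))  # 42
```

Somewhat homomorphic schemes additionally support a limited number of multiplications on ciphertexts, which is why the SHE-based variants trade higher storage for faster computation in the experiments above.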
Randomization-based privacy-preserving categorical data analysis
The success of data mining relies on the availability of high-quality data. To ensure quality data mining, effective information sharing between organizations becomes a vital requirement in today's society. Since data mining often involves sensitive information about individuals, the public has expressed a deep concern about privacy. Privacy-preserving data mining is the study of eliminating privacy threats while, at the same time, preserving useful information in the released data for data mining.
This dissertation investigates data utility and privacy of randomization-based models in privacy-preserving data mining for categorical data. For the analysis of data utility in the randomization model, we first investigate the accuracy analysis for association rule mining in market basket data. We then propose a general framework for theoretical analysis of how the randomization process affects the accuracy of various measures adopted in categorical data analysis.
We also examine data utility when randomization mechanisms are not disclosed to data miners, in order to achieve better privacy. We investigate how various objective association measures between two variables may be affected by randomization, and then extend the analysis to multiple variables by examining the feasibility of hierarchical loglinear modeling. Our results provide data miners with a reference on what they can and cannot do with certainty on randomized data directly, without knowledge of the original data distribution and the distortion information.
Data privacy and data utility are commonly considered a pair of conflicting requirements in privacy-preserving data mining applications. In this dissertation, we investigate privacy issues in randomization models. In particular, we focus on attribute disclosure under linking attacks in data publishing. We propose efficient solutions to determine optimal distortion parameters such that utility preservation is maximized while privacy requirements are still satisfied. We compare our randomization approach with l-diversity and anatomy in terms of utility preservation (under the same privacy requirements) from three aspects: reconstructed distributions, accuracy of answering queries, and preservation of correlations. Our empirical results show that randomization incurs significantly smaller utility loss.
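The randomization model for categorical data can be sketched as follows: each value is kept with probability p and otherwise replaced by a uniformly drawn category, and because the perturbation is known, the original distribution can be estimated by inverting it. This is a minimal illustrative sketch of the general setting, not the dissertation's method; the categories, frequencies and parameter values below are assumptions.

```python
import random
from collections import Counter

def randomize(values, categories, p, rng):
    # Keep each value with probability p, else draw a uniform replacement.
    return [v if rng.random() < p else rng.choice(categories) for v in values]

def estimate_original(randomized, categories, p):
    # Observed frequencies satisfy f_obs = p*f + (1-p)/k, so invert:
    #   f = (f_obs - (1-p)/k) / p
    k, n = len(categories), len(randomized)
    counts = Counter(randomized)
    q = (1 - p) / k
    return {c: (counts[c] / n - q) / p for c in categories}

rng = random.Random(0)
cats = ["A", "B", "C"]
data = ["A"] * 6000 + ["B"] * 3000 + ["C"] * 1000
noisy = randomize(data, cats, p=0.6, rng=rng)
est = estimate_original(noisy, cats, p=0.6)
print(est)  # close to the true frequencies {A: 0.6, B: 0.3, C: 0.1}
```

The "utility versus privacy" tension in the abstract shows up directly here: a smaller p gives individuals stronger deniability but inflates the variance of the reconstructed frequencies.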
Adoption of precision medicine: limitations and considerations
Research is ongoing all over the world to identify the barriers and find effective
solutions to accelerate the uptake of Precision Medicine (PM) in the healthcare industry. Yet
there has been no valid and practical model for tackling the several challenges that have slowed
the widespread adoption of this clinical practice. This study aimed to highlight the major
limitations and considerations for implementing Precision Medicine. Two theories, Diffusion
of Innovation and Socio-Technical theory, are employed to discuss the success indicators of PM
adoption. Through this theoretical assessment, two key theoretical gaps are identified and the
related findings are discussed.
Funding: FCT – Fundação para a Ciência e Tecnologia, within the Project Scope DSAIPA/DS/0084/201
Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey
Differential Privacy has become a widely popular method for data protection
in machine learning, especially since it allows formulating strict mathematical
privacy guarantees. This survey provides an overview of the state-of-the-art of
differentially private centralized deep learning, thorough analyses of recent
advances and open problems, as well as a discussion of potential future
developments in the field. Based on a systematic literature review, the
following topics are addressed: auditing and evaluation methods for private
models, improvements of privacy-utility trade-offs, protection against a broad
range of threats and attacks, differentially private generative models, and
emerging application domains.
Comment: 35 pages, 2 figures
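The formal guarantees this survey covers rest on mechanisms like the Laplace mechanism, the canonical building block of centralized differential privacy: for a counting query (sensitivity 1), adding Laplace(1/ε) noise yields ε-differential privacy. The sketch below is a minimal illustration of that one mechanism; the dataset, predicate and ε are illustrative assumptions, not drawn from the survey.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling: X = -b * sgn(u) * ln(1 - 2|u|), u in (-1/2, 1/2)
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
ages = [23, 35, 41, 29, 52, 61, 38, 27]
noisy_count = private_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
print(noisy_count)  # true count is 3, perturbed by Laplace(1) noise
```

The privacy-utility trade-off the survey analyzes is visible in the scale parameter: a smaller ε means stronger privacy but noisier answers.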