Big Data Assurance Evaluation: An SLA-Based Approach.
The Big Data community has begun to recognize the need to complement Big Data platforms with assurance techniques that prove the correct behavior of Big Data
analytics and management. In this paper, we propose a Big Data assurance solution based on Service-Level Agreements (SLAs), focusing on a platform providing Model-based Big Data Analytics-as-a-Service (MBDAaaS).
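To make the SLA-based evaluation idea concrete, the following is a minimal sketch of checking measured service behavior against SLA objectives. The objective names and thresholds are our own illustrative assumptions, not the paper's platform:

```python
# Toy SLA-based assurance check: an SLA is modeled as a set of measurable
# objectives; assurance holds when every measured value meets its objective.
# All objective names and thresholds here are illustrative assumptions.

def evaluate_sla(sla, measurements):
    """Return a per-objective verdict for the measured service behavior."""
    return {name: objective(measurements[name]) for name, objective in sla.items()}

sla = {
    "analytics_availability": lambda v: v >= 0.99,  # uptime fraction
    "job_completion_minutes": lambda v: v <= 30,    # batch-analytics latency
}
measurements = {"analytics_availability": 0.995, "job_completion_minutes": 24}

verdict = evaluate_sla(sla, measurements)
assurance_holds = all(verdict.values())  # overall assurance verdict
```

In a real deployment the measurements would come from continuous monitoring probes rather than a static dictionary.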
Machine-Readable Privacy Certificates for Services
Privacy-aware processing of personal data on the web of services requires
managing a number of issues arising both from the technical and the legal
domain. Several approaches have been proposed to match privacy requirements
(on the client side) with privacy guarantees (on the service-provider side).
Still, the assurance of effective data protection (when possible) relies on
substantial human effort and exposes organizations to significant
(non-)compliance risks. In this paper we put forward the idea that a privacy
certification scheme producing and managing machine-readable artifacts in the
form of privacy certificates can play an important role towards the solution of
this problem. Digital privacy certificates represent the reasons why a privacy
property holds for a service and describe the privacy measures supporting it.
Also, privacy certificates can be used to automatically select services whose
certificates match the client policies (privacy requirements).
Our proposal relies on an evolution of the conceptual model developed in the
Assert4Soa project and on a certificate format specifically tailored to
represent privacy properties. To validate our approach, we present a worked-out
instance showing how the privacy property Retention-based unlinkability can be
certified for a banking financial service. Comment: 20 pages, 6 figures.
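The core selection step described above — choosing services whose certificates cover the client's privacy policy — can be sketched as follows. The dictionary format and property names are hypothetical illustrations, not the actual Assert4Soa certificate schema:

```python
# Sketch of matching machine-readable privacy certificates against client
# policies. The certificate/policy format and all names are hypothetical
# illustrations, NOT the Assert4Soa schema.

def certificate_satisfies(certificate, policy):
    """True iff every requirement in the client policy is covered by a
    property certified for the service."""
    certified = {(p["property"], p["target"]) for p in certificate["properties"]}
    return all((r["property"], r["target"]) in certified
               for r in policy["requirements"])

cert = {
    "service": "example-banking-service",
    "issuer": "example-certification-authority",
    "properties": [
        {"property": "retention-based-unlinkability", "target": "customer-records"},
    ],
}
policy = {"requirements": [
    {"property": "retention-based-unlinkability", "target": "customer-records"},
]}

match = certificate_satisfies(cert, policy)  # service can be selected
```

A real certificate would additionally carry the evidence ("the reasons why a privacy property holds") and the supporting privacy measures, which a client could inspect beyond this boolean match.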
Broadening the Scope of Differential Privacy Using Metrics
Differential Privacy is one of the most prominent frameworks used to deal with disclosure prevention in statistical databases. It provides a formal privacy guarantee, ensuring that sensitive information about individuals cannot be easily inferred from the answers to aggregate queries. If two databases are adjacent, i.e. differ only in one individual, then a query should not allow an observer to tell them apart by more than a certain factor. This induces a bound also on the distinguishability of two generic databases, determined by their distance on the Hamming graph of the adjacency relation. In this paper we explore the implications of differential privacy when the indistinguishability requirement depends on an arbitrary notion of distance. We show that we can naturally express, in this way, (protection against) privacy threats that cannot be represented with the standard notion, leading to new applications of the differential privacy framework. We give intuitive characterizations of these threats in terms of Bayesian adversaries, which generalize two interpretations of (standard) differential privacy from the literature. We revisit the well-known result that universally optimal mechanisms exist only for counting queries: we show that, in our extended setting, universally optimal mechanisms exist for other queries too, notably sum, average, and percentile queries. We explore various applications of the generalized definition, for statistical databases as well as for other areas such as geolocation and smart metering.
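For reference, standard differential privacy on a counting query is achieved with the textbook Laplace mechanism, sketched below; the metric-based generalization discussed in the abstract calibrates the noise to an arbitrary distance d rather than to the Hamming distance. This is a generic construction, not code from the paper:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, rng=random):
    """epsilon-differentially-private counting query. A counting query has
    sensitivity 1 (adjacent databases differ in one individual), so adding
    Laplace(1/epsilon) noise bounds the distinguishability of adjacent
    databases by a factor of e^epsilon."""
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Under the generalized definition, the scale 1/epsilon would instead be calibrated to the sensitivity of the query with respect to the chosen metric.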
Weight filtration on the cohomology of complex analytic spaces
We extend Deligne's weight filtration to the integer cohomology of complex
analytic spaces (endowed with an equivalence class of compactifications). In
general, the weight filtration that we obtain is not part of a mixed Hodge
structure. Our purely geometric proof is based on cubical descent for
resolution of singularities and Poincaré-Verdier duality. Using similar
techniques, we introduce the singularity filtration on the cohomology of
compactifiable analytic spaces. This is a new and natural analytic invariant
which does not depend on the equivalence class of compactifications and is
related to the weight filtration. Comment: examples added + minor corrections.
Towards Transparent and Trustworthy Cloud
Despite its immense benefits in terms of flexibility, resource consumption, and simplified management, cloud computing raises several concerns due to a lack of trust and transparency. Like all computing paradigms based on outsourcing, the use of cloud computing is largely a matter of trust. There is increasing pressure from cloud customers for solutions that would strengthen their confidence that a cloud service/application is behaving in a secure and correct manner. Cloud assurance techniques, developed to assess the trustworthiness of cloud services, can play a major role in building such trust. In this paper, we start from the assumption that an opaque cloud is incompatible with security, and present a reliable evidence collection process and infrastructure extending existing assurance techniques towards the definition of a trustworthy cloud. The proposed process and infrastructure are applied to a case study on cloud certification, showing their utility.
Trustworthy IoT: An evidence collection approach based on smart contracts
Today, the Internet of Things (IoT) implements an ecosystem where a panoply of interconnected devices collect data from physical environments and supply them to processing services, on top of which cloud-based applications are built and provided to mobile end users. The undeniable advantages of smart IoT systems clash with the need for a secure and trustworthy environment. In this paper, we propose a service-based methodology built on blockchain and smart contracts for trustworthy evidence collection, which forms the basis of a trustworthy IoT assurance evaluation. The methodology balances the provided level of trustworthiness against its performance, and is experimentally evaluated using the Hyperledger Fabric blockchain.
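The tamper-evidence property a blockchain lends to collected evidence can be illustrated with a toy hash-chained log, where altering any past record invalidates every later link. This sketch is our own illustration, not the paper's Hyperledger Fabric chaincode:

```python
import hashlib
import json

class EvidenceChain:
    """Toy hash-chained evidence log. Each record commits to the previous
    record's hash, so tampering with any stored evidence breaks the chain.
    This mimics the integrity guarantee a blockchain ledger provides."""

    def __init__(self):
        self.blocks = []

    def _digest(self, evidence, prev):
        payload = json.dumps({"evidence": evidence, "prev": prev}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, evidence):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        self.blocks.append({"evidence": evidence, "prev": prev,
                            "hash": self._digest(evidence, prev)})

    def verify(self):
        """Recompute every link; False if any record was altered."""
        prev = "0" * 64
        for block in self.blocks:
            if block["prev"] != prev or block["hash"] != self._digest(block["evidence"], prev):
                return False
            prev = block["hash"]
        return True
```

A smart contract plays the role of `append` here, with the ledger's consensus protocol replacing the single trusted process that holds the chain.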
ASSERT4SOA: Toward Security Certification of Service-Oriented Applications
The ASSERT4SOA project proposes machine-readable certificates that allow Web service requesters to automatically assess the security properties of Web services (and their providers) as certified by a trusted third party. This vision promises to open up an entirely new market for certification services.
Generalized differential privacy: regions of priors that admit robust optimal mechanisms
Differential privacy is a notion of privacy that was initially designed for statistical databases and has recently been extended to a more general class of domains. Both differential privacy and its generalized version can be achieved by adding random noise to the reported data. Thus, privacy is obtained at the cost of reducing the data's accuracy, and therefore their utility. In this paper we consider the problem of identifying optimal mechanisms for generalized differential privacy, i.e. mechanisms that maximize the utility for a given level of privacy. The utility usually depends on a prior distribution of the data, and it would naturally be desirable to design mechanisms that are universally optimal, i.e., optimal for all priors. However, it is already known that such mechanisms do not exist in general. We therefore characterize maximal classes of priors for which a mechanism that is optimal for all the priors of the class does exist. We show that such classes can be defined as convex polytopes in the space of priors. As an application, we consider the privacy problem that arises when using, for instance, location-based services, and we show how to define mechanisms that maximize the quality of service while preserving the desired level of geo-indistinguishability.
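A simple way to see the location-privacy application is per-coordinate Laplace noise, which yields epsilon-privacy with respect to the Manhattan (L1) distance between locations — a basic instance of metric-based privacy. Note this is a simplification of our own: the canonical mechanism for Euclidean geo-indistinguishability is the planar Laplace, omitted here for brevity:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def obfuscate_location(x, y, epsilon, rng=random):
    """Report a noisy location. Independent Laplace(1/epsilon) noise on each
    coordinate makes two true locations at Manhattan distance d
    indistinguishable up to a factor e^(epsilon * d) -- a simple instance of
    metric-based (generalized) differential privacy."""
    return (x + laplace_noise(1.0 / epsilon, rng),
            y + laplace_noise(1.0 / epsilon, rng))
```

Smaller epsilon means stronger indistinguishability between nearby locations at the cost of a less accurate reported position, which is exactly the utility/privacy trade-off the paper's optimal mechanisms address.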
Privacy-aware Big Data Analytics as a Service for Public Health Policies in Smart Cities
Smart cities make use of a variety of technologies, protocols, and devices to support and improve the quality of everyday activities of their inhabitants. An important aspect of the development of smart cities is innovative public policies, represented by requirements, actions, and plans aimed at reaching a specific goal for improving society's welfare. With the advent of Big Data, the definition of such policies could be improved and reach an unprecedented effectiveness along several dimensions, e.g. social or economic. On the other hand, safeguarding citizens' privacy is itself part of the quality of life in a smart city. In this paper, we focus on balancing quality of life and privacy protection in smart cities by providing a new Big Data-assisted public policy making process implementing privacy by design. The proposed approach is based on a Big Data Analytics-as-a-Service model, driven by a Privacy Compliance Assessment derived from the European Union's GDPR, and discussed in the context of a public health policy making process.