PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn
Preserving privacy of users is a key requirement of web-scale analytics and reporting applications, and has witnessed a renewed focus in light of recent data breaches and new regulations such as GDPR. We focus on the problem of computing robust, reliable analytics in a privacy-preserving manner, while satisfying product requirements. We present PriPeARL, a framework for privacy-preserving analytics and reporting, inspired by differential privacy. We describe the overall design and architecture, and the key modeling components, focusing on the unique challenges associated with privacy, coverage, utility, and consistency. We perform an experimental study in the context of ads analytics and reporting at LinkedIn, thereby demonstrating the tradeoffs between privacy and utility needs, and the applicability of privacy-preserving mechanisms to real-world data. We also highlight the lessons learned from the production deployment of our system at LinkedIn.
Conference information: ACM International Conference on Information and Knowledge Management (CIKM 2018).
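The core idea the abstract alludes to, adding calibrated noise to reported counts in the spirit of differential privacy, can be sketched minimally as a Laplace mechanism. This is an illustrative sketch only, not PriPeARL's actual implementation; the function names and the clamp-and-round post-processing are assumptions for the example.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> int:
    """Return a differentially private count.

    Adds Laplace noise with scale = sensitivity / epsilon, then rounds
    and clamps at zero so the reported analytics value stays a plausible
    non-negative integer (a common consistency post-processing step).
    """
    noisy = true_count + laplace_noise(sensitivity / epsilon)
    return max(0, round(noisy))
```

Smaller epsilon means stronger privacy but noisier reported counts, which is exactly the privacy/utility tradeoff the paper studies empirically.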
Recommended from our members
Kafka, Samza and the Unix Philosophy of Distributed Data
Apache Kafka is a scalable message broker, and Apache Samza is a stream processing framework built upon Kafka. They are widely used as infrastructure for implementing personalized online services and real-time predictive analytics. Besides providing high throughput and low latency, Kafka and Samza are designed with operational robustness and long-term maintenance of applications in mind. In this paper we explain the reasoning behind the design of Kafka and Samza, which allows complex applications to be built by composing a small number of simple primitives – replicated logs and stream operators. We draw parallels between the design of Kafka and Samza, batch processing pipelines, database architecture, and the design philosophy of Unix.
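The composition idea in the abstract, building applications from replicated logs and stream operators, can be illustrated with a toy in-memory model. This is not the Kafka/Samza API; the names here are hypothetical, and the "log" is simply an ordered sequence of messages consumed by chained operators.

```python
from typing import Callable, Iterable, Iterator


def stream_operator(fn: Callable[[str], Iterable[str]]):
    """Lift a per-message function into an operator over a message stream.

    Conceptually, an operator reads from one log and appends to another;
    here both logs are modeled as plain Python iterators.
    """
    def op(log: Iterator[str]) -> Iterator[str]:
        for msg in log:
            yield from fn(msg)
    return op


# Two simple operators, composed like Unix pipes:
tokenize = stream_operator(lambda line: line.split())
upper = stream_operator(lambda word: [word.upper()])

input_log = ["hello world", "kafka samza"]
output_log = list(upper(tokenize(iter(input_log))))
# output_log == ["HELLO", "WORLD", "KAFKA", "SAMZA"]
```

The point of the sketch is the parallel the paper draws with Unix: each operator does one thing, and complex pipelines arise from chaining them over durable logs.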
Magnetic field generated resistivity maximum in graphite
In zero magnetic field, B, the electrical resistivity, rho(0,T), of highly oriented pyrolytic (polycrystalline) graphite drops smoothly with decreasing T, becoming constant below 4 K. However, in a fixed applied magnetic field B, the resistivity rho(B,T) goes through a maximum as a function of T, with a larger maximum for larger B. The temperature of the maximum increases with B, but saturates to a constant value near 25 K (exact T depends on sample) at high B. In single crystal graphite a maximum in rho(B,T) as a function of T is also present, but has the effects of Landau level quantization superimposed. Several possible explanations for the rho(B,T) maximum are proposed, but a complete explanation awaits detailed calculations involving the energy band structure of graphite, and the particular scattering mechanisms involved.
Understanding digital events: process philosophy and causal autonomy
This paper argues that the ubiquitous digital networks in which we are increasingly becoming immersed present a threat to our ability to exercise free will. Using process philosophy, and expanding upon understandings of causal autonomy, the paper outlines a thematic analysis of diary studies and interviews gathered in a project exploring the nature of digital experience. It concludes that without mindfulness in both the use and design of digital devices and services, we run the risk of allowing such services to direct our daily lives in ways over which we are increasingly losing control.
Incentive Compatibility and Differentiability: New Results and Classic Applications
We provide several generalizations of Mailath's (1987) result that, in games of asymmetric information with a continuum of types, incentive compatibility plus separation implies differentiability of the informed agent's strategy. The new results extend the theory to classic models in finance, such as Leland and Pyle (1977), Glosten (1989), and DeMarzo and Duffie (1999), that were not previously covered.
A model-independent Dalitz plot analysis of B±→DK± with D→K0Sh+h− (h=π,K) decays and constraints on the CKM angle γ
A binned Dalitz plot analysis of B± → DK± decays, with D → K0S π+π− and D → K0S K+K−, is performed to measure the CP-violating observables x± and y±, which are sensitive to the CKM angle γ. The analysis exploits 1.0 fb−1 of data collected by the LHCb experiment. The study makes no model-based assumption on the variation of the strong phase of the D decay amplitude over the Dalitz plot, but uses measurements of this quantity from CLEO-c as input. The values of the parameters are found to be x− = (0.0 ± 4.3 ± 1.5 ± 0.6)×10−2, y− = (2.7 ± 5.2 ± 0.8 ± 2.3)×10−2, x+ = (−10.3 ± 4.5 ± 1.8 ± 1.4)×10−2, and y+ = (−0.9 ± 3.7 ± 0.8 ± 3.0)×10−2. The first, second, and third uncertainties are the statistical, the experimental systematic, and the error associated with the precision of the strong-phase parameters measured at CLEO-c, respectively. These results correspond to γ = (44 +43 −38)°, with a second solution at γ → γ + 180°, and rB = 0.07 ± 0.04, where rB is the ratio between the suppressed and favoured B decay amplitudes.
Absolute luminosity measurements with the LHCb detector at the LHC
Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. Using data taken in 2010, LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic ''van der Meer scan'' method, a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute luminosity determination is reached. The techniques used to transport the absolute luminosity calibration to the full 2010 data-taking period are presented.