279 research outputs found

    A Computation of the Maximal Order Type of the Term Ordering on Finite Multisets

    We give a sharpening of a recent result of Aschenbrenner and Pong about the maximal order type of the term ordering on the finite multisets over a wpo. Moreover, we discuss an approach to computing maximal order types of well-partial-orders which are related to tree embeddings.
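    The ordering in question is easiest to see over a totally ordered base set, where the multiset (Dershowitz-Manna) ordering reduces to lexicographic comparison of descending sorts. A minimal sketch for illustration only (this is not the paper's wpo construction), assuming multisets of natural numbers represented as Python lists:

    ```python
    def multiset_less(m, n):
        """Multiset (Dershowitz-Manna) ordering over a total order:
        M < N iff N can be turned into M by repeatedly replacing an
        element with finitely many strictly smaller elements.  Over a
        totally ordered base this is equivalent to comparing the
        descending sorts lexicographically."""
        return sorted(m, reverse=True) < sorted(n, reverse=True)

    # {2, 2, 2} < {3}: replacing 3 by three copies of 2 goes strictly down
    assert multiset_less([2, 2, 2], [3])
    # a proper sub-multiset is always smaller
    assert multiset_less([1], [1, 1])
    ```

    The descending-sort trick works because the largest element on which the two multisets disagree decides the comparison, exactly as in the replacement characterisation; over a general wpo (the paper's setting) no such linearisation is available and the order-type analysis is what the paper contributes.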

    Privacy Architectures: Reasoning About Data Minimisation and Integrity

    Privacy by design will become a legal obligation in the European Community if the Data Protection Regulation is eventually adopted. However, taking privacy requirements into account in the design of a system is a challenging task. We propose an approach based on the specification of privacy architectures and focus on a key aspect of privacy, data minimisation, and its tension with integrity requirements. We illustrate our formal framework through a smart metering case study. Comment: appears in STM - 10th International Workshop on Security and Trust Management, 8743 (2014).

    Privacy by Design: From Technologies to Architectures (Position Paper)

    Existing work on privacy by design mostly focuses on technologies rather than methodologies, and on components rather than architectures. In this paper, we advocate the idea that privacy by design should also be addressed at the architectural level and be associated with suitable methodologies. Among other benefits, architectural descriptions enable a more systematic exploration of the design space. In addition, because privacy is intrinsically a complex notion that can be in tension with other requirements, we believe that formal methods should play a key role in this area. After presenting our position, we provide some hints on how our approach can be put into practice, based on ongoing work on a privacy by design environment.

    Monte Carlo Methods for Estimating Interfacial Free Energies and Line Tensions

    Excess contributions to the free energy due to interfaces occur in many problems of the statistical physics of condensed matter when coexistence between different phases is possible (e.g. wetting phenomena, nucleation, crystal growth, etc.). This article reviews two methods to estimate both interfacial free energies and line tensions by Monte Carlo simulations of simple models (e.g. the Ising model, a symmetrical binary Lennard-Jones fluid exhibiting a miscibility gap, and a simple Lennard-Jones fluid). The first method is based on thermodynamic integration. It is useful for studying flat and inclined interfaces on Ising lattices, also allowing the estimation of line tensions of three-phase contact lines when the interfaces meet walls (where "surface fields" may act). A generalization to off-lattice systems is described as well. The second method is based on sampling the order-parameter distribution of the system throughout the two-phase coexistence region of the model. Both the interfacial free energies of flat interfaces and of (spherical or cylindrical) droplets (or bubbles) can be estimated, including systems with walls, where sphere-cap-shaped wall-attached droplets occur. The curvature dependence of the interfacial free energy is discussed, and estimates for the line tensions are compared to results from the thermodynamic integration method. Basic limitations of all these methods are critically discussed, and an outlook on other approaches is given.
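    The second method rests on the fact that, below the critical temperature, the order-parameter distribution P(M) is bimodal, and the free-energy cost of the mixed-phase region between the two peaks carries the interfacial contribution. A toy sketch of that idea (illustrative only, not from the article): for a tiny 3x3 periodic Ising lattice the distribution can be obtained by exact enumeration instead of the Monte Carlo sampling needed at realistic sizes; the lattice size, coupling and temperature below are arbitrary choices.

    ```python
    import math
    from itertools import product

    def ising_magnetization_distribution(L=3, beta=0.6, J=1.0):
        """Exact order-parameter (magnetization) distribution P(M) for an
        L x L Ising model with periodic boundaries, by brute-force
        enumeration of all 2^(L*L) spin configurations.  This stands in
        for the Monte Carlo sampling used at realistic system sizes."""
        n = L * L
        weights = {}
        for spins in product((-1, 1), repeat=n):
            E = 0.0
            for i in range(L):
                for j in range(L):
                    s = spins[i * L + j]
                    E -= J * s * spins[((i + 1) % L) * L + j]   # neighbor below
                    E -= J * s * spins[i * L + (j + 1) % L]     # neighbor right
            M = sum(spins)
            weights[M] = weights.get(M, 0.0) + math.exp(-beta * E)
        Z = sum(weights.values())
        return {M: w / Z for M, w in sorted(weights.items())}

    P = ising_magnetization_distribution()
    # Below T_c the distribution is bimodal; the free-energy penalty of
    # near-zero magnetization (mixed-phase states with interfaces) relative
    # to the pure-phase peak estimates the interfacial contribution.
    delta_beta_F = math.log(P[9] / P[1])
    ```

    In an actual simulation the two-phase region is sampled with biased (e.g. umbrella or multicanonical) Monte Carlo precisely because flat histogram coverage of the suppressed mixed-phase states is otherwise exponentially unlikely.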

    The COMPASS Experiment at CERN

    The COMPASS experiment makes use of the CERN SPS high-intensity muon and hadron beams for the investigation of the nucleon spin structure and the spectroscopy of hadrons. One or more outgoing particles are detected in coincidence with the incoming muon or hadron. A large polarized target inside a superconducting solenoid is used for the measurements with the muon beam. Outgoing particles are detected by a two-stage, large-angle and large-momentum-range spectrometer. The setup is built from several types of tracking detectors, chosen according to the expected incident rate, the required spatial resolution and the solid angle to be covered. Particle identification is achieved using a RICH counter and both hadron and electromagnetic calorimeters. The setup has been successfully operated from 2002 onwards using a muon beam. Data with a hadron beam were also collected in 2004. This article describes the main features and performance of the spectrometer in 2004; a short summary of the 2006 upgrade is also given. Comment: 84 pages, 74 figures.

    Search for supersymmetry with a dominant R-parity violating LQDbar coupling in e+e- collisions at centre-of-mass energies of 130 GeV to 172 GeV

    A search for pair-production of supersymmetric particles under the assumption that R-parity is violated via a dominant LQDbar coupling has been performed using the data collected by ALEPH at centre-of-mass energies of 130-172 GeV. The observed candidate events in the data are in agreement with the Standard Model expectation. This result is translated into lower limits on the masses of charginos, neutralinos, sleptons, sneutrinos and squarks. For instance, for m_0 = 500 GeV/c^2 and tan(beta) = sqrt(2), charginos with masses smaller than 81 GeV/c^2 and neutralinos with masses smaller than 29 GeV/c^2 are excluded at the 95% confidence level for any generation structure of the LQDbar coupling. Comment: 32 pages, 30 figures.

    Graph Neural Networks for low-energy event classification & reconstruction in IceCube

    IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of IceCube data. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events. Peer Reviewed.
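    The point-cloud-graph representation can be sketched in a few lines: each sensor pulse becomes a node carrying its features, nodes are wired to their nearest neighbors in space, and a message-passing layer mixes neighbor features before a graph-level readout. The following is a minimal illustration with made-up shapes and names, not the actual IceCube model (assumed here: k-nearest-neighbor edges, mean aggregation, a single random-weight layer):

    ```python
    import numpy as np

    def knn_edges(pos, k=3):
        """Connect each hit (pulse) to its k nearest neighbors in space."""
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                 # no self-edges
        nbrs = np.argsort(d, axis=1)[:, :k]
        src = nbrs.ravel()                          # message senders
        dst = np.repeat(np.arange(len(pos)), k)     # receiving node
        return src, dst

    def message_passing_layer(x, src, dst, W):
        """One mean-aggregation GNN layer: each node averages its neighbors'
        features, concatenates them with its own, applies a linear map + ReLU."""
        agg = np.zeros_like(x)
        cnt = np.zeros(len(x))
        np.add.at(agg, dst, x[src])
        np.add.at(cnt, dst, 1)
        agg /= np.maximum(cnt, 1)[:, None]
        h = np.concatenate([x, agg], axis=1) @ W
        return np.maximum(h, 0.0)

    # toy event: 5 pulses with (x, y, z) positions and 2 features (charge, time)
    rng = np.random.default_rng(0)
    pos = rng.normal(size=(5, 3))
    feat = rng.normal(size=(5, 2))
    src, dst = knn_edges(pos, k=2)
    W = rng.normal(size=(4, 8))        # (2 own + 2 aggregated) features -> 8 hidden
    h = message_passing_layer(feat, src, dst, W)
    event_embedding = h.mean(axis=0)   # graph-level readout for classification
    ```

    A real model stacks several such layers with learned weights and feeds the readout to classification and regression heads; the k-NN graph is what makes the approach robust to the irregular detector geometry, since no fixed grid is assumed.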

    Search for B0s oscillations using inclusive lepton events

    A search for B0s oscillations is performed using a sample of semileptonic b-hadron decays collected by the ALEPH experiment during 1991-1995. Compared to previous inclusive lepton analyses, the proper time resolution and b-flavour mistag rate are significantly improved. Additional sensitivity to B0s mixing is obtained by identifying subsamples of events having a B0s purity higher than the average for the whole data sample. Unbinned maximum likelihood amplitude fits are performed to derive a lower limit of Delta m_s > 9.5 ps^-1 at 95% CL. Combining with the ALEPH Ds-based analyses yields Delta m_s > 9.6 ps^-1 at 95% CL.

    Search for excited leptons at 130-140 GeV


    Search for supersymmetry in the photon(s) plus missing energy channels at sqrt(s) = 161 GeV and 172 GeV

    Searches for supersymmetric particles in channels with one or more photons and missing energy have been performed with data collected by the ALEPH detector at LEP. The data consist of 11.1 pb^-1 at sqrt(s) = 161 GeV, 1.1 pb^-1 at 170 GeV and 9.5 pb^-1 at 172 GeV. The e+e- -> nu nubar gamma(gamma) cross section is measured. The data are in good agreement with predictions based on the Standard Model, and are used to set upper limits on the cross sections for anomalous photon production. These limits are compared to two different SUSY models and used to set limits on the neutralino mass. A limit of 71 GeV/c^2 at 95% C.L. is set on the mass of the lightest neutralino (tau_chi10 <= 3 ns) for the gauge-mediated supersymmetry breaking and LNZ models.