43 research outputs found
Bayesian statistics and modelling
Bayesian statistics is an approach to data analysis based on Bayes' theorem, where available knowledge about parameters in a statistical model is updated with the information in observed data. The background knowledge is expressed as a prior distribution and combined with observational data in the form of a likelihood function to determine the posterior distribution. The posterior can also be used for making predictions about future events. This Primer describes the stages involved in Bayesian analysis, from specifying the prior and data models to deriving inference, model checking and refinement. We discuss the importance of prior and posterior predictive checking, selecting a proper technique for sampling from a posterior distribution, variational inference and variable selection. Examples of successful applications of Bayesian analysis across various research fields are provided, including in social sciences, ecology, genetics, medicine and more. We propose strategies for reproducibility and reporting standards, outlining an updated WAMBS (when to Worry and how to Avoid the Misuse of Bayesian Statistics) checklist. Finally, we outline the impact of Bayesian analysis on artificial intelligence, a major goal in the next decade.
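The prior-to-posterior update described above can be sketched with the simplest conjugate case: a Beta prior on a Bernoulli success probability, updated by binomial data. The Beta(2, 2) prior and the 7-of-10 data below are hypothetical numbers chosen only to illustrate the mechanics.

```python
from math import comb

# Conjugate Beta-Binomial update: a Beta(a, b) prior on a success
# probability p, combined with k successes in n Bernoulli trials,
# yields a Beta(a + k, b + n - k) posterior.
def beta_binomial_update(a, b, k, n):
    """Return the posterior Beta parameters after observing the data."""
    return a + k, b + (n - k)

# Hypothetical example: weakly informative Beta(2, 2) prior,
# then 7 successes observed in 10 trials.
a_post, b_post = beta_binomial_update(2, 2, 7, 10)
post_mean = a_post / (a_post + b_post)  # posterior mean of p
print(a_post, b_post, round(post_mean, 3))  # 9 5 0.643
```

The posterior mean (0.643) sits between the prior mean (0.5) and the data frequency (0.7), which is exactly the "background knowledge combined with observed data" behaviour the abstract describes.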
First 230 GHz VLBI Fringes on 3C 279 using the APEX Telescope
We report about a 230 GHz very long baseline interferometry (VLBI) fringe
finder observation of blazar 3C 279 with the APEX telescope in Chile, the
phased submillimeter array (SMA), and the SMT of the Arizona Radio Observatory
(ARO). We installed VLBI equipment and measured the APEX station position to 1
cm accuracy (1 sigma). We then observed 3C 279 on 2012 May 7 in a 5 hour 230
GHz VLBI track with baseline lengths of 2800 to 7200 mega-wavelengths (Mλ) and
a finest fringe spacing of 28.6 micro-arcseconds. Fringes were detected on all
baselines with SNRs of 12 to 55 in 420 s. The correlated flux density on the
longest baseline was ~0.3 Jy/beam, out of a total flux density of 19.8 Jy.
Visibility data suggest an emission region <38 uas in size, and at least two
components, possibly polarized. We find a lower limit of the brightness
temperature of the inner jet region of about 10^10 K. Lastly, we find an upper
limit of 20% on the linear polarization fraction at a fringe spacing of ~38
uas. With APEX the angular resolution of 230 GHz VLBI improves to 28.6 uas.
This allows one to resolve the last-photon ring around the Galactic Center
black hole event horizon, expected to be 40 uas in diameter, and probe radio
jet launching at unprecedented resolution, down to a few gravitational radii in
galaxies like M 87. To probe the structure in the inner parsecs of 3C 279 in
detail, follow-up observations with APEX and five other mm-VLBI stations have
been conducted (March 2013) and are being analyzed. (Comment: accepted for publication in A&A.)
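The quoted angular resolution follows directly from the longest baseline: with the baseline expressed in units of the observing wavelength, the finest fringe spacing is simply its reciprocal in radians. A minimal check of the abstract's numbers:

```python
import math

# Finest fringe spacing of a VLBI baseline, with the baseline given
# in mega-wavelengths (Mlambda): spacing [rad] = 1 / (baseline in lambda).
RAD_TO_UAS = 180 / math.pi * 3600 * 1e6  # radians -> micro-arcseconds

def fringe_spacing_uas(baseline_mlambda):
    """Fringe spacing in micro-arcseconds for a baseline in Mlambda."""
    return 1.0 / (baseline_mlambda * 1e6) * RAD_TO_UAS

# The longest baseline in the observation, 7200 mega-wavelengths:
print(round(fringe_spacing_uas(7200), 1))  # ~28.6
```

This reproduces the 28.6 micro-arcsecond resolution stated in the abstract for the APEX baselines.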
Divisible E-Cash from Constrained Pseudo-Random Functions
Electronic cash (e-cash) is the digital analogue of regular cash which aims at preserving users' privacy. Following Chaum's seminal work, several new features were proposed for e-cash to address the practical issues of the original primitive. Among them, divisibility has proved very useful to enable efficient storage and spending. Unfortunately, it is also very difficult to achieve and, to date, very few constructions exist, all of them relying on complex mechanisms that can only be instantiated in one specific setting. In addition, security models are incomplete and proofs sometimes hand-wavy. In this work, we first provide a complete security model for divisible e-cash, and we study the links with constrained pseudo-random functions (PRFs), a primitive recently formalized by Boneh and Waters. We exhibit two frameworks of divisible e-cash systems from constrained PRFs achieving specific properties: either key homomorphism or delegability. We then formally prove these frameworks secure and address two main issues in previous constructions: two essential security notions were either not considered at all or not fully proven. Indeed, we introduce the notion of clearing, which guarantees that only the recipient of a transaction is able to make the deposit, and we show that exculpability, which should prevent an honest user from being falsely accused, was wrong in most proofs of the previous constructions. Some can easily be repaired, but this is not the case for more complex settings such as constructions in the standard model. Consequently, we provide the first construction secure in the standard model, as a direct instantiation of our framework.
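The constrained PRFs the abstract builds on can be illustrated with the classic GGM tree construction, which supports prefix constraints: a constrained key for a prefix is the intermediate seed at that tree node, so it evaluates the PRF on exactly the inputs extending the prefix. This is a pedagogical sketch, not the paper's construction; HMAC-SHA256 stands in for the length-doubling PRG.

```python
import hmac, hashlib

# GGM tree PRF: F(key, x) walks a binary tree, expanding the seed with
# a PRG at each bit of the input.
def prg(seed, bit):
    # HMAC-SHA256 as a stand-in length-doubling PRG (illustrative choice).
    return hmac.new(seed, bytes([bit]), hashlib.sha256).digest()

def ggm_eval(seed, bits):
    for b in bits:
        seed = prg(seed, b)
    return seed

def constrain(master_key, prefix):
    """Constrained key for all inputs starting with `prefix`."""
    return ggm_eval(master_key, prefix)

def constrained_eval(ckey, prefix, bits):
    assert bits[:len(prefix)] == prefix, "input outside the constrained set"
    return ggm_eval(ckey, bits[len(prefix):])

master = b"\x00" * 32           # toy master key, for illustration only
x = [1, 0, 1, 1]                # a 4-bit input
ck = constrain(master, [1, 0])  # key constrained to the prefix 10*
assert constrained_eval(ck, [1, 0], x) == ggm_eval(master, x)
```

The assertion shows the defining property: on inputs inside the constrained set, the delegated key reproduces the master key's output, while inputs outside the prefix cannot even be evaluated with it.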
The OPS-SAT case: A data-centric competition for onboard satellite image classification
While novel artificial intelligence and machine learning techniques are evolving and disrupting established terrestrial technologies at an unprecedented speed, their adaptation onboard satellites is seemingly lagging. A major hindrance in this regard is the need for high-quality annotated data for training such systems, which makes the development process of machine learning solutions costly, time-consuming, and inefficient. This paper presents "the OPS-SAT case", a novel data-centric competition that seeks to address these challenges. The powerful computational capabilities of the European Space Agency's OPS-SAT satellite are utilized to showcase the design of machine learning systems for space by using only the small amount of available labeled data, relying on widely adopted and freely available open-source software. The generation of a suitable dataset, the design and evaluation of a public data-centric competition, and the results of an onboard experimental campaign using the competition winners' machine learning model directly on OPS-SAT are detailed. The results indicate that adoption of open standards and deployment of advanced data augmentation techniques can produce meaningful onboard results comparatively quickly, simplifying and expediting an otherwise prolonged development period.
Gabriele Meoni, Marcus Märtens, Dawa Derksen, Kenneth See, Toby Lightheart, Anthony Sécher, Arnaud Martin, David Rijlaarsdam, Vincenzo Fanizza, and Dario Izzo
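The data-augmentation idea mentioned above can be sketched minimally: each labeled image yields flipped and rotated variants, multiplying the effective training set without new annotations. This is an illustrative toy on random arrays; the competition's actual augmentation pipeline is not specified here.

```python
import numpy as np

# Minimal augmentation sketch: each image yields its flips and
# 90-degree rotations, growing the labeled set sixfold for free.
def augment(image):
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

rng = np.random.default_rng(0)
tiny_dataset = [rng.random((8, 8)) for _ in range(4)]  # 4 fake "patches"
augmented = [v for img in tiny_dataset for v in augment(img)]
print(len(tiny_dataset), "->", len(augmented))  # 4 -> 24
```

In a real pipeline each variant would keep its source image's label, which is what makes augmentation attractive when annotated data is scarce.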