
    Development of European standards for evaluative reporting in forensic science: The gap between intentions and perceptions

    Criminal justice authorities of EU countries currently engage in dialogue and action to build a common area of justice and to help increase mutual trust in judicial systems across Europe. This includes, for example, the strengthening of procedural safeguards for citizens in criminal proceedings by promoting principles such as equality of arms. Improving the smooth functioning of judicial processes is also pursued through the work of expert groups in the field of forensic science, such as the working parties under the auspices of the European Network of Forensic Science Institutes (ENFSI). This network aims to share knowledge, exchange experiences and come to mutual agreements in matters concerning forensic science practice, among them the interpretation of results of forensic examinations. For example, through its Monopoly Programmes (financially supported by the European Commission), ENFSI has funded a series of projects that come under the general theme ‘Strengthening the Evaluation of Forensic Results across Europe’. Although these initiatives reflect a strong commitment to mutual understanding on general principles of forensic interpretation, the development of standards for evaluation and reporting, including roadmaps for implementation within the ENFSI community, is fraught with conceptual and practical hurdles. In particular, experience through consultations with forensic science practitioners shows that there is a considerable gap between the intentions of a harmonised view on principles of forensic interpretation and the way in which work towards such a common understanding is perceived in the community. In this paper, we review and discuss several recurrently raised concerns. We acknowledge practical constraints such as limited resources for training and education, but we also argue that addressing topics in forensic interpretation now is of vital importance, because forensic science continues to be challenged by proactive participants in the legal process who tend to become more demanding and less forgiving.

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of selecting between the data being consistent with instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data from ground-based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criterion are used for model selection. Future space-based detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources are the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz Bayesian Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches.
    Comment: 11 pages, 6 figures
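    To make two of the cheaper evidence approximations mentioned in this abstract concrete, the sketch below shows how BIC-based and Laplace-based estimates of a log Bayes factor could be computed for the nested question of whether a frequency-evolution parameter is warranted. This is an illustrative sketch only: the function names, toy numbers, and the assumption that maximised log-likelihoods are already in hand are ours, not the paper's.

```python
import numpy as np

def log_evidence_bic(max_log_likelihood, n_params, n_data):
    """Schwarz/BIC approximation to the log model evidence."""
    return max_log_likelihood - 0.5 * n_params * np.log(n_data)

def log_evidence_laplace(max_log_posterior, neg_hessian_at_peak):
    """Laplace approximation: Gaussian expansion of the posterior about its peak.

    neg_hessian_at_peak is the (positive-definite) Hessian of minus the
    log-posterior, evaluated at the maximum a posteriori point.
    """
    n_params = neg_hessian_at_peak.shape[0]
    _, log_det = np.linalg.slogdet(neg_hessian_at_peak)
    return max_log_posterior + 0.5 * n_params * np.log(2.0 * np.pi) - 0.5 * log_det

# Toy numbers: maximised log-likelihoods for a binary modelled without
# (7 parameters) and with (8 parameters) frequency evolution, from N samples.
n_data = 2 ** 20
logL_static, logL_evolving = 5230.0, 5241.5

ln_bayes_factor = (log_evidence_bic(logL_evolving, 8, n_data)
                   - log_evidence_bic(logL_static, 7, n_data))
print(f"BIC estimate of ln(Bayes factor): {ln_bayes_factor:.1f}")
```

    Both formulas approximate the log evidence: BIC penalises each extra parameter by (ln N)/2, while the Laplace version expands the posterior to a Gaussian around its peak using the curvature at the maximum.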

    The application of rhetorical structure theory to interactive news program generation from digital archives

    Rhetorical structure theory (RST) provides a model of textual function based upon rhetoric. Initially developed as a model of text coherence, RST has been used extensively in text generation research, and has more recently been proposed as a basis for multimedia presentation generation. This paper investigates the use of RST for generating video presentations having a rhetorical form, using models of the rhetorical roles of video components together with rules for selecting components for presentation on the basis of their rhetorical functions. An RST model can provide a predefined link structure that gives viewers options for obtaining and dynamically modifying rhetorically coherent video presentations from video archives and databases. The use of an RST analysis for interactive presentation generation may provide a more powerful rhetorical device than conventional linear video presentation. Conversely, making alternative RST analyses of the same video data available to users can have the effect of encouraging closer and more independent viewer analysis of the material, and of discouraging viewers from taking any particular rhetorical presentation at face value.
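    Purely as an illustration of the component-selection idea described in this abstract (the data model and names below are our assumptions, not the paper's), a selection rule could look like this: segments are annotated with rhetorical roles, and a presentation is assembled by filtering on the roles requested, within a time budget.

```python
from dataclasses import dataclass

@dataclass
class VideoSegment:
    clip_id: str
    role: str          # e.g. "claim", "evidence", "background", "elaboration"
    duration_s: float

def select_segments(segments, wanted_roles, max_duration_s):
    """Pick segments whose rhetorical role matches the request, within a time budget."""
    chosen, total = [], 0.0
    for seg in segments:
        if seg.role in wanted_roles and total + seg.duration_s <= max_duration_s:
            chosen.append(seg)
            total += seg.duration_s
    return chosen

archive = [
    VideoSegment("intro-01", "background", 20.0),
    VideoSegment("stmt-02", "claim", 15.0),
    VideoSegment("footage-03", "evidence", 40.0),
    VideoSegment("expert-04", "elaboration", 35.0),
]

# A viewer asking for a short "claim + evidence" presentation:
print(select_segments(archive, {"claim", "evidence"}, max_duration_s=60.0))
```

    An interactive system would expose these role choices, and alternative RST analyses of the same material, to the viewer rather than hard-coding them.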

    Constraining Antimatter Domains in the Early Universe with Big Bang Nucleosynthesis

    We consider the effect of a small-scale matter-antimatter domain structure on big bang nucleosynthesis and place upper limits on the amount of antimatter in the early universe. For small domains, which annihilate before nucleosynthesis, this limit comes from underproduction of He-4. For larger domains, the limit comes from He-3 overproduction. Most of the He-3 produced in antiproton-helium annihilation is itself annihilated; the main source of He-3 is photodisintegration of He-4 by the electromagnetic cascades initiated by the annihilation.
    Comment: 4 pages, 2 figures, RevTeX (slightly shortened)

    Mechanisms underlying dioxygen reduction in laccases. Structural and modelling studies focusing on proton transfer

    Background: Laccases are enzymes that couple the oxidation of substrates with the reduction of dioxygen to water. They are the simplest members of the multi-copper oxidases and contain at least two types of copper centres: a mononuclear T1 centre and a trinuclear centre that includes two T3 and one T2 copper ions. Substrate oxidation takes place at the mononuclear centre, whereas reduction of oxygen to water occurs at the trinuclear centre.
    Results: In this study, the CotA laccase from Bacillus subtilis was used as a model to understand the mechanisms taking place at the molecular level, with a focus on the trinuclear centre. The structures of the holo-protein and of the oxidised form of the apo-protein, which has previously been reconstituted in vitro with Cu(I), have been determined. The former has a dioxygen moiety between the T3 coppers, while the latter has a monoatomic oxygen, here interpreted as a hydroxyl ion. The UV/visible spectra of these two forms have been analysed in the crystals and compared with the data obtained in solution. Theoretical calculations on these and other structures of CotA were used to identify groups that may be responsible for channelling the protons that are needed for reduction of dioxygen to water.
    Conclusions: These results present evidence that Glu 498 is the only proton-active group in the vicinity of the trinuclear centre. This strongly suggests that this residue may be responsible for channelling the protons needed for the reduction. These results are compared with other data available for these enzymes, highlighting similarities and differences within laccases and multicopper oxidases.

    Hybrid Session Verification through Endpoint API Generation

    © Springer-Verlag Berlin Heidelberg 2016. This paper proposes a new hybrid session verification methodology for applying session types directly to mainstream languages, based on generating protocol-specific endpoint APIs from multiparty session types. The API generation promotes static type checking of the behavioural aspect of the source protocol by mapping the state space of an endpoint in the protocol to a family of channel types in the target language. This is supplemented by very light run-time checks in the generated API that enforce a linear usage discipline on instances of the channel types. The resulting hybrid verification guarantees the absence of protocol violation errors during the execution of the session. We implement our methodology for Java as an extension to the Scribble framework, and use it to specify and implement compliant clients and servers for real-world protocols such as HTTP and SMTP.
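    The paper's toolchain generates Java APIs via Scribble; purely to illustrate the two ingredients named in this abstract (one channel type per endpoint state, plus a run-time linearity check), here is a minimal, hypothetical Python sketch with invented names and a toy two-step protocol. It is not the authors' API.

```python
import socket

class LinearChannel:
    """Base class: each state object may be used for at most one transition."""
    def __init__(self, sock):
        self._sock = sock
        self._used = False

    def _consume(self):
        if self._used:
            raise RuntimeError("protocol violation: channel state reused")
        self._used = True


# States of a toy client protocol:  Start --request--> AwaitReply --reply--> End
class Start(LinearChannel):
    def send_request(self, payload: str) -> "AwaitReply":
        self._consume()
        self._sock.send(payload.encode())
        return AwaitReply(self._sock)

class AwaitReply(LinearChannel):
    def receive_reply(self) -> "End":
        self._consume()
        print(self._sock.recv(4096).decode())
        return End(self._sock)

class End(LinearChannel):
    pass


# Tiny local demo using a socket pair in place of a real peer.
a, b = socket.socketpair()
awaiting = Start(a).send_request("hello")   # only send_request is available in Start
b.send(b"world")                            # the "peer" replies
awaiting.receive_reply()                    # prints "world"; reusing `awaiting` would raise
```

    Each method consumes the current state object and returns the next one, so only the protocol-permitted operation is reachable at each step; in the Java setting described above, this shape is what allows the behavioural aspect to be checked statically, with linearity left to the light run-time check.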

    Production and dilution of gravitinos by modulus decay

    We study the cosmological consequences of generic scalar fields, like moduli, which decay only through gravitationally suppressed interactions. We consider a new production mechanism of gravitinos from moduli decay, which might be more effective than previously known mechanisms, and calculate the final gravitino-to-entropy ratio to compare with the constraints imposed by successful big bang nucleosynthesis (BBN), among others, taking possible hadronic decays of gravitinos into account. We find that a modulus mass smaller than $\sim 10^4$ TeV is excluded. On the other hand, inflation models with high reheating temperatures $T_{R,\rm inf} \sim 10^{16}$ GeV can be compatible with BBN thanks to the late-time entropy production from the moduli decay if model parameters are appropriately chosen.
    Comment: 18 pages, 4 figures, to appear in Phys. Rev.

    Lithium-6: A Probe of the Early Universe

    I consider the synthesis of 6Li due to the decay of relic particles, such as gravitinos or moduli, after the epoch of Big Bang Nucleosynthesis. The synthesized 6Li/H ratio may be compared to 6Li/H in metal-poor stars, which, in the absence of stellar depletion of 6Li, yields significantly stronger constraints on relic particle densities than the usual consideration of overproduction of 3He. Production of 6Li during such an era of non-thermal nucleosynthesis may also be regarded as a possible explanation for the relatively high 6Li/H ratios observed in metal-poor halo stars.
    Comment: final version, Physical Review Letters, additional figure giving limits on relic decaying particles

    Radiative Decay of a Long-Lived Particle and Big-Bang Nucleosynthesis

    The effects of radiatively decaying, long-lived particles on big-bang nucleosynthesis (BBN) are discussed. If high-energy photons are emitted after BBN, they may change the abundances of the light elements through photodissociation processes, which may result in a significant discrepancy between the BBN theory and observation. We calculate the abundances of the light elements, including the effects of photodissociation induced by a radiatively decaying particle, but neglecting the hadronic branching ratio. Using these calculated abundances, we derive a constraint on such particles by comparing our theoretical results with observations. Taking into account the recent controversies regarding the observations of the light-element abundances, we derive constraints for various combinations of the measurements. We also discuss several models which predict such radiatively decaying particles, and we derive constraints on such models.
    Comment: Published version in Phys. Rev. D. Typos in figure captions corrected

    An excess power statistic for detection of burst sources of gravitational radiation

    We examine the properties of an excess power method to detect gravitational waves in interferometric detector data. This method is designed to detect short-duration (< 0.5 s) burst signals of unknown waveform, such as those from supernovae or black hole mergers. If only the bursts' duration and frequency band are known, the method is an optimal detection strategy in both Bayesian and frequentist senses. It consists of summing the data power over the known time interval and frequency band of the burst. If the detector noise is stationary and Gaussian, this sum is distributed as a chi-squared (non-central chi-squared) deviate in the absence (presence) of a signal. One can use these distributions to compute frequentist detection thresholds for the measured power. We derive the method from Bayesian analyses and show how to compute Bayesian thresholds. More generically, when only upper and/or lower bounds on the bursts' duration and frequency band are known, one must search for excess power in all concordant durations and bands. Two search schemes are presented and their computational efficiencies are compared. We find that, given reasonable constraints on the effective duration and bandwidth of signals, the excess power search can be performed on a single workstation. Furthermore, the method can be almost as efficient as matched filtering when a large template bank is required. Finally, we derive generalizations of the method to a network of several interferometers under the assumption of Gaussian noise.
    Comment: 22 pages, 6 figures
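    The following is a minimal numerical sketch of the core statistic described in this abstract, under the simplifying assumption of white, unit-variance Gaussian noise (so no PSD whitening is shown); the function names, tile parameters, and normalisation choices are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def excess_power(strain, sample_rate, t_start, duration, f_low, f_high):
    """Sum of power in the tile [t_start, t_start + duration] x [f_low, f_high]."""
    seg = strain[int(t_start * sample_rate): int((t_start + duration) * sample_rate)]
    spectrum = np.fft.rfft(seg)
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / sample_rate)
    band = (freqs >= f_low) & (freqs < f_high)
    # For unit-variance white Gaussian noise, 2|X_k|^2 / N for each interior
    # frequency bin is approximately chi-squared with 2 degrees of freedom;
    # coloured noise would first be whitened by the estimated one-sided PSD.
    power = 2.0 * np.abs(spectrum[band]) ** 2 / len(seg)
    return power.sum(), 2 * band.sum()     # statistic and its degrees of freedom

def detection_threshold(dof, false_alarm_prob):
    """Power above this value is flagged at the given single-tile false-alarm probability."""
    return chi2.ppf(1.0 - false_alarm_prob, dof)

# Toy example: pure Gaussian noise, a 0.25 s tile between 100 and 300 Hz.
rng = np.random.default_rng(0)
data = rng.standard_normal(4096 * 4)        # 4 s of unit-variance noise at 4096 Hz
stat, dof = excess_power(data, 4096, t_start=1.0, duration=0.25, f_low=100, f_high=300)
print(stat, dof, detection_threshold(dof, false_alarm_prob=1e-3))
```

    Under noise alone the statistic is approximately chi-squared with twice the number of frequency bins in the tile as degrees of freedom, which is what the frequentist threshold above exploits; a signal shifts it to a non-central chi-squared, as the abstract states.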