Multi-COBS: A Novel Algorithm for Byte Stuffing at High Throughput
Most digital communication systems use framing methods to break a data stream into packets, and a popular practice is to denote the frame boundaries with a reserved symbol. This end-of-frame (EOF) marker must be removed from the packet content in a reversible manner. Many strategies have been devised to achieve this goal, such as the bit and byte stuffing processes employed by High-Level Data Link Control (HDLC) and the Point-to-Point Protocol (PPP), or Consistent Overhead Byte Stuffing (COBS). These bit and byte stuffing algorithms remove the reserved EOF marker from the packet payload and replace it with extra information that can be used to undo the action later. The amount of data added is called overhead and is a figure of merit for such algorithms, together with the encoding and decoding speed. This paper presents Multi-COBS, a new byte stuffing algorithm. Multi-COBS encodes and decodes concurrently, yielding a performance improvement by a factor of four or eight on common word-based digital architectures while delivering average and worst-case overhead equivalent to the state of the art. On the reference 28-nanometer field programmable gate array (FPGA, Artix-7), Multi-COBS achieves a throughput of 6.6 Gbps, compared with 1.7 Gbps for COBS. Thanks to its parallel processing capability, Multi-COBS is well suited to digital systems built in programmable logic as well as to modern computers.
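For reference, the classic single-stream COBS scheme that Multi-COBS parallelizes works as follows: the encoder removes every 0x00 byte and prefixes each run of non-zero bytes with a code byte giving the distance to the next zero, so that 0x00 can serve unambiguously as the frame delimiter. A minimal Python sketch of this baseline algorithm (the parallel Multi-COBS scheme itself is the paper's contribution and is not reproduced here):

    def cobs_encode(data: bytes) -> bytes:
        # Replace each 0x00 with a code byte giving the offset to the next zero.
        out, block = bytearray(), bytearray()
        for b in data:
            if b == 0:
                out.append(len(block) + 1)   # code byte: run length + 1
                out += block
                block.clear()
            else:
                block.append(b)
                if len(block) == 254:        # full run: code 0xFF, no implicit zero
                    out.append(255)
                    out += block
                    block.clear()
        out.append(len(block) + 1)           # final (possibly empty) run
        out += block
        return bytes(out)

    def cobs_decode(enc: bytes) -> bytes:
        out, i = bytearray(), 0
        while i < len(enc):
            code = enc[i]
            out += enc[i + 1:i + code]       # copy the non-zero run
            i += code
            if code < 255 and i < len(enc):  # code < 0xFF encodes one zero
                out.append(0)
        return bytes(out)

    assert cobs_decode(cobs_encode(b"\x11\x22\x00\x33")) == b"\x11\x22\x00\x33"

In classic COBS the worst case adds one code byte per 254 payload bytes (about 0.4%); this bounded overhead is the figure of merit the abstract refers to.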
The impact of methodology on the reproducibility and rigor of DNA methylation data
Epigenetic modifications are crucial for normal development and are implicated in disease pathogenesis. While epigenetics continues to be a burgeoning research area in neuroscience, unaddressed issues related to data reproducibility across laboratories remain. Separating meaningful experimental changes from background variability is a challenge in epigenomic studies. Here we show that seemingly minor experimental variations, even under normal baseline conditions, can have a significant impact on epigenome outcome measures and data interpretation. We examined genome-wide DNA methylation and gene expression profiles of hippocampal tissues from wild-type rats housed in three independent laboratories using nearly identical conditions. Reduced-representation bisulfite sequencing and RNA-seq identified 3852 differentially methylated and 1075 differentially expressed genes between laboratories, respectively, even in the absence of experimental intervention. Difficult-to-match factors such as animal vendors and a subset of husbandry and tissue extraction procedures produced quantifiable variation between wild-type animals across the three laboratories. These findings are particularly meaningful for neurological studies in animal models, in which baseline parameters between experimental groups are difficult to control. To enhance scientific rigor, we conclude that strict adherence to protocols is necessary for the execution and interpretation of epigenetic studies, and that protocol-sensitive epigenetic changes among naive animals may confound experimental results.
Photon counting with photon number resolution through superconducting nanowires coupled to a multi-channel TDC in FPGA
The paper presents a system for measuring photon statistics and photon timing in the few-photon regime, down to the single-photon level. The measurement system is based on superconducting nanowire single-photon detectors and a time-to-digital converter (TDC) implemented in a programmable device. The combination of these devices gives the system high performance in terms of resolution and adaptability to the actual experimental conditions. As an application, we present the measurement of photon statistics for coherent light states. In this measurement, we use single-photon correlations up to 8th order to reconstruct with high fidelity the statistics of a coherent state with an average photon number of up to 4. The processing is performed by a tapped-delay-line TDC architecture, implemented in a field programmable gate array and specifically designed for performance optimization in multi-channel use, which also hosts an asynchronous correlated digital counter.
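As a rough illustration of the statistics being reconstructed, the following toy Monte-Carlo (all parameter values hypothetical, channel-splitting effects ignored) shows how a detector bank resolving up to 8 photons per pulse can sample the Poissonian distribution of a coherent state with mean photon number around 4; binomial loss only rescales the mean:

    import numpy as np
    from math import exp, factorial

    rng = np.random.default_rng(0)
    mu, eta, pulses = 4.0, 0.8, 200_000    # mean photon number, efficiency, pulse count (assumed)

    photons = rng.poisson(mu, pulses)      # a coherent state has Poissonian photon statistics
    detected = rng.binomial(photons, eta)  # each photon detected independently with prob. eta
    clicks = np.minimum(detected, 8)       # 8 channels resolve at most 8 photons per pulse

    measured = np.bincount(clicks, minlength=9) / pulses
    # Binomial loss maps a Poissonian onto a Poissonian with mean eta*mu;
    # the top bin collects the unresolved tail n >= 8.
    expected = [exp(-eta * mu) * (eta * mu) ** n / factorial(n) for n in range(8)]
    expected.append(1.0 - sum(expected))
    for n in range(9):
        print(n, f"{measured[n]:.4f}", f"{expected[n]:.4f}")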
Observation of $J/\psi p$ resonances consistent with pentaquark states in $\Lambda_b^0\to J/\psi K^- p$ decays
Observations of exotic structures in the $J/\psi p$ channel, which we refer to as charmonium-pentaquark states, in $\Lambda_b^0\to J/\psi K^- p$ decays are presented. The data sample corresponds to an integrated luminosity of 3 fb$^{-1}$ acquired with the LHCb detector from 7 and 8 TeV $pp$ collisions. An amplitude analysis is performed on the three-body final state that reproduces the two-body mass and angular distributions. To obtain a satisfactory fit of the structures seen in the $J/\psi p$ mass spectrum, it is necessary to include two Breit-Wigner amplitudes that each describe a resonant state. The significance of each of these resonances is more than 9 standard deviations. One has a mass of $4380\pm 8\pm 29$ MeV and a width of $205\pm 18\pm 86$ MeV, while the second is narrower, with a mass of $4449.8\pm 1.7\pm 2.5$ MeV and a width of $39\pm 5\pm 19$ MeV. The preferred $J^P$ assignments are of opposite parity, with one state having spin 3/2 and the other 5/2.
Comment: 48 pages, 18 figures including the supplementary material; v2 after referee's comments, now 19 figures.
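For context, the Breit-Wigner amplitudes mentioned above take the standard relativistic form (the textbook expression; the paper's full amplitude model with spin and angular structure is not reproduced here), \begin{align*} BW(m \mid M_0, \Gamma_0) &= \frac{1}{M_0^2 - m^2 - i M_0 \Gamma(m)}, \end{align*} where $m$ is the $J/\psi p$ invariant mass and the fit parameters $M_0$ and $\Gamma_0$ are the mass and width quoted above, with $\Gamma(m)$ a mass-dependent width that reduces to $\Gamma_0$ at $m = M_0$.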
Measurement of the branching fraction ratio $\mathcal{B}(B_c^+\to\psi(2S)\pi^+)/\mathcal{B}(B_c^+\to J/\psi\pi^+)$
Using $pp$ collision data collected by LHCb at center-of-mass energies $\sqrt{s} = 7$ TeV and 8 TeV, corresponding to an integrated luminosity of 3 fb$^{-1}$, the ratio of the branching fraction of the $B_c^+\to\psi(2S)\pi^+$ decay relative to that of the $B_c^+\to J/\psi\pi^+$ decay is measured to be $0.268 \pm 0.032\,({\rm stat}) \pm 0.007\,({\rm syst}) \pm 0.006\,({\rm BF})$. The first uncertainty is statistical, the second is systematic, and the third is due to the uncertainties on the branching fractions of the $\psi(2S)\to\mu^+\mu^-$ and $J/\psi\to\mu^+\mu^-$ decays. This measurement is consistent with the previous LHCb result, and the statistical uncertainty is halved.
Comment: 17 pages including author list, 2 figures.
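As a quick consistency check, the three quoted uncertainties combine in quadrature to \begin{align*} \sigma_{\rm tot} = \sqrt{0.032^2 + 0.007^2 + 0.006^2} \approx 0.033, \end{align*} i.e. a total relative uncertainty of about 12\% on the measured ratio, dominated by the statistical component.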
Measurement of the mass and lifetime of the $\Omega_b^-$ baryon
A proton-proton collision data sample, corresponding to an integrated luminosity of 3 fb$^{-1}$ collected by LHCb at $\sqrt{s} = 7$ and 8 TeV, is used to reconstruct $\Omega_b^-\to\Omega_c^0\pi^-$, $\Omega_c^0\to pK^-K^-\pi^+$ decays. Using the $\Xi_b^-\to\Xi_c^0\pi^-$, $\Xi_c^0\to pK^-K^-\pi^+$ decay mode for calibration, the lifetime ratio and absolute lifetime of the $\Omega_b^-$ baryon are measured to be \begin{align*} \frac{\tau_{\Omega_b^-}}{\tau_{\Xi_b^-}} &= 1.11\pm0.16\pm0.03, \\ \tau_{\Omega_b^-} &= 1.78\pm0.26\pm0.05\pm0.06~{\rm ps}, \end{align*} where the uncertainties are statistical, systematic and from the calibration mode (for $\tau_{\Omega_b^-}$ only). A measurement is also made of the mass difference, $m_{\Omega_b^-}-m_{\Xi_b^-}$, and the corresponding $\Omega_b^-$ mass, which yields \begin{align*} m_{\Omega_b^-}-m_{\Xi_b^-} &= 247.4\pm3.2\pm0.5~{\rm MeV}/c^2, \\ m_{\Omega_b^-} &= 6045.1\pm3.2\pm 0.5\pm0.6~{\rm MeV}/c^2. \end{align*} These results are consistent with previous measurements.
Comment: 11 pages, 5 figures. All figures and tables, along with any supplementary material and additional information, are available at https://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2016-008.htm
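The $\Omega_b^-$ mass above follows from adding the measured mass difference to the $\Xi_b^-$ mass; taking the previously measured value $m_{\Xi_b^-}\approx 5797.7~{\rm MeV}/c^2$ (an external input, reflected in the third uncertainty on $m_{\Omega_b^-}$), \begin{align*} m_{\Omega_b^-} = m_{\Xi_b^-} + (m_{\Omega_b^-}-m_{\Xi_b^-}) \approx 5797.7 + 247.4 = 6045.1~{\rm MeV}/c^2. \end{align*}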
Measurement of the ratio of branching fractions $\mathcal{B}(\bar{B}^0\to D^{*+}\tau^-\bar{\nu}_\tau)/\mathcal{B}(\bar{B}^0\to D^{*+}\mu^-\bar{\nu}_\mu)$
The branching fraction ratio $\mathcal{R}(D^*) \equiv \mathcal{B}(\bar{B}^0\to D^{*+}\tau^-\bar{\nu}_\tau)/\mathcal{B}(\bar{B}^0\to D^{*+}\mu^-\bar{\nu}_\mu)$ is measured using a sample of proton-proton collision data corresponding to 3.0 fb$^{-1}$ of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the decay mode $\tau^-\to\mu^-\bar{\nu}_\mu\nu_\tau$. The semitauonic decay is sensitive to contributions from non-Standard-Model particles that preferentially couple to the third generation of fermions, in particular Higgs-like charged scalars. A multidimensional fit to kinematic distributions of the candidate $\bar{B}^0\to D^{*+}\mu^-\bar{\nu}_\mu$ decays gives $\mathcal{R}(D^*) = 0.336 \pm 0.027\,({\rm stat}) \pm 0.030\,({\rm syst})$. This result, which is the first measurement of this quantity at a hadron collider, is 2.1 standard deviations larger than the value expected from lepton universality in the Standard Model.
Comment: 17 pages, 1 figure; v2 after referees' comments.
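The quoted significance can be reproduced from the Standard-Model expectation $\mathcal{R}(D^*)_{\rm SM}\approx 0.252$ taken from the literature (assumed here, with its small theoretical uncertainty neglected): \begin{align*} \frac{0.336 - 0.252}{\sqrt{0.027^2 + 0.030^2}} \approx \frac{0.084}{0.040} \approx 2.1~\text{standard deviations}. \end{align*}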
A new algorithm for identifying the flavour of $B_s^0$ mesons at LHCb
A new algorithm for the determination of the initial flavour of $B_s^0$ mesons is presented. The algorithm is based on two neural networks and exploits the $b$ hadron production mechanism at a hadron collider. The first network is trained to select charged kaons produced in association with the $B_s^0$ meson. The second network combines the kaon charges to assign the flavour and estimates the probability of a wrong assignment. The algorithm is calibrated using data corresponding to an integrated luminosity of 3 fb$^{-1}$ collected by the LHCb experiment in proton-proton collisions at 7 and 8 TeV centre-of-mass energies. The calibration is performed in two ways: by resolving the $B_s^0$–$\bar{B}_s^0$ flavour oscillations in $B_s^0\to D_s^-\pi^+$ decays, and by analysing flavour-specific $B_{s2}^{*}(5840)^0\to B^+K^-$
decays. The tagging power measured in $B_s^0\to D_s^-\pi^+$ decays is found to be $(1.80\pm0.19\,({\rm stat})\pm0.18\,({\rm syst}))\%$, which is an improvement of about 50\% compared to a similar algorithm previously used in the LHCb experiment.
Comment: All figures and tables, along with any supplementary material and
additional information, are available at
https://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2015-056.htm
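For context, the tagging power (effective tagging efficiency) quoted above is conventionally defined from the fraction of tagged candidates $\varepsilon_{\rm tag}$ and the mistag probability $\omega$ as \begin{align*} \varepsilon_{\rm eff} = \varepsilon_{\rm tag}\,(1 - 2\omega)^2, \end{align*} so a tagged sample of $N$ candidates carries the statistical power of $\varepsilon_{\rm eff}N$ perfectly tagged ones.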