
    S-PRAC: Fast Partial Packet Recovery with Network Coding in Very Noisy Wireless Channels

    Well-known error detection and correction solutions in wireless communications are either slow or incur high transmission overhead. Recently, notable solutions like PRAC and DAPRAC, implementing partial packet recovery with network coding, were able to address these problems. However, they perform slowly when there are many errors. We propose S-PRAC, a fast scheme for partial packet recovery designed specifically for very noisy wireless channels. S-PRAC improves on DAPRAC: it divides each packet into segments consisting of a fixed number of small RLNC-encoded symbols, then attaches a CRC code to each segment and one to each coded packet. Extensive simulations show that S-PRAC can detect and correct errors quickly and that it significantly outperforms DAPRAC when the number of errors is high.
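    A minimal sketch of the per-segment CRC framing described above, assuming CRC-32 and a 64-byte segment size purely for illustration (the paper does not specify these); the function and constant names are hypothetical.

# Hypothetical sketch of S-PRAC-style framing: split an RLNC-coded packet into
# fixed-size segments, attach a CRC to each segment and one to the whole packet.
# Segment size and CRC-32 are illustrative assumptions, not the paper's spec.
import struct
import zlib

SEGMENT_SIZE = 64  # bytes of coded symbols per segment (assumed)

def frame_packet(coded_payload: bytes) -> bytes:
    """Attach per-segment CRCs plus a packet-level CRC to a coded payload."""
    framed = bytearray()
    for i in range(0, len(coded_payload), SEGMENT_SIZE):
        segment = coded_payload[i:i + SEGMENT_SIZE]
        framed += segment + struct.pack(">I", zlib.crc32(segment))
    framed += struct.pack(">I", zlib.crc32(coded_payload))  # packet-level CRC
    return bytes(framed)

def find_corrupt_segments(framed: bytes) -> list[int]:
    """Return indices of segments whose CRC check fails at the receiver."""
    body = framed[:-4]                      # strip the packet-level CRC
    step = SEGMENT_SIZE + 4
    bad = []
    for idx in range(0, len(body), step):
        chunk = body[idx:idx + step]
        segment, crc = chunk[:-4], struct.unpack(">I", chunk[-4:])[0]
        if zlib.crc32(segment) != crc:
            bad.append(idx // step)
    return bad

    With per-segment checks, a receiver can keep intact segments and request additional coded symbols only for the segments that fail, which is the property that speeds up recovery on very noisy channels.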

    BAC-HAPPY mapping (BAP mapping): a new and efficient protocol for physical mapping

    Physical and linkage mapping underpin efforts to sequence and characterize the genomes of eukaryotic organisms by providing a skeleton framework for whole-genome assembly. Hitherto, linkage and physical “contig” maps were generated independently prior to merging. Here, we develop a new and straightforward method, BAC-HAPPY MAPPING (BAP mapping), that utilizes BAC library pools as a HAPPY mapping panel together with an Mbp-sized DNA panel to integrate the linkage and physical mapping efforts into one pipeline. Using Arabidopsis thaliana as an exemplar, a set of 40 Sequence Tagged Site (STS) markers spanning ~10% of chromosome 4 was simultaneously assembled onto a BAP map compiled using both a series of BAC pools, each comprising 0.7x genome coverage, and dilute (0.7x genome) samples of sheared genomic DNA. The resultant BAP map overcomes the need for polymorphic loci to separate genetic loci by recombination and allows physical mapping in segments of suppressed recombination that are difficult to analyze using traditional mapping techniques. Even virtual “BAC-HAPPY mapping” to convert BAC landing data into BAC linkage contigs is possible. Giang T. H. Vu, Paul H. Dear, Peter D. S. Caligari and Mike J. Wilkinson.
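    As a rough illustration of the mapping logic (not the authors' pipeline), the sketch below scores co-retention of STS markers across a panel of BAC pools or diluted genomic DNA samples: markers that are physically close tend to be retained or lost together, so a simple break frequency can order them without recombination. The panel matrix, marker names, and metric are assumptions made for illustration.

# Illustrative co-retention scoring for a HAPPY/BAC-pool style panel.
# rows = panel members (BAC pools or DNA aliquots), columns = STS markers
# (1 = PCR positive). All values and names below are placeholders.
import numpy as np

panel = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
markers = ["STS1", "STS2", "STS3", "STS4"]

def break_frequency(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of panel members in which one marker is retained and the other is not."""
    return float(np.mean(a != b))

for i in range(len(markers)):
    for j in range(i + 1, len(markers)):
        d = break_frequency(panel[:, i], panel[:, j])
        print(f"{markers[i]}-{markers[j]}: break frequency {d:.2f}")  # small = tightly linked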

    Use of a global model to understand speciated atmospheric mercury observations at five high-elevation sites

    © 2015 Author(s). Atmospheric mercury (Hg) measurements using the Tekran® analytical system from five high-elevation sites (1400-3200 m elevation), one in Asia and four in the western US, were compiled over multiple seasons and years, and these data were compared with the GEOS-Chem global model. Mercury data consisted of gaseous elemental Hg (GEM) and "reactive Hg" (RM), which is a combination of the gaseous oxidized (GOM) and particulate-bound (PBM, < 2.5 μm) fractions as measured by the Tekran® system. We used a subset of the observations by defining a "free tropospheric" (FT) data set, screened using measured water vapor mixing ratios. The oxidation scheme used by the GEOS-Chem model was varied between the standard run with Br oxidation and an alternative run with OH-O3 oxidation. We used this model-measurement comparison to help interpret the spatio-temporal trends in, and relationships among, the Hg species and ancillary parameters, to better understand the sources and fate of atmospheric RM. The most salient feature of the data across sites, seen more in summer than in spring, was that RM was negatively correlated with GEM and water vapor mixing ratios (WV) and positively correlated with ozone (O3), both in the standard model and the observations, indicating that RM was formed in dry upper-altitude air from the photo-oxidation of GEM. During a free tropospheric transport high-RM event observed sequentially at three sites from Oregon to Nevada, the slope of the RM/GEM relationship at the westernmost site was −1020 ± 209 pg ng⁻¹, indicating near-quantitative GEM-to-RM photochemical conversion. An improved correlation between the observations and the model was seen when the model was run with the OH-O3 oxidation scheme instead of the Br oxidation scheme. This simulation produced higher concentrations of RM and lower concentrations of GEM, especially at the desert sites in northwestern Nevada. This suggests that future work should investigate the effect of Br- and O3-initiated gas-phase oxidation occurring simultaneously in the atmosphere, as well as aqueous and heterogeneous reactions, to understand whether there are multiple global oxidants for GEM and hence multiple forms of RM in the atmosphere. If the chemical forms of RM were known, then the collection efficiency of the analytical method could be evaluated better. Taiwan Environmental Protection Administration.
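    A hedged sketch of the screening and regression steps the abstract implies: select free-tropospheric samples with a water-vapor cutoff, then regress RM on GEM to obtain the slope and correlation. The column names and cutoff value are illustrative assumptions, not values from the study.

# Hypothetical FT screening and RM/GEM regression; column names and the
# water-vapor cutoff are assumptions for illustration only.
import numpy as np
import pandas as pd

def rm_gem_slope(df: pd.DataFrame, wv_cutoff_ppm: float = 3000.0):
    """Return slope, intercept, and correlation of RM (pg m-3) vs GEM (ng m-3) in FT air."""
    ft = df[df["water_vapor_ppm"] < wv_cutoff_ppm]           # free-tropospheric screening (assumed cutoff)
    slope, intercept = np.polyfit(ft["GEM_ng_m3"], ft["RM_pg_m3"], 1)
    r = np.corrcoef(ft["GEM_ng_m3"], ft["RM_pg_m3"])[0, 1]   # the abstract reports a negative correlation
    return slope, intercept, r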

    Comparative characterization of a wild type and transmembrane domain-deleted fatty acid amide hydrolase: identification of the transmembrane domain as a site for oligomerization

    Fatty acid amide hydrolase (FAAH) is an integral membrane protein responsible for the hydrolysis of a number of primary and secondary fatty acid amides, including the neuromodulatory compounds anandamide and oleamide. Analysis of FAAH's primary sequence reveals the presence of a single predicted transmembrane domain at the extreme N-terminus of the enzyme. A mutant form of the rat FAAH protein lacking this N-terminal transmembrane domain (ΔTM-FAAH) was generated and, like wild-type FAAH (WT-FAAH), was found to be tightly associated with membranes when expressed in COS-7 cells. Recombinant forms of WT- and ΔTM-FAAH expressed and purified from Escherichia coli exhibited essentially identical enzymatic properties, which were also similar to those of the native enzyme from rat liver. Analysis of the oligomerization states of WT- and ΔTM-FAAH by chemical cross-linking, sedimentation velocity analytical ultracentrifugation, and size exclusion chromatography indicated that both enzymes were oligomeric when membrane-bound and after solubilization. However, WT-FAAH consistently behaved as a larger oligomer than ΔTM-FAAH. Additionally, SDS-PAGE analysis of the recombinant proteins identified the presence of SDS-resistant oligomers for WT-FAAH, but not for ΔTM-FAAH. Self-association through FAAH's transmembrane domain was further demonstrated by a FAAH transmembrane domain-GST fusion protein, which formed SDS-resistant dimers and large oligomeric assemblies in solution.

    High spatial resolution myocardial perfusion cardiac magnetic resonance for the detection of coronary artery disease

    To evaluate the feasibility and diagnostic performance of high spatial resolution myocardial perfusion cardiac magnetic resonance (perfusion-CMR). Methods and results: Fifty-four patients underwent adenosine stress perfusion-CMR. An in-plane spatial resolution of 1.4 × 1.4 mm² was achieved by using 5× k-space and time sensitivity encoding (k-t SENSE). Perfusion was visually graded for 16 left ventricular and two right ventricular (RV) segments on a scale from 0 = normal to 3 = abnormal, yielding a perfusion score of 0-54. Diagnostic accuracy of the perfusion score for detecting coronary artery stenosis of >50% on quantitative coronary angiography was determined. Sources and extent of image artefacts were documented. Two studies (4%) were non-diagnostic because of k-t SENSE-related and breathing artefacts. Endocardial dark rim artefacts, if present, were small (average width 1.6 mm). Analysis by receiver-operating characteristics yielded an area under the curve for the detection of coronary stenosis of 0.85 [95% confidence interval (CI) 0.75-0.95] for all patients, and 0.82 (95% CI 0.65-0.94) and 0.87 (95% CI 0.75-0.99) for patients with single- and multi-vessel disease, respectively. Seventy-four of 102 (72%) RV segments could be analysed. Conclusion: High spatial resolution perfusion-CMR is feasible in a clinical population, yields high accuracy in detecting single- and multi-vessel coronary artery disease, minimizes artefacts, and may permit the assessment of RV perfusion.
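    A minimal sketch of the reported accuracy analysis: the summed perfusion score (0-54) is tested against quantitative coronary angiography (>50% stenosis) with receiver-operating-characteristic analysis. The arrays below are placeholder data, not study values.

# ROC analysis of a visual perfusion score against an angiographic reference;
# the data here are placeholders for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

has_stenosis = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # >50% stenosis on angiography (placeholder)
perfusion_score = np.array([18, 3, 25, 12, 5, 1, 30, 7])   # summed visual score, 0 = normal (placeholder)

auc = roc_auc_score(has_stenosis, perfusion_score)
fpr, tpr, thresholds = roc_curve(has_stenosis, perfusion_score)
print(f"AUC = {auc:.2f}")  # the study reports 0.85 (95% CI 0.75-0.95) for all patients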

    Book Reviews

    With the observation of high-energy astrophysical neutrinos by the IceCube Neutrino Observatory, interest has risen in models of PeV-mass decaying dark matter particles to explain the observed flux. We present two dedicated experimental analyses to test this hypothesis. One analysis uses 6 years of IceCube data focusing on muon neutrino ‘track’ events from the Northern Hemisphere, while the second analysis uses 2 years of ‘cascade’ events from the full sky. Known background components and the hypothetical flux from unstable dark matter are fitted to the experimental data. Since no significant excess is observed in either analysis, lower limits on the lifetime of dark matter particles are derived: we obtain the strongest constraint to date, excluding lifetimes shorter than 10^28 s at 90% CL for dark matter masses above 10 TeV.
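    A schematic illustration (not the IceCube analysis code) of how a signal upper limit becomes a lifetime lower limit: the decay flux, and hence the expected event count, scales as 1/τ, so the 90% CL event limit rescales a reference lifetime. All numbers and names below are placeholders.

# Toy conversion from an event upper limit to a dark-matter lifetime lower limit.
def lifetime_lower_limit(tau_reference_s: float,
                         expected_signal_at_reference: float,
                         signal_upper_limit_90cl: float) -> float:
    """Scale a reference lifetime by the ratio of predicted to allowed signal events."""
    # Expected events scale as 1/tau, so tau_limit = tau_ref * (N_ref / N_90CL).
    return tau_reference_s * expected_signal_at_reference / signal_upper_limit_90cl

# Example: 120 events predicted for tau = 1e27 s, but the data allow at most 10 at 90% CL.
print(f"tau > {lifetime_lower_limit(1e27, 120.0, 10.0):.1e} s at 90% CL")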

    The IceCube Neutrino Observatory: Instrumentation and Online Systems

    The IceCube Neutrino Observatory is a cubic-kilometer-scale high-energy neutrino detector built into the ice at the South Pole. Construction of IceCube, the largest neutrino detector built to date, was completed in 2011 and enabled the discovery of high-energy astrophysical neutrinos. We describe here the design, production, and calibration of the IceCube digital optical module (DOM), the cable systems, computing hardware, and our methodology for drilling and deployment. We also describe the online triggering and data filtering systems that select candidate neutrino and cosmic ray events for analysis. Due to a rigorous pre-deployment protocol, 98.4% of the DOMs in the deep ice are operating and collecting data. IceCube routinely achieves a detector uptime of 99% by emphasizing software stability and monitoring. Detector operations have been stable since construction was completed, and the detector is expected to operate at least until the end of the next decade. Comment: 83 pages, 50 figures; updated with minor changes from journal review and proofing.
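    To make the idea of an online multiplicity trigger concrete, here is a toy sliding-window coincidence sketch; the window length, hit threshold, and function names are illustrative assumptions and do not reflect IceCube's actual trigger configuration.

# Toy multiplicity trigger: flag an event candidate whenever enough optical-module
# hits fall inside a time window. Parameters below are assumed, not IceCube's.
from collections import deque

WINDOW_NS = 5000   # coincidence window in nanoseconds (assumed)
MIN_HITS = 8       # minimum hits inside the window to trigger (assumed)

def find_triggers(hit_times_ns: list[float]) -> list[float]:
    """Return the times at which at least MIN_HITS hits fall within WINDOW_NS."""
    triggers = []
    window: deque[float] = deque()
    for t in sorted(hit_times_ns):
        window.append(t)
        while window and t - window[0] > WINDOW_NS:
            window.popleft()          # drop hits that left the window
        if len(window) >= MIN_HITS:
            triggers.append(t)
    return triggers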