LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems
Mutation testing is a well-studied method for increasing the quality of a
test suite. We designed LittleDarwin as a mutation testing framework able to
cope with large and complex Java software systems, while still being easily
extensible with new experimental components. LittleDarwin addresses two
existing problems in the domain of mutation testing: having a tool able to work
within an industrial setting, and yet, be open to extension for cutting edge
techniques provided by academia. LittleDarwin already offers higher-order
mutation, null type mutants, mutant sampling, manual mutation, and mutant
subsumption analysis. No other tool available today offers all of these
features while remaining able to work with typical industrial software systems.
Comment: Pre-proceedings of the 7th IPM International Conference on Fundamentals of Software Engineering
A multi-modal data resource for investigating topographic heterogeneity in patient-derived xenograft tumors.
Patient-derived xenografts (PDXs) are an essential pre-clinical resource for investigating tumor biology. However, cellular heterogeneity within and across PDX tumors can strongly impact the interpretation of PDX studies. Here, we generated a multi-modal, large-scale dataset to investigate PDX heterogeneity in metastatic colorectal cancer (CRC) across tumor models, spatial scales and genomic, transcriptomic, proteomic and imaging assay modalities. To showcase this dataset, we present analyses to assess sources of PDX variation, including anatomical orientation within the implanted tumor, mouse contribution, and differences between replicate PDX tumors. A unique aspect of our dataset is deep characterization of intra-tumor heterogeneity via immunofluorescence imaging, which enables investigation of variation across multiple spatial scales, from subcellular to whole tumor levels. Our study provides a benchmark data resource to investigate PDX models of metastatic CRC and serves as a template for future, quantitative investigations of spatial heterogeneity within and across PDX tumor models.
Using Bad Learners to find Good Configurations
Finding the optimally performing configuration of a software system for a
given setting is often challenging. Recent approaches address this challenge by
learning performance models based on a sample set of configurations. However,
building an accurate performance model can be very expensive (and is often
infeasible in practice). The central insight of this paper is that exact
performance values (e.g. the response time of a software system) are not
required to rank configurations and to identify the optimal one. As shown by
our experiments, models that are cheap to learn but inaccurate (with respect to
the difference between actual and predicted performance) can still be used to
rank configurations and hence find the optimal configuration. This novel
\emph{rank-based approach} allows us to significantly reduce the cost (in terms
of the number of sample configurations measured) as well as the time required
to build models. We evaluate our approach with 21 scenarios based on 9 software
systems and demonstrate that our approach is beneficial in 16 scenarios; for
the remaining 5 scenarios, an accurate model can be built by using very few
samples anyway, without the need for a rank-based approach.
Comment: 11 pages, 11 figures
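The central insight above is that absolute prediction error does not matter as long as the model preserves the ordering of configurations. A minimal sketch with made-up numbers (not the paper's actual learners or subject systems):

```python
# Sketch of the rank-based idea: a model whose absolute predictions are
# wildly wrong can still identify the best configuration, provided it
# preserves the *ordering* of performance values. All numbers are made up.

actual = {"cfg_a": 120.0, "cfg_b": 95.0, "cfg_c": 210.0}  # true response times (ms)

# A "bad" model: predictions carry a large systematic error,
# but remain monotone in the actual values.
predicted = {k: 7.3 * v + 50.0 for k, v in actual.items()}

best_actual = min(actual, key=actual.get)        # lower response time is better
best_predicted = min(predicted, key=predicted.get)
print(best_actual == best_predicted)  # True: the ranking survives the inaccuracy
```

Any strictly monotone distortion of the performance values leaves the argmin, and indeed the full ranking, unchanged, which is why cheap, inaccurate models can suffice for configuration optimization.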
PCR biases distort bacterial and archaeal community structure in pyrosequencing datasets
As 16S rRNA gene targeted massively parallel sequencing has become a common tool for microbial diversity investigations, numerous advances have been made to minimize the influence of sequencing and chimeric PCR artifacts through rigorous quality control measures. However, there has been little effort towards understanding the effect of multi-template PCR biases on microbial community structure. In this study, we used three bacterial and three archaeal mock communities consisting of, respectively, 33 bacterial and 24 archaeal 16S rRNA gene sequences combined in different proportions to compare the influences of (1) sequencing depth, (2) sequencing artifacts (sequencing errors and chimeric PCR artifacts), and (3) biases in multi-template PCR, towards the interpretation of community structure in pyrosequencing datasets. We also assessed the influence of each of these three variables on α- and β-diversity metrics that rely on the number of OTUs alone (richness) and those that include both membership and the relative abundance of detected OTUs (diversity). As part of this study, we redesigned bacterial and archaeal primer sets that target the V3–V5 region of the 16S rRNA gene, along with multiplexing barcodes, to permit simultaneous sequencing of PCR products from the two domains. We conclude that the benefits of deeper sequencing efforts extend beyond greater OTU detection and result in higher precision in β-diversity analyses by reducing the variability between replicate libraries, despite the presence of more sequencing artifacts. Additionally, spurious OTUs resulting from sequencing errors have a significant impact on richness or shared-richness based α- and β-diversity metrics, whereas metrics that utilize community structure (including both richness and relative abundance of OTUs) are minimally affected by spurious OTUs. 
However, the greatest obstacle to accurately evaluating community structure is the error in the estimated mean relative abundance of each detected OTU due to biases associated with multi-template PCR reactions.
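The contrast drawn above between richness-based and abundance-weighted metrics can be illustrated numerically. This is a toy sketch with invented OTU counts (not the study's mock-community data): spurious singleton OTUs double the observed richness but barely move the Shannon diversity index, which weights OTUs by relative abundance.

```python
import math

# Sketch: richness counts every OTU equally, so low-abundance spurious OTUs
# (e.g. from sequencing errors) inflate it; an abundance-weighted metric such
# as Shannon diversity is minimally affected. Counts below are made up.

def shannon(counts):
    """Shannon diversity index H = -sum(p * ln p) over nonzero OTU counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

true_community = [500, 300, 200]             # 3 real OTUs
with_spurious = true_community + [1, 1, 1]   # plus 3 spurious singleton OTUs

richness_inflation = len(with_spurious) / len(true_community)       # 2.0x
shannon_shift = abs(shannon(with_spurious) - shannon(true_community))
print(richness_inflation, round(shannon_shift, 3))  # richness doubles, H barely moves
```

The same asymmetry carries over to β-diversity: shared-richness metrics inherit the spurious-OTU inflation, while structure-aware metrics remain comparatively stable.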
A Time Truncated Moving Average Chart for the Weibull Distribution
A control chart for monitoring the number of failures is proposed with a moving average scheme, when the life of an item follows a Weibull distribution. A specified number of items are put on a time truncated life test and the number of failures is observed. The proposed control chart has been evaluated by the average run lengths (ARLs) under different parameter settings. The control constant and the test time multiplier are to be determined by considering the in-control ARL. It is observed that the proposed control chart is more efficient in detecting a shift in the process as compared with the existing time truncated control chart. © 2013 IEEE.
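The chart's two building blocks, the time-truncated Weibull failure probability and the moving average of subgroup failure counts, can be sketched as follows. All parameter values here are assumed for illustration and are not taken from the paper:

```python
import math

# Sketch of the chart's building blocks (parameter values are assumptions):
# n items go on test, truncated at time t0; under a Weibull(beta, eta)
# lifetime, each item fails by t0 with probability p = 1 - exp(-(t0/eta)**beta).
# The chart then monitors a moving average of subgroup failure counts.

beta, eta = 1.5, 100.0   # Weibull shape and scale (assumed)
t0 = 50.0                # truncation time of the life test (assumed)
n = 20                   # items per subgroup (assumed)

p_fail = 1.0 - math.exp(-((t0 / eta) ** beta))
expected_failures = n * p_fail  # in-control center of the failure-count chart

def moving_average(counts, w):
    """Moving average over the last w subgroup failure counts."""
    return [sum(counts[max(0, i - w + 1): i + 1]) / min(i + 1, w)
            for i in range(len(counts))]

observed = [6, 7, 5, 8, 12, 13]          # hypothetical failure counts
ma = moving_average(observed, w=3)
print(round(p_fail, 3), [round(m, 2) for m in ma])
```

A shift in the process shows up as the moving average drifting away from the in-control expectation; the control constant sets how far it may drift before signaling, and is tuned to hit a target in-control ARL.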
JUNO Conceptual Design Report
The Jiangmen Underground Neutrino Observatory (JUNO) is proposed to determine
the neutrino mass hierarchy using an underground liquid scintillator detector.
It is located 53 km away from both Yangjiang and Taishan Nuclear Power Plants
in Guangdong, China. The experimental hall, spanning more than 50 meters, is
under a granite mountain of over 700 m overburden. Within six years of running,
the detection of reactor antineutrinos can resolve the neutrino mass hierarchy
at a confidence level of 3-4$\sigma$, and determine the neutrino oscillation
parameters $\sin^2\theta_{12}$, $\Delta m^2_{21}$, and $|\Delta m^2_{ee}|$ to
an accuracy of better than 1%. The JUNO detector can also be used to study
terrestrial and extra-terrestrial neutrinos and new physics beyond the Standard
Model. The central detector contains 20,000 tons of liquid scintillator in an
acrylic sphere of 35 m in diameter. 17,000 508-mm diameter PMTs with high
quantum efficiency provide 75% optical coverage. The current choice of
the liquid scintillator is: linear alkyl benzene (LAB) as the solvent, plus PPO
as the scintillation fluor and a wavelength-shifter (Bis-MSB). The number of
detected photoelectrons per MeV is larger than 1,100 and the energy resolution
is expected to be 3% at 1 MeV. The calibration system is designed to deploy
multiple sources to cover the entire energy range of reactor antineutrinos, and
to achieve a full-volume position coverage inside the detector. The veto system
is used for muon detection, muon induced background study and reduction. It
consists of a Water Cherenkov detector and a Top Tracker system. The readout
system, the detector control system and the offline system ensure efficient and
stable data acquisition and processing.
Comment: 328 pages, 211 figures
JPL spacecraft sterilization technology program - A status report
Facility description and procedures for heat and ethylene oxide sterilization of spacecraft instrumentation, components, and materials.
Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions
Massive MIMO is a compelling wireless access concept that relies on the use
of an excess number of base-station antennas, relative to the number of active
terminals. This technology is a main component of 5G New Radio (NR) and
addresses all important requirements of future wireless standards: a great
capacity increase, the support of many simultaneous users, and improvement in
energy efficiency. Massive MIMO requires the simultaneous processing of signals
from many antenna chains, and computational operations on large matrices. The
complexity of the digital processing has been viewed as a fundamental obstacle
to the feasibility of Massive MIMO in the past. Recent advances on
system-algorithm-hardware co-design have led to extremely energy-efficient
implementations. These exploit opportunities in deeply-scaled silicon
technologies and perform partly distributed processing to cope with the
bottlenecks encountered in the interconnection of many signals. For example,
prototype ASIC implementations have demonstrated zero-forcing precoding in real
time at a 55 mW power consumption (20 MHz bandwidth, 128 antennas, multiplexing
of 8 terminals). Coarse and even error-prone digital processing in the antenna
paths permits a reduction of power consumption by a factor of 2 to 5. This article
summarizes the fundamental technical contributions to efficient digital signal
processing for Massive MIMO. The opportunities and constraints on operating on
low-complexity RF and analog hardware chains are clarified. It illustrates how
terminals can benefit from improved energy efficiency. The status of technology
and real-life prototypes is discussed. Open challenges and directions for future
research are suggested.
Comment: submitted to IEEE Transactions on Signal Processing
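The zero-forcing precoding mentioned above can be sketched in miniature. This toy uses a real-valued 2x2 channel for readability (a real Massive MIMO system uses complex matrices with far more antennas than users); the channel entries are invented. ZF chooses the precoder $W = H^H (H H^H)^{-1}$, which for a square invertible $H$ reduces to $H^{-1}$, so the effective channel $HW$ becomes the identity and inter-user interference vanishes.

```python
# Toy zero-forcing (ZF) precoding sketch with a real-valued 2x2 channel
# (real systems use complex matrices and many more antennas than users).
# For square invertible H, the ZF precoder W = H^H (H H^H)^{-1} = H^{-1},
# so the effective channel H @ W is the identity. Channel values are made up.

H = [[2.0, 1.0],
     [1.0, 3.0]]   # channel matrix: 2 users x 2 base-station antennas

det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
W = [[ H[1][1] / det, -H[0][1] / det],   # closed-form 2x2 inverse = ZF precoder
     [-H[1][0] / det,  H[0][0] / det]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

effective = matmul(H, W)
print([[round(x, 6) for x in row] for row in effective])  # ~ identity matrix
```

The ASIC results cited above come precisely from implementing this matrix inversion (at 128x8 scale, in the complex domain) cheaply enough, e.g. via approximate and partly distributed linear algebra, to fit a 55 mW real-time power budget.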