Fast and flexible selection with a single switch
Selection methods that require only a single-switch input, such as a button
click or blink, are potentially useful for individuals with motor impairments,
mobile technology users, and individuals wishing to transmit information
securely. We present a single-switch selection method, "Nomon," that is general
and efficient. Existing single-switch selection methods require selectable
options to be arranged in ways that limit potential applications. By contrast,
traditional operating systems, web browsers, and free-form applications (such
as drawing) place options at arbitrary points on the screen. Nomon, however,
has the flexibility to select any point on a screen. Nomon adapts automatically
to an individual's clicking ability; it allows a person who clicks precisely to
make a selection quickly and allows a person who clicks imprecisely more time
to make a selection without error. Nomon reaps gains in information rate by
allowing the specification of beliefs (priors) about option selection
probabilities and by avoiding tree-based selection schemes in favor of direct
(posterior) inference. We have developed both a Nomon-based writing application
and a drawing application. To evaluate Nomon's performance, we compared the
writing application with a popular existing method for single-switch writing
(row-column scanning). Novice users wrote 35% faster with the Nomon interface
than with the scanning interface. An experienced user (author TB, with > 10
hours practice) wrote at speeds of 9.3 words per minute with Nomon, using 1.2
clicks per character and making no errors in the final text. Comment: 14 pages, 5 figures, 1 table; presented at the NIPS 2009 Mini-symposia.
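The abstract describes Nomon's core idea: maintain a prior over all selectable options and update it by direct posterior inference from the user's noisy click timings. The sketch below is an illustrative reconstruction of that kind of Bayesian update, not Nomon's actual implementation; the Gaussian click-noise model, the `sigma` parameter, and all variable names are assumptions for the example.

```python
import math

def posterior_update(priors, click_times, target_times, sigma=0.1):
    """Bayesian update over selectable options after a series of clicks.

    priors       -- dict: option -> prior selection probability
    click_times  -- observed click offsets (seconds), one per click
    target_times -- dict: option -> list of ideal click offsets, one per click
    sigma        -- assumed std. dev. of the user's click-timing noise
    """
    log_post = {opt: math.log(p) for opt, p in priors.items()}
    for i, t in enumerate(click_times):
        for opt in log_post:
            mu = target_times[opt][i]
            # Gaussian log-likelihood of the observed click offset
            log_post[opt] += -((t - mu) ** 2) / (2 * sigma ** 2)
    # normalize in a numerically stable way
    z = max(log_post.values())
    weights = {opt: math.exp(lp - z) for opt, lp in log_post.items()}
    total = sum(weights.values())
    return {opt: w / total for opt, w in weights.items()}
```

Under this model, a precise clicker (clicks close to the target times) concentrates the posterior on one option after very few clicks, while an imprecise clicker needs more clicks before any option dominates, which mirrors the adaptive behavior the abstract describes.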
The Lock-free k-LSM Relaxed Priority Queue
Priority queues are data structures which store keys in an ordered fashion to
allow efficient access to the minimal (maximal) key. Priority queues are
essential for many applications, e.g., Dijkstra's single-source shortest path
algorithm, branch-and-bound algorithms, and prioritized schedulers.
Efficient multiprocessor computing requires implementations of basic data
structures that can be used concurrently and scale to large numbers of threads
and cores. Lock-free data structures promise superior scalability by avoiding
blocking synchronization primitives, but the \emph{delete-min} operation is an
inherent scalability bottleneck in concurrent priority queues. Recent work has
focused on alleviating this obstacle either by batching operations, or by
relaxing the requirements to the \emph{delete-min} operation.
We present a new, lock-free priority queue that relaxes the \emph{delete-min}
operation so that it is allowed to delete \emph{any} of the $k$ smallest
keys, where $k$ is a runtime configurable parameter. Additionally, the
behavior is identical to a non-relaxed priority queue for items added and
removed by the same thread. The priority queue is built from a logarithmic
number of sorted arrays in a way similar to log-structured merge-trees. We
experimentally compare our priority queue to recent state-of-the-art lock-free
priority queues, both with relaxed and non-relaxed semantics, showing high
performance and good scalability of our approach. Comment: Short version as ACM PPoPP'15 poster.
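The abstract sketches the structure: a logarithmic number of sorted arrays merged like a log-structured merge-tree, with delete-min relaxed to return one of the smallest keys. The toy below illustrates that idea sequentially; the real data structure is lock-free and concurrent, and the class name, the exact merge policy, and the "choose among array heads" relaxation are simplifying assumptions for the example.

```python
import random

class RelaxedPQ:
    """Sequential sketch of a relaxed priority queue over sorted arrays.

    delete_min may return any of the (up to) k smallest current array
    heads -- a simplification of the k-LSM-style relaxed guarantee.
    """
    def __init__(self, k=4, seed=None):
        self.k = k
        self.arrays = []                 # sorted lists, smallest key at index 0
        self.rng = random.Random(seed)

    def insert(self, key):
        new = [key]
        # merge equal-sized arrays, as in log-structured merge-trees
        while self.arrays and len(self.arrays[-1]) == len(new):
            new = self._merge(self.arrays.pop(), new)
        self.arrays.append(new)
        self.arrays.sort(key=len, reverse=True)

    @staticmethod
    def _merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    def delete_min(self):
        if not self.arrays:
            raise IndexError("empty")
        # candidates: the head (smallest key) of each array, capped at k
        heads = sorted(range(len(self.arrays)),
                       key=lambda idx: self.arrays[idx][0])[: self.k]
        idx = self.rng.choice(heads)
        key = self.arrays[idx].pop(0)
        if not self.arrays[idx]:
            self.arrays.pop(idx)
        return key
```

With k=1 the sketch degenerates to an exact priority queue, since the globally smallest key is always some array's head; larger k trades strict ordering for less contention on the smallest elements, which is the scalability argument the abstract makes.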
Competent genetic-evolutionary optimization of water distribution systems
A genetic algorithm has been applied to the optimal design and rehabilitation of water distribution systems. Many previous applications have been limited to small systems, where the computation time needed to solve the problem is relatively small. To apply genetic and evolutionary optimization to a large-scale water distribution system, this paper employs a competent genetic-evolutionary algorithm, the messy genetic algorithm, to improve the efficiency of the optimization procedure. Maximum flexibility is ensured by the formulation of a string and solution representation scheme, a fitness definition, and the integration of a well-developed hydraulic network solver, which together facilitate the application of a genetic algorithm to the optimization of a water distribution system. Two benchmark water pipeline design problems and a real water distribution system are presented to demonstrate the application of the improved technique. The results show that the number of design trials required by the messy genetic algorithm is consistently lower than that required by other genetic algorithms.
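The formulation described above has three ingredients: a string encoding of the design (one gene per pipe selecting a diameter), a fitness definition, and a hydraulic solver feeding the fitness. The sketch below shows a minimal simple GA with that shape; it is not the paper's messy genetic algorithm, the diameter/cost tables and lengths are invented, and a stub penalty stands in for a real hydraulic solver such as EPANET.

```python
import random

# Hypothetical pipe-sizing instance: one diameter index per pipe.
DIAMETERS = [100, 150, 200, 250, 300]   # mm, assumed candidate sizes
COSTS     = [10, 18, 28, 40, 55]        # assumed cost per metre
LENGTHS   = [500, 300, 400]             # metres, one entry per pipe

def fitness(genome):
    """Pipe cost plus an infeasibility penalty (lower is better).

    A real study would run a hydraulic network solver here and
    penalize pressure-deficit nodes; this stub penalizes total
    undersizing instead.
    """
    cost = sum(COSTS[g] * length for g, length in zip(genome, LENGTHS))
    capacity = sum(DIAMETERS[g] for g in genome)
    penalty = max(0, 600 - capacity) * 1000
    return cost + penalty

def evolve(pop_size=30, generations=60, rng=random.Random(42)):
    """Elitist GA: keep the best half, refill with crossover + mutation."""
    pop = [[rng.randrange(len(DIAMETERS)) for _ in LENGTHS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(LENGTHS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(len(child))] = rng.randrange(len(DIAMETERS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

The "number of design trials" metric in the abstract corresponds to the fitness evaluations this loop performs; the messy GA's advantage is reaching a feasible least-cost design with fewer of them.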
Bovine polledness
Persistent horns are an important speciation trait of the family Bovidae, with complex morphogenesis taking place shortly after birth. Polledness (hornlessness) is highly favourable in modern cattle breeding systems, and serious animal welfare issues urge a solution for producing hornless cattle other than dehorning. Although the dominant inhibition of horn morphogenesis was discovered more than 70 years ago, and the causative mutation was mapped almost 20 years ago, its molecular nature remained unknown. Here, we report allelic heterogeneity of the POLLED locus. First, we mapped the POLLED locus to a ∼381-kb interval in a multi-breed case-control design. Targeted re-sequencing of an enlarged candidate interval (547 kb) in 16 sires with known POLLED genotype did not detect a common allele associated with polled status. In eight sires of Alpine and Scottish origin (four polled versus four horned), we identified a single candidate mutation, a complex 202-bp insertion-deletion event that showed perfect association with the polled phenotype in various European cattle breeds, except Holstein-Friesian. Analysis of the same candidate interval in eight Holsteins identified five candidate variants that segregate as a 260-kb haplotype, also perfectly associated with the POLLED gene, without recombination or interference with the 202-bp insertion-deletion. We further identified bulls that are progeny-tested as homozygous polled but carry both the 202-bp insertion-deletion and the Friesian haplotype. The distribution of genotypes of the two putative POLLED alleles in a large semi-random sample (1,261 animals) supports the hypothesis of two independent mutations.
OMMA enables population-scale analysis of complex genomic features and phylogenomic relationships from nanochannel-based optical maps.
Background: Optical mapping is an emerging technology that complements sequencing-based methods in genome analysis. It is widely used to improve genome assemblies and detect structural variations by providing information over much longer reads (up to 1 Mb). Current standards in optical mapping analysis involve assembling optical maps into contigs and aligning them to a reference, which is limited to pairwise comparison and becomes bias-prone when analyzing multiple samples.
Findings: We present a new method, OMMA, that extends optical mapping to the study of complex genomic features by simultaneously interrogating optical maps across many samples in a reference-independent manner. OMMA captures and characterizes complex genomic features, e.g., multiple haplotypes, copy number variations, and subtelomeric structures, when applied to 154 human samples across the 26 populations sequenced in the 1000 Genomes Project. For small genomes such as pathogenic bacteria, OMMA accurately reconstructs the phylogenomic relationships and identifies functional elements across 21 Acinetobacter baumannii strains.
Conclusions: With the increasing data throughput of optical mapping systems, the use of this technology in comparative genome analysis across many samples will become feasible. OMMA is a timely solution to this computational need. The OMMA software is available at https://github.com/TF-Chan-Lab/OMTools
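Reference-free multi-sample comparison of the kind described above ultimately reduces to measuring how much map content samples share and turning that into relationships. As a generic illustration only (this is not OMMA's algorithm, and the sample names and feature sets are invented), a Jaccard distance over per-sample feature sets gives a distance matrix that phylogenomic methods such as neighbor-joining can consume.

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| between two sets of map features."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def distance_matrix(samples):
    """All-pairs Jaccard distances over {sample_name: feature_set}."""
    names = sorted(samples)
    return {(x, y): jaccard_distance(samples[x], samples[y])
            for x in names for y in names}

# hypothetical samples described by shared optical-map features
samples = {
    "s1": {"f1", "f2", "f3"},
    "s2": {"f1", "f2", "f4"},
    "s3": {"f5", "f6"},
}
```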
Confounding Equivalence in Causal Inference
The paper provides a simple test for deciding, from a given causal diagram,
whether two sets of variables have the same bias-reducing potential under
adjustment. The test requires that one of the following two conditions holds:
either (1) both sets are admissible (i.e., satisfy the back-door criterion) or
(2) the Markov boundaries surrounding the manipulated variable(s) are identical
in both sets. Applications to covariate selection and model testing are
discussed. Comment: Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010).
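Condition (2) of the test rests on Markov boundaries in a causal diagram. The sketch below computes the standard Markov blanket of a node in a DAG (parents, children, and the children's other parents), which is the basic building block; the paper's full test additionally restricts attention to the candidate adjustment sets and checks the back-door condition, which this example does not implement. The DAG encoding and node names are assumptions for illustration.

```python
def markov_blanket(dag, x):
    """Markov blanket of node x in a DAG given as {node: set_of_parents}.

    The blanket is: parents of x, children of x, and the children's
    other parents ("spouses").
    """
    parents = set(dag.get(x, set()))
    children = {v for v, ps in dag.items() if x in ps}
    spouses = {p for c in children for p in dag.get(c, set())} - {x}
    return parents | children | spouses

# Example diagram:  W -> X -> Y <- Z
dag = {"X": {"W"}, "Y": {"X", "Z"}, "Z": set(), "W": set()}
```

Here the blanket of X is {W, Y, Z}: its parent W, its child Y, and Z as the other parent of that child. Two adjustment sets whose induced Markov boundaries around the manipulated variable coincide have, by the paper's result, the same bias-reducing potential.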
High Performance Biological Pairwise Sequence Alignment: FPGA versus GPU versus Cell BE versus GPP
This paper explores the pros and cons of reconfigurable computing, in the form of FPGAs, for high-performance, efficient computing. In particular, the paper presents the results of a comparative study between three different acceleration technologies, namely Field Programmable Gate Arrays (FPGAs), Graphics Processor Units (GPUs), and IBM's Cell Broadband Engine (Cell BE), in the design and implementation of the widely used Smith-Waterman pairwise sequence alignment algorithm, with general-purpose processors as a base reference implementation. Comparison criteria include speed, energy consumption, and purchase and development costs. The study shows that FPGAs largely outperform all other implementation platforms on the performance-per-watt criterion and perform better than all other platforms on the performance-per-dollar criterion, although by a much smaller margin. Cell BE and GPU come second and third, respectively, on both criteria. In general, in order to outperform other technologies on the performance-per-dollar criterion (using currently available hardware and development tools), FPGAs need to achieve at least two orders of magnitude speed-up compared to general-purpose processors and one order of magnitude speed-up compared to domain-specific technologies such as GPUs.
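For reference, the Smith-Waterman algorithm benchmarked above is a local-alignment dynamic program: each cell of a scoring matrix takes the best of a match/mismatch step, two gap steps, or zero (restarting the alignment). The minimal sketch below computes the optimal local alignment score; the scoring parameters (match=2, mismatch=-1, gap=-2) are illustrative choices, and it is the naive O(mn) software form, not any of the accelerated implementations compared in the paper.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Optimal local alignment score of strings a and b (Smith-Waterman)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # scoring matrix, zero borders
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # restart: empty alignment
                          H[i - 1][j - 1] + s,    # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best
```

The inner max over four choices per cell, with only nearest-neighbor dependencies, is exactly the regular fine-grained parallelism (anti-diagonal wavefronts, systolic arrays) that makes the algorithm such a good fit for FPGAs, GPUs, and the Cell BE.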