506 research outputs found
Practical Evaluation of Lempel-Ziv-78 and Lempel-Ziv-Welch Tries
We present the first thorough practical study of Lempel-Ziv-78 and Lempel-Ziv-Welch
computation based on trie data structures. With a careful selection of trie
representations, we can beat well-tuned popular trie data structures such as
Judy, m-Bonsai, or Cedar.
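The LZ78 parsing that underlies this study can be sketched with a plain dictionary-backed trie (a minimal illustration of the factorization itself, not the tuned trie representations the paper evaluates):

```python
def lz78_parse(text):
    """LZ78: factorize text into (referencing factor id, extension char) pairs.
    Each new factor is the longest previously seen factor plus one character;
    the set of known factors is maintained as a trie keyed by (node, char)."""
    trie = {}        # (parent_node_id, char) -> child_node_id
    factors = []
    node = 0         # current trie node; 0 is the root (empty factor)
    next_id = 1
    for ch in text:
        if (node, ch) in trie:
            node = trie[(node, ch)]      # extend the current factor
        else:
            trie[(node, ch)] = next_id   # register the new factor as a trie node
            factors.append((node, ch))   # emit (prefix factor id, new char)
            next_id += 1
            node = 0                     # restart matching at the root
    if node != 0:
        factors.append((node, ''))       # trailing factor with no extension
    return factors
```

For example, `lz78_parse("abab")` yields the factors `a`, `b`, `ab` as `[(0, 'a'), (0, 'b'), (1, 'b')]`.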
A methodology for determining amino-acid substitution matrices from set covers
We introduce a new methodology for the determination of amino-acid
substitution matrices for use in the alignment of proteins. The new methodology
is based on a pre-existing set cover on the set of residues and on the
undirected graph that describes residue exchangeability given the set cover.
For fixed functional forms indicating how to obtain edge weights from the set
cover and, after that, substitution-matrix elements from weighted distances on
the graph, the resulting substitution matrix can be checked for performance
against some known set of reference alignments and for given gap costs. Finding
the appropriate functional forms and gap costs can then be formulated as an
optimization problem that seeks to maximize the performance of the substitution
matrix on the reference alignment set. We give computational results on the
BAliBASE suite using a genetic algorithm for optimization. Our results indicate
that it is possible to obtain substitution matrices whose performance is either
comparable to or surpasses that of several others, depending on the particular
scenario under consideration.
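The step from weighted exchangeability graph to substitution matrix can be sketched as follows. This is a toy illustration only: the graph, the edge weights, and the distance-to-score map (`scale - distance`) are hypothetical placeholders standing in for the functional forms that the paper's genetic algorithm optimizes.

```python
import itertools

def substitution_scores(residues, weighted_edges, scale=10.0):
    """Derive substitution scores from weighted shortest-path distances on an
    exchangeability graph (edge weight = cost of exchanging two residues).
    Uses Floyd-Warshall for all-pairs distances; the final distance-to-score
    mapping is an assumed placeholder, not the paper's fitted form."""
    INF = float('inf')
    d = {(a, b): (0.0 if a == b else INF) for a in residues for b in residues}
    for a, b, w in weighted_edges:
        d[(a, b)] = d[(b, a)] = min(d[(a, b)], w)
    # Floyd-Warshall: k is the outermost loop variable of the product.
    for k, i, j in itertools.product(residues, repeat=3):
        if d[(i, k)] + d[(k, j)] < d[(i, j)]:
            d[(i, j)] = d[(i, k)] + d[(k, j)]
    return {(i, j): round(scale - d[(i, j)]) for i in residues for j in residues}
```

With residues `A, B, C` and edges `A-B` (weight 2) and `B-C` (weight 3), the `A-C` distance is 5 and its score under this placeholder mapping is 5.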
Digital media in primary schools: literacy or technology? Analyzing government and media discourses
This article examines the political and the media discourses concerning the
Portuguese governmental program responsible for delivering a laptop named
“Magalhães” to all primary school children. The analysis is based on the
official documents related to the launch and development of the initiative as
well as the press coverage of this topic. The main purpose is to recognize
the dominant public discourses and to find out what the media select for
the debate in the public sphere. This analysis was done with a particular
focus on the critical media literacy framework. The results reveal that the
press highlighted the negative aspects of that program and that this framing
could have a strong impact on how it was accepted and understood by the
public opinion. Analysis also reveals that the governmental initiative was
predominantly driven by technological objectives, in particular the access to
technology, rather than media literacy objectives.
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this paper is part of a three-year project named "Navigating with 'Magalhães': Study on the Impact of Digital Media in Schoolchildren", funded by FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) and co-funded by FEDER - Fundo Europeu de Desenvolvimento Regional (ERDF: European Regional Development Fund) through COMPETE - Programa Operacional Factores de Competitividade (Operational Competitiveness Programme).
Electromagnetic plasma modeling in circuit breaker within the finite volume method
To ensure the galvanic isolation of an electrical system following a manual
operation or a fault, the current-limitation properties of the electric arc are used,
forcing a fast decrease of the current to zero. Modeling this process proves complex,
since it involves a large number of physical phenomena (radiation, phase transitions,
electromagnetism, fluid dynamics, plasma physics). To obtain a robust solution,
favouring strongly coupled resolution and compatibility of time constants, the Finite
Volume Method was chosen. This method was first implemented on intrinsic
electromagnetism problems (current flow, magnetostatics including non-linear
materials, and magnetodynamics). Once validated, the models were successfully used in
Schneider Electric's dedicated current-interruption software, thus allowing
significantly improved simulation of Schneider Electric circuit breakers.
SWIFT: Using task-based parallelism, fully asynchronous communication, and graph partition-based domain decomposition for strong scaling on more than 100,000 cores
We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smoothed Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100 supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches:
• Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores.
• Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, rather than just the data (as in most partitioning schemes), is equally distributed across all nodes.
• Fully dynamic and asynchronous communication, in which communication is modelled as just another task in the task-based scheme, sending data whenever it is ready and deferring tasks that rely on data from other nodes until it arrives.
To use these approaches, the code had to be rewritten from scratch, and the algorithms therein adapted to the task-based paradigm. As a result, we show upwards of 60% parallel efficiency for moderate-sized problems when increasing the number of cores 512-fold, on both x86-based and Power8-based architectures.
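The core idea of task-based parallelism with dependency unlocking can be sketched in a few lines. This is a toy model of the scheme, not SWIFT's actual scheduler: a task is submitted to the worker pool as soon as every task it depends on has finished, so ready work flows to idle workers dynamically.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def run_task_graph(tasks, deps, workers=4):
    """Run a dict of name -> callable, where deps[name] lists the tasks that
    must finish before `name` may start. Returns the completion order.
    Illustrative only: assumes task bodies do not raise."""
    remaining = {t: len(deps.get(t, [])) for t in tasks}  # unmet dependency counts
    dependents = {}                                       # reverse edges
    for t, ds in deps.items():
        for d in ds:
            dependents.setdefault(d, []).append(t)
    lock = threading.Lock()
    done = threading.Event()
    finished = []

    with ThreadPoolExecutor(max_workers=workers) as pool:
        def complete(name):
            with lock:
                finished.append(name)
                for dep in dependents.get(name, []):
                    remaining[dep] -= 1
                    if remaining[dep] == 0:
                        pool.submit(run, dep)   # all inputs ready: unlock it
                if len(finished) == len(tasks):
                    done.set()

        def run(name):
            tasks[name]()                       # execute the task body
            complete(name)

        for t, n in remaining.items():
            if n == 0:
                pool.submit(run, t)             # roots with no dependencies start first
        done.wait()
    return finished
```

In SWIFT's scheme, communication is just another such task, so sends and receives overlap with computation instead of forming a global synchronization point.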
Detecting non-orthology in the COGs database and other approaches grouping orthologs using genome-specific best hits
Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.
Poly(ethylene) glycols and mechanochemistry for the preparation of bioactive 3,5-disubstituted hydantoins
Mechanochemistry was effective for the preparation of 3,5-disubstituted hydantoins from α-amino methyl esters, using either 1,1′-carbonyldiimidazole (CDI) or alkyl isocyanates. The preparation of the antimicrobial additives 3-allyl-5,5′-dimethylhydantoin (ADMH) and 1-chloro-3-ethyl-5,5′-dimethylhydantoin (CEDMH) was performed by grinding. A chlorination reaction, never described before by mechanochemistry, was achieved with Ca(ClO)2, while the preparation of the bioactive anticonvulsant marketed drug ethotoin was achieved by a novel approach based on poly(ethylene glycol) (PEG)-assisted grinding.
A General Approach for Predicting the Filtration of Soft and Permeable Colloids: The Milk Example
Membrane filtration operations (ultra-, microfiltration) are now extensively used for concentrating or separating an ever-growing variety of colloidal dispersions. However, the phenomena that determine the efficiency of these operations are not yet fully understood. This is especially the case when dealing with colloids that are soft, deformable, and permeable. In this paper, we propose a methodology for building a model that is able to predict the performance (flux, concentration profiles) of the filtration of such objects in relation to the operating conditions. This is done by focusing on the case of milk filtration, all experiments being performed with dispersions of milk casein micelles, which are a sort of "natural" colloidal microgel. Using this example, we develop the general idea that a filtration model can always be built for a given colloidal dispersion as long as this dispersion has been characterized in terms of osmotic pressure Π and hydraulic permeability k. For soft and permeable colloids, the major issue is that the permeability k cannot be assessed in a trivial way, as it can for hard-sphere colloids. To get around this difficulty, we follow two distinct approaches to actually measure k: a direct approach, involving osmotic stress experiments, and a reverse-calculation approach, which consists of estimating k through well-controlled filtration experiments. The resulting filtration model is then validated against experimental measurements obtained from combined milk filtration/SAXS experiments. We also give precise examples of how the model can be used, as well as a brief discussion on the possible universality of the approach presented here.
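As a generic sketch of how Π and k enter such a model (this is standard dead-end filtration theory, not the specific functional forms fitted in the paper), the liquid velocity through the accumulated colloid layer follows Darcy's law, with the driving transmembrane pressure reduced by the osmotic pressure difference:

```latex
% Darcy flow through the concentrated layer (z measured from the membrane),
% combined with the mechanical balance p(z) + \Pi(\phi(z)) = \text{const},
% gives the steady permeation flux:
\[
  J \;=\; \frac{k(\phi)}{\eta}\,\frac{\mathrm{d}\Pi}{\mathrm{d}z},
  \qquad
  J \;=\; \frac{\Delta P - \Delta\Pi}{\eta\, R_m}\quad\text{at the membrane,}
\]
% where \eta is the solvent viscosity and R_m the membrane resistance.
```

This makes explicit why measuring both Π(φ) and k(φ) is sufficient to close the model: together they determine the concentration profile and the flux for given operating conditions.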
Parallel computing in information retrieval - An updated review
The progress of parallel computing in Information Retrieval (IR) is reviewed. In particular, we stress the importance of the motivation for using parallel computing in text retrieval. We analyse parallel IR systems using a classification due to Rasmussen [1] and describe some parallel IR systems. We give a description of the retrieval models used in parallel information processing. We also describe areas of research which we believe are needed.
