A Computational Method for the Rate Estimation of Evolutionary Transpositions
Genome rearrangements are evolutionary events that shuffle genomic
architectures. The most frequent genome rearrangements are reversals,
translocations, fusions, and fissions. While there are more complex genome
rearrangements such as transpositions, they are rarely observed and are believed
to constitute only a small fraction of the genome rearrangements happening in the
course of evolution. The analysis of transpositions is further obfuscated by the
intractability of the underlying computational problems.
We propose a computational method for estimating the rate of transpositions
in evolutionary scenarios between genomes. We applied our method to a set of
mammalian genomes and estimated the transposition rate in mammalian evolution
to be around 0.26.
Comment: Proceedings of the 3rd International Work-Conference on Bioinformatics and Biomedical Engineering (IWBBIO), 2015 (to appear).
Inapproximability of maximal strip recovery
In comparative genomics, the first step of sequence analysis is usually to
decompose two or more genomes into syntenic blocks that are segments of
homologous chromosomes. For the reliable recovery of syntenic blocks, noise and
ambiguities in the genomic maps need to be removed first. Maximal Strip
Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff
for reliably recovering syntenic blocks from genomic maps in the midst of noise
and ambiguities. Given d genomic maps as sequences of gene markers, the
objective of MSR-d is to find d subsequences, one subsequence of each
genomic map, such that the total length of syntenic blocks in these
subsequences is maximized. For any constant d ≥ 2, a polynomial-time
2d-approximation for MSR-d was previously known. In this paper, we show that
for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the
problem in which all gene markers are distinct and appear in positive
orientation in each genomic map. Moreover, we provide the first explicit lower
bounds on approximating MSR-d for all d ≥ 2. In particular, we show that
MSR-d is NP-hard to approximate within Ω(d/log d). From the other
direction, we show that the previous 2d-approximation for MSR-d can be
optimized into a polynomial-time algorithm even if d is not a constant but is
part of the input. We then extend our inapproximability results to several
related problems including CMSR-d, δ-gap-MSR-d, and δ-gap-CMSR-d.
Comment: A preliminary version of this paper appeared in two parts in the Proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC 2009) and the Proceedings of the 4th International Frontiers of Algorithmics Workshop (FAW 2010).
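For intuition only, here is a minimal brute-force sketch of the basic MSR-2 objective described above (two maps, all markers distinct, positive orientation), under the assumption that a syntenic block is a common run of length at least two shared by both retained subsequences. It exhaustively tries marker subsets, so it is only usable on toy inputs and is neither the 2d-approximation nor part of the hardness constructions of the paper; the function names are ours.

    from itertools import combinations

    def strip_decomposable(sub1, sub2):
        # True if the two equal-content subsequences split entirely into
        # common strips: runs of length >= 2 that are consecutive, in the
        # same order, in both subsequences.
        pos2 = {g: i for i, g in enumerate(sub2)}
        runs, run = [], 1
        for a, b in zip(sub1, sub1[1:]):
            if pos2[b] == pos2[a] + 1:      # still consecutive in the second map
                run += 1
            else:
                runs.append(run)
                run = 1
        runs.append(run)
        return all(r >= 2 for r in runs)

    def msr2_bruteforce(map1, map2):
        # Largest number of markers that can be kept so that both induced
        # subsequences decompose into common strips (tiny inputs only).
        markers = sorted(set(map1) & set(map2))
        for k in range(len(markers), 1, -1):        # try larger subsets first
            for kept in combinations(markers, k):
                keep = set(kept)
                sub1 = [g for g in map1 if g in keep]
                sub2 = [g for g in map2 if g in keep]
                if strip_decomposable(sub1, sub2):
                    return k                        # first feasible k is optimal
        return 0

    # toy example: markers 3 and 4 are noise; keeping {1, 2, 5, 6} is optimal
    print(msr2_bruteforce([1, 2, 3, 4, 5, 6], [3, 1, 2, 5, 6, 4]))   # -> 4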
Approximating Weighted Duo-Preservation in Comparative Genomics
Motivated by comparative genomics, Chen et al. [9] introduced the Maximum
Duo-preservation String Mapping (MDSM) problem in which we are given two
strings over the same alphabet and the goal is to find a
mapping between them so as to maximize the number of duos preserved. A
duo is any two consecutive characters in a string, and it is preserved in the
mapping if its two consecutive characters in the first string are mapped to the
same two consecutive characters in the second string. The MDSM problem is known
to be NP-hard and there are approximation algorithms for this problem [3, 5, 13],
but all of them consider only the "unweighted" version of the problem in the
sense that a duo from the first string is preserved by mapping it to any
identical duo in the second string regardless of their positions in the
respective strings. However, in comparative genomics it is desirable to find
mappings that preserve duos that are "closer" to each other under some
distance measure [19]. In this paper, we introduce a generalized version of the
problem, called the Maximum-Weight Duo-preservation String Mapping (MWDSM)
problem, that captures both duo preservation and duo distance in the sense that
mapping a duo from the first string to a preserved duo in the second string has
a weight, indicating the "closeness" of the two duos. The objective of the
MWDSM problem is to find a mapping so as to maximize the total weight of
preserved duos. We give a polynomial-time 6-approximation algorithm for this
problem.
Comment: Appeared in the proceedings of the 23rd International Computing and Combinatorics Conference (COCOON 2017).
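As a concrete reading of the problem statement, the sketch below exhaustively scores every character-preserving bijection between the positions of two short strings and counts (or weights) the preserved duos. It is exact only on tiny inputs and is not the 6-approximation algorithm of the paper; the weight callback is a placeholder for whatever duo-distance measure one wishes to plug in.

    from collections import defaultdict
    from itertools import permutations

    def max_duo_preservation(s1, s2, weight=None):
        # Exhaustive search over all mappings (bijections between positions of
        # s1 and s2 that map each character onto an equal character); returns
        # the maximum number of preserved duos, or their maximum total weight
        # if a weight(i, j) callback is given for mapping the duo starting at
        # position i of s1 onto the duo starting at position j of s2.
        assert sorted(s1) == sorted(s2), "the strings must be anagrams"
        slots = defaultdict(list)                  # positions of each character in s2
        for j, c in enumerate(s2):
            slots[c].append(j)
        chars = list(slots)
        best = 0

        def assign(idx, mapping):
            nonlocal best
            if idx == len(chars):                  # mapping is complete: score it
                score = 0
                for i in range(len(s1) - 1):
                    if mapping[i + 1] == mapping[i] + 1:   # duo (i, i+1) preserved
                        score += 1 if weight is None else weight(i, mapping[i])
                best = max(best, score)
                return
            c = chars[idx]
            srcs = [i for i, x in enumerate(s1) if x == c]
            for perm in permutations(slots[c]):    # all ways to place this character
                for i, j in zip(srcs, perm):
                    mapping[i] = j
                assign(idx + 1, mapping)

        assign(0, {})
        return best

    print(max_duo_preservation("abcab", "ababc"))   # unweighted: 3 duos can be preserved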
Feasibility study of a shark fin processing plant in Galicia
In this project, an economic feasibility study is carried out for the installation of frequency inverters in the fans, and of a programmable logic controller (PLC), in a shark fin processing plant in Galicia.
First, the context of shark fins and their processing is analyzed, focusing on the drying
process, since that is where the feasibility study of implementing the considered devices
will be centered.
Next, the facilities and the specific case of the plant studied in this project are
examined. Based on this, the different study alternatives that will be considered
are defined.
Once the alternatives have been defined, a budget is drawn up for each of them, which
will determine the initial investment required.
The energy and cost savings of the implementation of the different facilities considered in
each alternative will be calculated, taking into account the appropriate considerations in each
specific case.
Finally, the economic analysis of the aforementioned alternatives is carried out by
calculating the Net Present Value (NPV), the Internal Rate of Return (IRR) and the
payback period, and a sensitivity analysis is performed to determine which aspects of the project have the greatest influence on the final result.
Bachelor's thesis (UDC.EPS). Degree in Industrial Technologies Engineering. Academic year 2018/2019.
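For readers unfamiliar with the three headline figures of the economic analysis, the following small sketch computes NPV, IRR (by bisection), and the simple payback period for a made-up yearly cash-flow profile; the investment and savings numbers are purely illustrative and are not taken from the thesis.

    def npv(rate, cash_flows):
        # Net Present Value; cash_flows[0] is the (negative) initial investment.
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
        # Internal Rate of Return by bisection, assuming NPV is decreasing in
        # the rate (one initial outlay followed by positive savings).
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cash_flows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def payback_period(cash_flows):
        # First year in which the cumulative cash flow becomes non-negative.
        total = 0.0
        for t, cf in enumerate(cash_flows):
            total += cf
            if total >= 0:
                return t
        return None

    # illustrative only: 12,000 EUR invested, 3,000 EUR/year of energy savings for 8 years
    flows = [-12000.0] + [3000.0] * 8
    print(npv(0.05, flows), irr(flows), payback_period(flows))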
Not surgical technique, but etiology, contralateral MRI, prior surgery, and side of surgery determine seizure outcome after pediatric hemispherotomy
OBJECTIVE: We aimed to assess determinants of seizure outcome following pediatric hemispherotomy in a contemporary cohort. METHODS: We retrospectively analyzed the seizure outcomes of 457 children who underwent hemispheric surgery in five European epilepsy centers between 2000 and 2016. We identified variables related to seizure outcome through multivariable regression modeling with missing data imputation and optimal group matching, and we further investigated the role of surgical technique by Bayes factor (BF) analysis. RESULTS: One hundred seventy-seven children (39%) underwent vertical and 280 children (61%) underwent lateral hemispherotomy. Three hundred forty-four children (75%) achieved seizure freedom at a mean follow-up of 5.1 years (range 1 to 17.1). We identified acquired etiology other than stroke (odds ratio [OR] 4.4, 95% confidence interval [CI] 1.1-18.0), hemimegalencephaly (OR 2.8, 95% CI 1.1-7.3), contralateral magnetic resonance imaging (MRI) findings (OR 5.5, 95% CI 2.7-11.1), prior resective surgery (OR 5.0, 95% CI 1.8-14.0), and left hemispherotomy (OR 2.3, 95% CI 1.3-3.9) as significant determinants of seizure recurrence. We found no evidence of an impact of the hemispherotomy technique on seizure outcome (the BF for a model including the hemispherotomy technique over the null model was 1.1), with comparable overall major complication rates for different approaches. SIGNIFICANCE: Knowledge about the independent determinants of seizure outcome following pediatric hemispherotomy will improve the counseling of patients and families. In contrast to previous reports, we found no statistically relevant difference in seizure-freedom rates between the vertical and horizontal hemispherotomy techniques when accounting for different clinical features between groups.
A combinatorial algorithm for microbial consortia synthetic design
Synthetic biology has boomed since the early 2000s, when it was first shown that compounds of interest could be synthesized much more rapidly and effectively by using organisms other than those naturally producing them. However, engineering a single organism, often a microbe, to optimise one or a collection of metabolic tasks may lead to difficulties when attempting to obtain a production system that is efficient, or to avoid toxic effects for the recruited microorganism. The idea of using instead a microbial consortium has thus started being developed in the last decade. This was motivated by the fact that such consortia may perform more complicated functions than single populations can, and be more robust to environmental fluctuations. Success is however not always guaranteed. In particular, establishing which consortium is best for the production of a given compound or set thereof remains a great challenge. This is the problem we address in this paper. We introduce an initial model and a method that enable us to propose a consortium to synthetically produce compounds that are either exogenous to it, or are endogenous but where interaction among the species in the consortium could improve the production line. Synthetic biology has been defined by the European Commission as "the application of science, technology, and engineering to facilitate and accelerate the design, manufacture, and/or modification of genetic materials in living organisms to alter living or nonliving materials". It is a field that has boomed since the early 2000s when, in particular, Jay Keasling showed that it was possible to efficiently synthesize a compound, artemisinic acid, which after a few further steps leads to an effective anti-malaria drug, artemisinin.
Treatment of electrical status epilepticus in sleep: A pooled analysis of 575 cases
OBJECTIVE: Epileptic encephalopathy with electrical status epilepticus in sleep (ESES) is a pediatric epilepsy syndrome with sleep-induced epileptic discharges and acquired impairment of cognition or behavior. Treatment of ESES is assumed to improve cognitive outcome. The aim of this study is to create an overview of the current evidence for different treatment regimens in children with ESES syndrome. METHODS: A literature search using PubMed and Embase was performed. Articles were selected that contained original treatment data of patients with ESES syndrome. Authors were contacted for additional information. Individual patient data were collected, coded, and analyzed using logistic regression analysis. The three predefined main outcome measures were improvement in cognitive function, electroencephalography (EEG) pattern, and any improvement (cognition or EEG). RESULTS: The literature search yielded 1,766 articles. After applying inclusion and exclusion criteria, 112 articles and 950 treatments in 575 patients could be analyzed. Antiepileptic drugs (AEDs, n = 495) were associated with improvement (i.e., cognition or EEG) in 49% of patients, benzodiazepines (n = 171) in 68%, and steroids (n = 166) in 81%. Surgery (n = 62) resulted in improvement in 90% of patients. In a subgroup analysis of patients who were consecutively reported (585 treatments in 282 patients), we found improvement in a smaller proportion treated with AEDs (34%), benzodiazepines (59%), and steroids (75%), whereas the improvement percentage after surgery was preserved (93%). Possible predictors of improved outcome were treatment category, normal development before ESES onset, and the absence of structural abnormalities. SIGNIFICANCE: Although most included studies were small and retrospective and their heterogeneity allowed analysis of only qualitative outcome data, this pooled analysis suggests superior efficacy of steroids and surgery in encephalopathy with ESES.
A Comparison of Treatment-Seeking Behavioral Addiction Patients with and without Parkinson's Disease
The administration of dopaminergic medication to treat the symptoms of Parkinson's disease (PD) is associated with addictive behaviors and impulse control disorders. Little is known, however, about how PD patients differ from other patients seeking treatment for behavioral addictions. The aim of this study was to compare the characteristics of behavioral addiction patients with and without PD. N = 2,460 treatment-seeking men diagnosed with a behavioral addiction were recruited from a university hospital. Sociodemographic, impulsivity [Barratt Impulsiveness Scale (BIS-11)], and personality [Temperament and Character Inventory-Revised (TCI-R)] measures were taken upon admission to outpatient treatment. Patients in the PD group were older and had a higher prevalence of mood disorders than patients without PD. In terms of personality characteristics and impulsivity traits, PD patients appeared to present a more functional profile than PD-free patients with a behavioral addiction. Our results suggest that PD patients with a behavioral addiction could be more difficult to detect than their PD-free counterparts in a behavioral addiction clinical setting due to their reduced levels of impulsivity and more standard personality traits. As a whole, this suggests that PD patients with a behavioral addiction may have different needs from PD-free behavioral addiction patients and that they could potentially benefit from targeted interventions.
Triangle Counting in Dynamic Graph Streams
Estimating the number of triangles in graph streams using a limited amount of
memory has become a popular topic in the last decade. Different variations of
the problem have been studied, depending on whether the graph edges are
provided in an arbitrary order or as incidence lists. However, with a few
exceptions, the algorithms have considered insert-only streams. We
present a new algorithm estimating the number of triangles in dynamic
graph streams where edges can be both inserted and deleted. We show that our
algorithm achieves better time and space complexity than previous solutions for
various graph classes, for example sparse graphs with a relatively small number
of triangles. Also, for graphs with constant transitivity coefficient, a common
situation in real graphs, this is the first algorithm achieving constant
processing time per edge. The result is achieved by a novel approach combining
sampling of vertex triples and sparsification of the input graph. In the course
of the analysis of the algorithm we present a lower bound on the number of
pairwise independent 2-paths in general graphs which might be of independent
interest. At the end of the paper we discuss lower bounds on the space
complexity of triangle counting algorithms that make no assumptions on the
structure of the graph.
Comment: New version of a SWAT 2014 paper with improved results.
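As a rough illustration of the vertex-triple-sampling idea mentioned above (and only that idea: the paper's algorithm additionally sparsifies the input graph and comes with much stronger guarantees), the toy estimator below fixes a random set of vertex triples up front, tracks which of each triple's three edges are currently present as insertions and deletions stream in, and rescales the count of completed triples; class and method names are ours.

    import random
    from itertools import combinations
    from math import comb

    class TripleSampleTriangleEstimator:
        # Toy triangle estimator for a dynamic edge stream over vertices 0..n-1;
        # updates are ('+', u, v) for insertion and ('-', u, v) for deletion.
        def __init__(self, n, num_samples, seed=0):
            rng = random.Random(seed)
            self.n = n
            all_triples = list(combinations(range(n), 3))
            self.samples = rng.sample(all_triples, min(num_samples, len(all_triples)))
            self.present = [set() for _ in self.samples]   # edges currently present per triple
            self.index = {}                                # edge -> sampled triples containing it
            for i, (a, b, c) in enumerate(self.samples):
                for e in ((a, b), (a, c), (b, c)):
                    self.index.setdefault(e, []).append(i)

        def update(self, op, u, v):
            e = (min(u, v), max(u, v))
            for i in self.index.get(e, []):
                if op == '+':
                    self.present[i].add(e)
                else:
                    self.present[i].discard(e)

        def estimate(self):
            # Fraction of sampled triples that are triangles, scaled to all C(n, 3) triples.
            complete = sum(1 for edges in self.present if len(edges) == 3)
            return complete * comb(self.n, 3) / len(self.samples)

    est = TripleSampleTriangleEstimator(n=6, num_samples=20, seed=1)
    for u, v in combinations(range(4), 2):   # insert all edges of a K4 on {0, 1, 2, 3}
        est.update('+', u, v)
    est.update('-', 0, 1)                    # then delete one edge again
    print(est.estimate())                    # -> 2.0 (exact here, since all 20 triples are sampled)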