Neural Modeling and Control of Diesel Engine with Pollution Constraints
The paper describes a neural approach for modelling and control of a
turbocharged Diesel engine. A neural model, whose structure is mainly based on
some physical equations describing the engine behaviour, is built for the
rotation speed and the exhaust gas opacity. The model is composed of three
interconnected neural submodels, each of them constituting a nonlinear
multi-input single-output error model. The structural identification and the
parameter estimation from data gathered on a real engine are described. The
neural direct model is then used to determine a neural controller of the
engine, in a specialized training scheme minimising a multivariable criterion.
Simulations show the effect of the pollution constraint weighting on a
trajectory tracking of the engine speed. Neural networks, which are flexible
and parsimonious nonlinear black-box models, with universal approximation
capabilities, can accurately describe or control complex nonlinear systems,
with little a priori theoretical knowledge. The presented work extends optimal
neuro-control to the multivariable case and shows the flexibility of neural
optimisers. Considering the preliminary results, it appears that neural
networks can be used as embedded models for engine control, to satisfy
increasingly restrictive pollutant emission legislation. In particular, they
are able to model nonlinear dynamics and, during transients, outperform
control schemes based on static mappings.
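The abstract does not spell out the multivariable criterion; the following minimal Python (PyTorch) sketch illustrates the kind of specialised training it describes, with gradients flowing through a frozen neural engine model into the controller and a weighted speed-tracking plus opacity cost. Module sizes, names and the weighting lam are assumptions for the example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, n_in, n_out, n_hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(),
                                 nn.Linear(n_hidden, n_out))
    def forward(self, x):
        return self.net(x)

engine_model = MLP(n_in=3, n_out=2)   # frozen direct model: (state, u) -> (speed, opacity)
controller = MLP(n_in=2, n_out=1)     # (state, speed reference) -> control input u
for p in engine_model.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
state = torch.randn(64, 2)            # synthetic operating points
speed_ref = torch.rand(64, 1)         # samples of the desired speed trajectory
lam = 0.1                             # pollution-constraint weighting

for _ in range(100):
    u = controller(torch.cat([state[:, :1], speed_ref], dim=1))
    speed, opacity = engine_model(torch.cat([state, u], dim=1)).split(1, dim=1)
    # multivariable criterion: speed tracking error + weighted opacity penalty
    loss = ((speed - speed_ref) ** 2).mean() + lam * (opacity ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Increasing lam in such a scheme trades tracking accuracy for lower predicted opacity, which is the effect the simulations in the paper explore.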
ProbCD: enrichment analysis accounting for categorization uncertainty
As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high-throughput datasets, current enrichment methods largely ignore this probabilistic information since they are mainly based on variants of Fisher's exact test. We developed an open-source R package for probabilistic categorical data analysis, ProbCD, that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/. We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning the enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of the high-throughput experimental techniques and (ii) probabilistic gene annotation.
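For illustration, the expected contingency table described above can be sketched as follows. This is a toy Python rendition of the general idea, not ProbCD's actual R implementation, and all probabilities are made up.

```python
import numpy as np

p_in_category = np.array([0.9, 0.2, 0.7, 0.05, 0.5])  # P(gene i belongs to the category)
selected      = np.array([1,   1,   0,   0,    1  ])  # gene i is in the list of interest

n11 = np.sum(selected * p_in_category)            # expected: selected and in category
n12 = np.sum(selected * (1 - p_in_category))      # expected: selected, not in category
n21 = np.sum((1 - selected) * p_in_category)      # expected: not selected, in category
n22 = np.sum((1 - selected) * (1 - p_in_category))

expected_table = np.array([[n11, n12], [n21, n22]])
print(expected_table)  # replaces the static 0/1 table fed to a Fisher-type test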
Detecting microsatellites within genomes: significant variation among algorithms
Background: Microsatellites are short, tandemly repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Results: Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared using fixed parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences for all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Conclusion: Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.
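As an illustration of the kind of comparison reported, detected length distributions from two hypothetical detectors can be contrasted as follows. The data are synthetic, not the study's results; only the 20 bp threshold echoes the abstract.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
lengths_tool_a = rng.geometric(p=0.08, size=5000) + 11  # microsatellite lengths (bp), tool A
lengths_tool_b = rng.geometric(p=0.05, size=5000) + 11  # microsatellite lengths (bp), tool B

stat, pval = ks_2samp(lengths_tool_a, lengths_tool_b)   # compare the two distributions
short_a = np.mean(lengths_tool_a < 20)                   # fraction of short (<20 bp) calls
short_b = np.mean(lengths_tool_b < 20)
print(f"KS statistic={stat:.3f}, p={pval:.2e}, short fraction: {short_a:.2f} vs {short_b:.2f}")
```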
Unraveling a Neanderthal palimpsest from a zooarcheological and taphonomic perspective
Practically all archeological assemblages are palimpsests. In spite of the high temporal resolution of the Abric Romaní site, level O, dated to around 55 ka, is not an exception. This paper focuses on a zooarcheological and taphonomic analysis of this level, paying special attention to spatial and temporal approaches. The main goal is to unravel the palimpsest at the finest possible level by using different methods and techniques, such as archeostratigraphy, anatomical and taxonomical identification, taphonomic analysis, faunal refits and tooth wear analysis. The results obtained are compared to ethnoarcheological data so as to interpret site structure. In addition, activities carried out over different time spans (from individual episodes to long-term behaviors) are detected, and their spatial extent is explored, allowing inferences to be made about settlement dynamics. This leads us to discuss the temporal and spatial scales over which Neanderthals carried out different activities within the site, and how they can be studied through the archeological record.
The cerebellar transcriptome during postnatal development of the Ts1Cje mouse, a segmental trisomy model for Down syndrome
The central nervous system of persons with Down syndrome presents cytoarchitectural abnormalities that likely result from gene-dosage effects affecting the expression of key developmental genes. To test this hypothesis, we have investigated the transcriptome of the cerebellum of the Ts1Cje mouse model of Down syndrome during postnatal development using microarrays and quantitative PCR (qPCR). Genes present in three copies were consistently overexpressed, with a mean ratio relative to euploid of 1.52 as determined by qPCR. Out of 63 three-copy genes tested, only five, nine and seven genes had ratios >2 or <1.2 at postnatal days 0 (P0), P15 and P30, respectively. This gene-dosage effect was associated with a dysregulation of the expression of some two-copy genes. Out of 8258 genes examined, the Ts1Cje/euploid ratios differed significantly from 1.0 for 406 (80 and 154 with ratios above 1.5 and below 0.7, respectively), 333 (11 above 1.5 and 55 below 0.7) and 246 genes (59 above 1.5 and 69 below 0.7) at P0, P15 and P30, respectively. Among the two-copy genes differentially expressed in the trisomic cerebellum, six homeobox genes, two belonging to the Notch pathway, were severely repressed. Overall, at P0, transcripts involved in cell differentiation and development were over-represented among the dysregulated genes, suggesting that cell differentiation and migration might be more altered than cell proliferation. Finally, global gene profiling revealed that transcription in Ts1Cje mice is more affected by developmental changes than by the trisomic state, and that there is no detectable delay in the postnatal development of the cerebellum of Ts1Cje mice.
String Matching and 1d Lattice Gases
We calculate the probability distributions for the number of occurrences
of a given word in a random string of letters. Analytical expressions for
the distribution are known in two asymptotic regimes: for strings much
longer than the word, where the distribution becomes Gaussian, and for long
words with a finite expected number of matches, where it becomes compound
Poisson. However, it is known that these limiting distributions do not work
well in the intermediate regime between them. We show that the problem of
calculating the string matching probability can be cast into that of
determining the configurational partition function of a 1d lattice gas with
interacting particles so that the matching probability becomes the
grand-partition sum of the lattice gas, with the number of particles
corresponding to the number of matches. We perform a virial expansion of the
effective equation of state and obtain the probability distribution. Our result
reproduces the behavior of the distribution in all regimes. We are also able to
show analytically how the limiting distributions arise. Our analysis builds on
the fact that the effective interactions between the particles consist of a
relatively strong core whose size is the word length, followed by a weak,
exponentially decaying tail. We find that the asymptotic regimes correspond to
the case where the tail of the interactions can be neglected, while in the
intermediate regime they need to be kept in the analysis. Our results are
readily generalized to the case where the random strings are generated by more
complicated stochastic processes such as a non-uniform letter probability
distribution or Markov chains. We show that in these cases the tails of the
effective interactions can be made even more dominant, thus rendering the
asymptotic approximations less accurate in such a regime.
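For intuition about the quantity under study, the occurrence-count distribution can be sampled by brute force. The sketch below is unrelated to the paper's analytical lattice-gas treatment; the alphabet, word and string length are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
alphabet, word, n, trials = "ab", "aba", 200, 5000

def count_occurrences(s, w):
    # count overlapping occurrences of w in s
    return sum(s[i:i + len(w)] == w for i in range(len(s) - len(w) + 1))

counts = [count_occurrences("".join(rng.choice(list(alphabet), size=n)), word)
          for _ in range(trials)]

mean, var = np.mean(counts), np.var(counts)
print(f"mean={mean:.2f}, variance={var:.2f}")
# A plain Poisson fit would force variance ~ mean; clumping of the
# self-overlapping word "aba" makes the empirical distribution deviate from
# both the Poisson and the Gaussian forms outside their asymptotic regimes.
```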
Alteration of ribosome function upon 5-fluorouracil treatment favors cancer cell drug-tolerance.
Mechanisms of drug-tolerance remain poorly understood and have been linked to genomic but also to non-genomic processes. 5-fluorouracil (5-FU), the most widely used chemotherapy in oncology, is associated with resistance. While prescribed as an inhibitor of DNA replication, 5-FU alters all RNA pathways. Here, we show that 5-FU treatment leads to the production of fluorinated ribosomes exhibiting altered translational activities. 5-FU is incorporated into ribosomal RNAs of mature ribosomes in cancer cell lines, colorectal xenografts, and human tumors. Fluorinated ribosomes appear to be functional, yet they display a selective translational activity towards mRNAs depending on the nature of their 5'-untranslated region. As a result, we find that sustained translation of IGF-1R mRNA, which encodes one of the most potent cell survival effectors, promotes the survival of 5-FU-treated colorectal cancer cells. Altogether, our results demonstrate that "man-made" fluorinated ribosomes favor the drug-tolerant cellular phenotype by promoting translation of survival genes.
GReEn: a tool for efficient compression of genome resequencing data
Research in the genomic sciences is confronted with the volume of sequencing and resequencing data increasing at a higher pace than that of data storage and communication resources, shifting a significant part of research budgets from the sequencing component of a project to the computational one. Hence, being able to efficiently store sequencing and resequencing data is a problem of paramount importance. In this article, we describe GReEn (Genome Resequencing Encoding), a tool for compressing genome resequencing data using a reference genome sequence. It overcomes some drawbacks of the recently proposed tool GRS, namely, the possibility of compressing sequences that cannot be handled by GRS, faster running times and compression gains of over 100-fold for some sequences. This tool is freely available for non-commercial use at ftp://ftp.ieeta.pt/~ap/codecs/GReEn1.tar.gz
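The general idea of reference-based compression, of which GReEn is a far more sophisticated instance, can be illustrated with a toy diff-against-reference sketch. The sequences below are made up, and GReEn's actual codec differs.

```python
# Store only the positions where the target differs from the reference,
# instead of the full target sequence.
reference = "ACGTACGTACGTACGT"
target    = "ACGTACGAACGTACGC"

diffs = [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]
print(diffs)  # [(7, 'A'), (15, 'C')]

# Decompression: apply the stored differences back onto the reference.
decoded = list(reference)
for i, base in diffs:
    decoded[i] = base
assert "".join(decoded) == target
```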
ContDist: a tool for the analysis of quantitative gene and promoter properties
Background: The understanding of how promoter regions regulate gene expression is complicated and far from complete. It is known that histone-mediated regulation of DNA compactness, DNA methylation, transcription factor binding sites and CpG islands play a role in the transcriptional regulation of a gene. Many high-throughput techniques now exist which permit the detection of epigenetic marks and regulatory elements in the promoter regions of thousands of genes. However, so far the subsequent analysis of such experiments (e.g. the resulting gene lists) has been hampered by the fact that no tool currently exists for a detailed analysis of the promoter regions. Results: We present ContDist, a tool to statistically analyze quantitative gene and promoter properties. The software includes approximately 200 quantitative features of gene and promoter regions for 7 commonly studied species. In contrast to "traditional" ontological analysis, which only works on qualitative data, all the features in the underlying annotation database are quantitative gene and promoter properties. Utilizing this tool's strong focus on the promoter region, we show its usefulness in two case studies: the first on differentially methylated promoters and the second on the fundamental differences between housekeeping and tissue-specific genes. The two case studies allow both the confirmation of recent findings and the revelation of previously unreported biological relations. Conclusion: ContDist is a new tool with two important properties: 1) it has a strong focus on the promoter region, which is usually disregarded by virtually all ontology tools, and 2) it uses quantitative (continuously distributed) features of genes and their promoter regions which are not available in any other tool. ContDist is available from http://web.bioinformatics.cicbiogune.es/CD/ContDistribution.php
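A toy example of the kind of analysis described, comparing a continuously distributed promoter feature between a gene list and the genomic background with a rank test, might look like this. The values are synthetic, and the feature name and choice of test are assumptions for illustration, not ContDist internals.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
background_cpg = rng.beta(2, 5, size=10000)  # synthetic CpG density of all promoters
genelist_cpg   = rng.beta(3, 4, size=300)    # synthetic values for the gene list of interest

stat, pval = mannwhitneyu(genelist_cpg, background_cpg, alternative="two-sided")
print(f"Mann-Whitney U={stat:.0f}, p={pval:.2e}")
```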
Core and edge modeling of JT-60SA H-mode highly radiative scenarios using SOLEDGE3X–EIRENE and METIS codes
In its first phase of exploitation, JT-60SA will be equipped with an inertially cooled divertor, which can sustain heat loads of 10 MW/m² on the targets for a few seconds, much shorter than the intended discharge duration. Therefore, in order to maximize the duration of discharges, it is crucial to develop operational scenarios with a high radiated fraction in the plasma edge region without unacceptably compromising the scenario performance. In this study, the core and edge conditions of unseeded and neon-seeded deuterium H-mode scenarios in JT-60SA were investigated using the METIS and SOLEDGE3X–EIRENE codes. The aim was to determine whether, and under which operational conditions, it would be possible to achieve heat loads at the targets significantly lower than 10 MW/m² and potentially establish a divertor-detached regime while keeping favorable plasma core conditions. In a first analysis, the edge parameter space of unseeded scenarios was investigated. Simulations at an intermediate edge power of 15 MW indicate that, without seeded impurities, the heat loads at the targets are higher than 10 MW/m² in attached cases, and achieving detachment is challenging, requiring upstream electron densities of at least 4 × 10¹⁹ m⁻³. This points toward the need for impurity injection during the first period of exploitation of the machine. Therefore, neon seeding simulations were carried out, performing a seeding-rate scan and an injected-power scan while keeping the upstream electron density at the separatrix at 3 × 10¹⁹ m⁻³. They show that at 15 MW of power injected into the edge plasma, the inner target is easily detached and presents low heat loads when neon is injected. However, at the outer target, the heat fluxes are not lowered below 10 MW/m², even when the power losses in the edge plasma equal 50% of the power crossing the separatrix. Therefore, the tokamak will probably need to be operated in a deeply detached regime in its first phase of exploitation for discharges longer than a few seconds. In the framework of core–edge integrated modeling, using METIS, the power radiated in the core was computed for the most interesting cases.
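As a back-of-the-envelope illustration of why a high radiated fraction matters, the target heat flux can be estimated from the power reaching the divertor spread over an assumed wetted area. The numbers below (wetted area, radiated fractions) are illustrative assumptions only, not SOLEDGE3X–EIRENE results.

```python
# q_target ~ P_edge * (1 - f_rad) / A_wetted: more edge radiation means less
# power conducted to the targets for the same injected power.
p_edge_mw = 15.0        # power entering the edge plasma (MW), as in the scans above
wetted_area_m2 = 0.6    # assumed effective wetted area on the targets (m^2)

for f_rad in (0.0, 0.3, 0.5, 0.7, 0.9):
    q_target = p_edge_mw * (1.0 - f_rad) / wetted_area_m2
    flag = "OK" if q_target <= 10.0 else "above 10 MW/m2 limit"
    print(f"f_rad={f_rad:.1f}: q_target approx {q_target:4.1f} MW/m2 ({flag})")
```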