Microlensing towards M31 with MDM data
We report the final analysis of a search for microlensing events in the
direction of the Andromeda galaxy, which aimed to probe the MACHO composition
of the M31 halo using data collected during the 1998-99 observational campaign
at the MDM observatory. In a previous paper, we discussed the results from a
first set of observations. Here, we deal with the complete data set, and we
take advantage of some INT observations in the 1999-2000 seasons. Merging
data sets taken with different instruments proves very useful: the longer
available baseline allows us to test the uniqueness characteristic of
microlensing events. As a result, all the candidate
microlensing events previously reported turn out to be variable stars. We
further discuss a selection based on different criteria, aimed at the detection
of short-duration events. We find three candidates whose positions are
consistent with self-lensing events, although the available data do not
allow us to conclude unambiguously that they are due to microlensing.

Comment: Accepted for publication in Astronomy and Astrophysics
Beating the random walk: a performance assessment of long-term interest rate forecasts
This article assesses the performance of a number of long-term interest rate forecast approaches, namely time series models, structural economic models, expert forecasts and combinations thereof. The predictive performance of these approaches is compared using out-of-sample forecast errors, where a random walk forecast acts as the benchmark. It is found that for five major Organization for Economic Co-operation and Development (OECD) countries, namely the US, Germany, the UK, The Netherlands and Japan, the other forecasting approaches do not outperform the random walk on a 3-month forecast horizon. On a 12-month forecast horizon, the random walk model is outperformed by a model that combines economic data and expert forecasts. Several methods of combination are considered: equal weights, optimized weights and weights based on the forecast error. The additional information content of the structural models and expert knowledge appears to add considerably to forecasting performance 12 months ahead. © 2013 Taylor & Francis
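The error-based combination scheme mentioned above can be sketched in a few lines. This is a minimal illustration assuming inverse mean-squared-error weighting of two forecast sources; the weighting rule, the error series and the current forecasts are all illustrative assumptions, not the article's exact specification.

```python
# Sketch: combining two interest-rate forecasts with weights based on past
# forecast errors (inverse-MSE weighting). All numbers are hypothetical.

def inverse_mse_weights(errors_a, errors_b):
    """Weight each forecaster by the inverse of its past mean squared error."""
    mse_a = sum(e * e for e in errors_a) / len(errors_a)
    mse_b = sum(e * e for e in errors_b) / len(errors_b)
    inv_a, inv_b = 1.0 / mse_a, 1.0 / mse_b
    total = inv_a + inv_b
    return inv_a / total, inv_b / total

def combine(forecast_a, forecast_b, w_a, w_b):
    """Weighted combination of two point forecasts."""
    return w_a * forecast_a + w_b * forecast_b

# Past 12-month-ahead forecast errors in percentage points (made up)
errors_model = [0.30, -0.20, 0.25, -0.15]   # structural economic model
errors_expert = [0.60, -0.50, 0.55, -0.45]  # expert forecasts

w_model, w_expert = inverse_mse_weights(errors_model, errors_expert)
# Current 12-month-ahead forecasts: 4.1% (model) and 4.5% (experts)
combined = combine(4.1, 4.5, w_model, w_expert)
```

The combined forecast is pulled towards the source with the smaller historical errors, which is the intuition behind letting forecast errors determine the weights.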
The selection landscape and genetic legacy of ancient Eurasians
The Holocene (beginning around 12,000 years ago) encompassed some of the most significant changes in human evolution, with far-reaching consequences for the dietary, physical and mental health of present-day populations. Using a dataset of more than 1,600 imputed ancient genomes, we modelled the selection landscape during the transition from hunting and gathering to farming and pastoralism across West Eurasia. We identify key selection signals related to metabolism, including that selection at the FADS cluster began earlier than previously reported and that selection near the LCT locus predates the emergence of the lactase persistence allele by thousands of years. We also find strong selection in the HLA region, possibly due to increased exposure to pathogens during the Bronze Age. Using ancient individuals to infer local ancestry tracts in over 400,000 samples from the UK Biobank, we identify widespread differences in the distribution of Mesolithic, Neolithic and Bronze Age ancestries across Eurasia. By calculating ancestry-specific polygenic risk scores, we show that height differences between Northern and Southern Europe are associated with differential Steppe ancestry rather than selection, and that risk alleles for mood-related phenotypes are enriched for Neolithic farmer ancestry, whereas risk alleles for diabetes and Alzheimer's disease are enriched for Western hunter-gatherer ancestry. Our results indicate that ancient selection and migration were large contributors to the distribution of phenotypic diversity in present-day Europeans.
Microlensing search towards M31
We present the first results of the analysis of data collected during the
1998-99 observational campaign at the 1.3 meter McGraw-Hill Telescope, towards
the Andromeda galaxy (M31), aimed at the detection of gravitational
microlensing effects as a probe of the presence of dark matter in our own
halo and in that of M31. The analysis is performed using the pixel lensing
technique, which consists in studying the flux variations of unresolved
sources and has been
proposed and implemented by the AGAPE collaboration. We carry out a shape
analysis by demanding that the detected flux variations be achromatic and
compatible with a Paczynski light curve. We apply the Durbin-Watson hypothesis
test to the residuals. Furthermore, we consider the background of variable
sources. Finally, five candidate microlensing events emerge from our selection.
Comparing with the predictions of a Monte Carlo simulation, assuming a standard
spherical model for the M31 and Galactic haloes, and typical values for the
MACHO mass, we find that our events are only marginally consistent with the
distribution of observable parameters predicted by the simulation.

Comment: 13 pages, to appear in A&A
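The Paczynski light curve against which the detected flux variations are tested has a standard closed form: the point-lens magnification depends only on the lens-source separation in Einstein radii. A minimal sketch follows; the event parameters (peak time, Einstein time, impact parameter) are illustrative values, not fitted ones from the survey.

```python
import math

# Standard point-source point-lens (Paczynski) microlensing light curve:
# achromatic and symmetric in time, as required by the shape analysis.

def magnification(u):
    """Point-lens magnification for separation u (in Einstein radii)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def paczynski(t, t0, tE, u0):
    """Magnification at time t for peak time t0, Einstein time tE and
    minimum impact parameter u0 (rectilinear relative motion)."""
    u = math.sqrt(u0 * u0 + ((t - t0) / tE) ** 2)
    return magnification(u)

# Illustrative event: peak at t0 = 0, tE = 20 days, u0 = 0.5
peak = paczynski(0.0, 0.0, 20.0, 0.5)   # maximum magnification, at t = t0
wing = paczynski(30.0, 0.0, 20.0, 0.5)  # 30 days after the peak
```

Because the magnification depends only on |t - t0|, the curve is symmetric about the peak, one of the uniqueness characteristics that separates microlensing from most variable stars.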
Prediction of 3D grinding temperature field based on meshless method considering infinite element
© 2018, Springer-Verlag London Ltd., part of Springer Nature. A three-dimensional numerical model to calculate the grinding temperature field distribution is presented. The finite block method, developed from the meshless method, is used to deal with both stationary and transient heat conduction problems. The influences of workpiece feed velocity, cooling coefficient and depth of cut on the temperature distribution are considered, and the model accounts for temperature-dependent thermal conductivity and specific heat. The Lagrange partial differential matrix for the heat transfer governing equation is obtained using Lagrange series and a mapping technique. The grinding wheel-workpiece contact area is treated as a moving distributed square heat source. The Laplace transformation method and Durbin's inverse technique are employed in the transient heat conduction analysis. The results of the developed model agree well with published finite element and analytical solutions, and comparison with ABAQUS results shows that the finite block method achieves better convergence and accuracy than the finite element method. In addition, a three-dimensional infinite element is introduced into the thermal analysis, which offers great advantages in the simulation of large-boundary problems.

The work was funded by the China Scholarship Council, the Fundamental Research Funds for the Central Universities (N160306006), the National Natural Science Foundation of China (51275084) and the Science and Technology Project of Shenyang (18006001).
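The transient problem the model solves can be illustrated in one dimension. The sketch below uses a plain explicit finite-difference scheme rather than the finite block method of the paper, with constant material properties and a prescribed surface heat flux standing in for the moving grinding source; all material values and the flux magnitude are assumed for illustration.

```python
# Minimal 1D sketch of transient heat conduction into a workpiece under a
# surface heat flux (explicit FTCS scheme, constant properties). This is a
# didactic stand-in for the 3D finite block model, not a reproduction of it.

def step(T, dx, dt, alpha, q_flux, k):
    """Advance one explicit time step; flux q_flux enters at node 0."""
    r = alpha * dt / (dx * dx)
    assert r <= 0.5, "explicit scheme unstable for this dt/dx combination"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    # Surface node: ghost-node treatment of the prescribed flux q = -k dT/dx
    new[0] = T[0] + r * (2.0 * T[1] - 2.0 * T[0] + 2.0 * dx * q_flux / k)
    new[-1] = T[-1]  # far boundary held at ambient temperature
    return new

# Workpiece slice 10 mm deep with steel-like properties (assumed values)
alpha, k = 1.2e-5, 45.0   # diffusivity (m^2/s), conductivity (W/(m K))
dx, dt = 1e-3, 0.02       # grid spacing (m), time step (s)
T = [20.0] * 11           # initial ambient temperature (deg C)
for _ in range(200):      # 4 s of heating
    T = step(T, dx, dt, alpha, 1e5, k)  # 100 kW/m^2 grinding heat flux
```

The temperature is highest at the heated surface and decays with depth, which is the qualitative behaviour the 3D model resolves for a moving square source.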
Modularization of biochemical networks based on classification of Petri net t-invariants
Background: Structural analysis of biochemical networks is a growing field in bioinformatics and systems biology. The availability of an increasing amount of biological data from molecular biological networks promises a deeper understanding but confronts researchers with the problem of combinatorial explosion. The amount of qualitative network data is growing much faster than the amount of quantitative data, such as enzyme kinetics. In many cases it is even impossible to measure quantitative data because of limitations of experimental methods, or for ethical reasons. Thus, a huge amount of qualitative data, such as interaction data, is available, but until now it has not been used sufficiently for modeling purposes. New approaches have been developed, but the complexity of the data often limits their application. Biochemical Petri nets make it possible to explore static and dynamic qualitative system properties. One Petri net approach is model validation based on the computation of the system's invariant properties, focusing on t-invariants. T-invariants correspond to subnetworks, which describe the basic system behavior. With increasing system complexity, the basic behavior can only be expressed by a huge number of t-invariants. According to our validation criteria for biochemical Petri nets, the necessary verification of the biological meaning, by interpreting each subnetwork (t-invariant) manually, is no longer feasible. Thus, an automated, biologically meaningful classification would be helpful in analyzing t-invariants and in understanding the basic behavior of the considered biological system.

Methods: Here, we introduce a new approach to automatically classify t-invariants to cope with network complexity. We apply clustering techniques such as UPGMA, Complete Linkage, Single Linkage and Neighbor Joining in combination with different distance measures to obtain biologically meaningful clusters (t-clusters), which can be interpreted as modules. To find the optimal number of t-clusters to consider for interpretation, the cluster validity measure Silhouette Width is applied.

Results: We considered two different case studies as examples: a small signal transduction pathway (the pheromone response pathway in Saccharomyces cerevisiae) and a medium-sized gene regulatory network (gene regulation of Duchenne muscular dystrophy). We automatically classified the t-invariants into functionally distinct t-clusters, which could be interpreted biologically as functional modules in the network. We found differences in the suitability of the various distance measures as well as of the clustering methods. In terms of a biologically meaningful classification of t-invariants, the best results are obtained using the Tanimoto distance measure. As for clustering methods, the results suggest that UPGMA and Complete Linkage are suitable for clustering t-invariants with respect to biological interpretability.

Conclusion: We propose a new approach for the biological classification of Petri net t-invariants based on cluster analysis. Through biologically meaningful data reduction and structuring of network processes, large sets of t-invariants can be evaluated, allowing for model validation of qualitative biochemical Petri nets. This approach can also be applied to elementary mode analysis.
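The Tanimoto distance that performed best above is easy to state for t-invariants viewed as binary transition-support vectors. A minimal sketch follows; the three toy invariants are made-up examples, not from either case study.

```python
# Tanimoto (Jaccard) distance between t-invariants, each encoded as a binary
# vector over the net's transitions (1 if the transition occurs in the
# invariant's support, 0 otherwise).

def tanimoto_distance(a, b):
    """1 - |intersection| / |union| over the supports of a and b."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return (1.0 - both / either) if either else 0.0

# Three toy t-invariants over five transitions t1..t5
inv1 = [1, 1, 0, 0, 1]
inv2 = [1, 1, 0, 1, 1]   # shares most of its support with inv1
inv3 = [0, 0, 1, 1, 0]   # largely disjoint support

d12 = tanimoto_distance(inv1, inv2)  # small: likely the same t-cluster
d13 = tanimoto_distance(inv1, inv3)  # maximal: a different module
```

The pairwise distance matrix computed this way is exactly what hierarchical methods such as UPGMA or Complete Linkage consume to build the t-clusters.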
More Than 1,001 Problems with Protein Domain Databases: Transmembrane Regions, Signal Peptides and the Issue of Sequence Homology
Large-scale genome sequencing gained general importance for life science because functional annotation of otherwise experimentally uncharacterized sequences is made possible by the theory of biomolecular sequence homology. Historically, the paradigm that similarity of protein sequences implies common structure, function and ancestry was generalized from studies of globular domains. Having the same fold imposes strict conditions on the packing of the hydrophobic core, requiring similarity of hydrophobic patterns. The implications of sequence similarity among non-globular protein segments have not been studied to the same extent; nevertheless, homology considerations are silently extended to them. This appears especially detrimental in the case of transmembrane helices (TMs) and signal peptides (SPs), where sequence similarity is necessarily a consequence of physical requirements rather than common ancestry. Thus, matching of SPs/TMs creates the illusion of matching hydrophobic cores, and inclusion of SPs/TMs in domain models can give rise to wrong annotations. More than 1,001 domains among the 10,340 models of Pfam release 23, and 18 of the 809 domains of SMART version 6, contain SP/TM regions. As expected, fragment-mode HMM searches generate promiscuous hits limited solely to the SP/TM part among clearly unrelated proteins. More worryingly, we show explicit examples in which the scores of clearly false-positive hits, even in global-mode searches, are elevated into the significance range just by matching the hydrophobic runs. In the PIR iProClass database v3.74, using conservative criteria, we find that between 2.1% and 13.6% of its annotated Pfam hits appear unjustified for a set of validated domain models. Thus, false-positive domain hits enforced by SP/TM regions can lead to dramatic annotation errors where the hit has nothing in common with the problematic domain model except the SP/TM region itself. We suggest a workflow of flagging problematic hits arising from SP/TM-containing models for critical reconsideration by annotation users.
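The flagging step in such a workflow amounts to detecting long hydrophobic runs. A minimal sketch using the standard Kyte-Doolittle hydropathy scale follows; the window size and threshold are common choices for TM-helix detection, not values taken from this paper, and the two sequences are artificial.

```python
# Sketch: flag windows hydrophobic enough to be a TM helix or SP core, so
# that domain hits covering only such runs can be marked for review.

KD = {  # Kyte-Doolittle hydropathy index (standard published values)
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def hydrophobic_windows(seq, window=19, threshold=1.6):
    """Start positions of windows whose mean hydropathy >= threshold."""
    hits = []
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score >= threshold:
            hits.append(i)
    return hits

# An aliphatic run mimics a TM helix; a charged stretch does not.
tm_like = "LLLLILLVLLALLILLVLL"   # 19 strongly hydrophobic residues
soluble = "DEKRQNDEKRQNDEKRQND"   # 19 charged/polar residues
```

A domain hit whose aligned region falls entirely inside such flagged windows is exactly the kind of SP/TM-only match that deserves critical reconsideration rather than automatic annotation transfer.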
Characterising chromosome rearrangements: recent technical advances in molecular cytogenetics
Genomic rearrangements can result in losses, amplifications, translocations and inversions of DNA fragments, thereby modifying genome architecture and potentially having clinical consequences. Many genomic disorders caused by structural variation were initially uncovered by early cytogenetic methods. The last decade has seen significant progress in molecular cytogenetic techniques, allowing rapid and precise detection of structural rearrangements on a whole-genome scale. The high resolution attainable with these recently developed techniques has also uncovered the role of structural variants in normal genetic variation alongside single-nucleotide polymorphisms (SNPs). We describe how array-based comparative genomic hybridisation, SNP arrays, array painting and next-generation sequencing analytical methods (read depth, read pair and split read) allow the extensive characterisation of chromosome rearrangements in human genomes.