
    Magnetic Exchange Couplings from Noncollinear Spin Density Functional Perturbation Theory

    We propose a method for the evaluation of magnetic exchange couplings based on noncollinear spin-density functional calculations. The method employs the second derivative of the total Kohn-Sham energy of a single reference state, in contrast to approximations based on Kohn-Sham total energy differences. The advantage of our approach is twofold: it provides a physically motivated picture of the transition from a low-spin to a high-spin state, and it utilizes a perturbation scheme for the evaluation of magnetic exchange couplings. The latter simplifies the way these parameters are predicted from first principles: it avoids the non-trivial search for different spin states that needs to be carried out in energy-difference methods, and it opens the possibility of "black-boxifying" the extraction of exchange couplings from density functional theory calculations. We present proof-of-concept calculations of magnetic exchange couplings in the H-He-H model system and in an oxovanadium bimetallic complex, where the results can be intuitively rationalized. Comment: J. Chem. Phys. (accepted)
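    A worked sketch of the two routes to the coupling J, assuming a classical two-spin Heisenberg model H = -2J S1·S2 (sign and prefactor conventions vary between papers; none of the expressions below are quoted from the abstract). The perturbative route reads J off the curvature of the energy of a single reference state as one local moment is rotated by an angle theta, whereas the energy-difference route compares separately converged high-spin and broken-symmetry states:

        E(\theta) = E_0 - 2 J S_1 S_2 \cos\theta
        \quad\Longrightarrow\quad
        J = \frac{1}{2 S_1 S_2}\,\left.\frac{\partial^2 E}{\partial\theta^2}\right|_{\theta=0}

        J \approx \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
                       {\langle S^2\rangle_{\mathrm{HS}} - \langle S^2\rangle_{\mathrm{BS}}}
        \qquad \text{(energy-difference, broken-symmetry estimate)}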

    Constrained multivariate association with longitudinal phenotypes

    The incorporation of longitudinal data into genetic epidemiological studies has the potential to provide valuable information regarding the effect of time on complex disease etiology. Yet, the majority of research focuses on variables collected from a single time point. The aim of this study was to test for main effects on a quantitative trait across time points using a constrained maximum-likelihood measured genotype approach. This method simultaneously accounts for all repeat measurements of a phenotype in families. We applied this method to systolic blood pressure (SBP) measurements from three time points using the Genetic Analysis Workshop 19 (GAW19) whole-genome sequence family simulated data set and 200 simulated replicates. Data consisted of 849 individuals from 20 extended Mexican American pedigrees. Comparisons were made among 3 statistical approaches: (a) constrained, where the effect of a variant or gene region on the mean trait value was constrained to be equal across all measurements; (b) unconstrained, where the variant or gene region effect was estimated separately for each time point; and (c) the average SBP measurement from three time points. These approaches were run for nine genetic variants with known effect sizes (>0.001) for SBP variability and a known gene-centric kernel (MAP4)-based test under the GAW19 simulation model across 200 replicates.
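    A minimal sketch of the constrained-versus-unconstrained comparison, fit with ordinary least squares on synthetic long-format data; the actual study uses a variance-component measured genotype model (SOLAR) that accounts for family structure via the pedigree, which is omitted here. Variable names such as sbp, snp, and time are illustrative only.

        # Sketch: one SNP effect shared across time points (constrained) vs.
        # time-specific SNP effects (unconstrained), compared by likelihood ratio.
        # Family relatedness is ignored; the study's model accounts for it.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from scipy.stats import chi2

        rng = np.random.default_rng(0)
        n, times = 300, 3
        snp = rng.binomial(2, 0.3, n)                    # additive genotype coded 0/1/2
        df = pd.DataFrame({
            "id": np.repeat(np.arange(n), times),
            "time": np.tile(np.arange(times), n),
            "snp": np.repeat(snp, times),
        })
        df["sbp"] = 120 + 2.0 * df["snp"] + 1.5 * df["time"] + rng.normal(0, 8, len(df))

        constrained = smf.ols("sbp ~ snp + C(time)", data=df).fit()            # one shared SNP effect
        unconstrained = smf.ols("sbp ~ snp:C(time) + C(time)", data=df).fit()  # one SNP effect per time point

        lrt = 2 * (unconstrained.llf - constrained.llf)
        p_value = chi2.sf(lrt, df=times - 1)             # 3 time-specific effects vs 1 shared effect
        print(f"LRT = {lrt:.2f}, p = {p_value:.3g}")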

    Forecasting seasonal time series with computational intelligence: on recent methods and the potential of their combinations

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce novel methods for multi-step seasonal time series forecasting. All the presented methods stem from computational intelligence techniques: evolutionary artificial neural networks, support vector machines and genuine linguistic fuzzy rules. Performance of the suggested methods is experimentally justified on seasonal time series from distinct domains on three forecasting horizons. The most important contribution is the introduction of a new hybrid combination using linguistic fuzzy rules and the other computational intelligence methods. This hybrid combination presents competitive forecasts when compared with the popular ARIMA method. Moreover, such a hybrid model is easier for decision-makers to interpret when modeling trended series. The research was supported by the European Regional Development Fund in the IT4Innovations Centre of Excellence project (CZ.1.05/1.1.00/02.0070). Furthermore, we gratefully acknowledge partial support of the project KONTAKT II - LH12229 of MŠMT ČR.
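    A minimal sketch of a hybrid pipeline of this kind, assuming a lag-based formulation of the forecasting problem and using scikit-learn's SVR and MLPRegressor as stand-ins for the paper's support vector machine and evolutionary neural network; the linguistic fuzzy-rule combiner is replaced here by a plain average, so this illustrates the overall structure rather than the paper's actual method.

        # Sketch: multi-step seasonal forecasting with two CI base learners whose
        # forecasts are combined (the paper combines them via linguistic fuzzy rules).
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor

        def make_lagged(series, n_lags):
            """Turn a 1-D series into a (samples, n_lags) matrix and next-value targets."""
            X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
            return X, series[n_lags:]

        rng = np.random.default_rng(1)
        t = np.arange(400)
        series = 10 + 0.02 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, len(t))

        n_lags, horizon = 12, 6
        X, y = make_lagged(series, n_lags)
        models = [SVR(C=10.0),
                  MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)]
        for m in models:
            m.fit(X, y)

        # Iterated multi-step forecast: feed each combined prediction back in as the newest lag.
        window = list(series[-n_lags:])
        forecasts = []
        for _ in range(horizon):
            x_next = np.array(window[-n_lags:]).reshape(1, -1)
            preds = [m.predict(x_next)[0] for m in models]
            combined = float(np.mean(preds))     # combination step (fuzzy rules in the paper)
            forecasts.append(combined)
            window.append(combined)
        print(forecasts)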

    Heritability and genetic associations of triglyceride and HDL-C levels using pedigree-based and empirical kinships

    The heritability of a phenotype is an estimate of the percentage of variance in that phenotype that is attributable to additive genetic factors. Heritability is optimally estimated in family-based sample populations. Traditionally, this involves use of a pedigree-based kinship coefficient generated from the collected genealogical relationships between family members. An alternative, when dense genotype data are available, is to directly measure the empirical kinship between samples. This study compares the use of pedigree and empirical kinships in the GAW20 data set. Two phenotypes were assessed: triglyceride levels and high-density lipoprotein cholesterol (HDL-C) levels pre- and postintervention with the cholesterol-reducing drug fenofibrate. Using SOLAR (Sequential Oligogenic Linkage Analysis Routines), pedigree-based kinships and empirically calculated kinships (using IBDLD and LDAK) were used to calculate phenotype heritability. In addition, a genome-wide association study was conducted using each kinship model for each phenotype to identify genetic variants significantly associated with phenotypic variation. The variant rs247617 was significantly associated with HDL-C levels both pre- and post-fenofibrate intervention. Overall, the phenotype heritabilities calculated using pedigree-based kinships or either of the empirical kinships generated using IBDLD or LDAK were comparable. Phenotype heritabilities estimated from empirical kinships generated using IBDLD were closest to the pedigree-based estimations. Given that there was not an appreciable amount of unknown relatedness between the pedigrees in this data set, a large increase in heritability from using empirical kinship was not expected, and our calculations support this. Importantly, these results demonstrate that when sufficient genotypic data are available, empirical kinship estimation is a practical alternative to using pedigree-based kinships.
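    A minimal sketch of the variance-component model that underlies such heritability estimates, assuming a generic n-by-n kinship matrix K (pedigree-derived or empirical) and no covariates; the study itself uses SOLAR with kinships from the pedigree, IBDLD, or LDAK rather than this toy maximum-likelihood estimator.

        # Sketch: ML estimate of narrow-sense heritability h2 for
        # y = Xb + g + e, with g ~ N(0, sg2*K) and e ~ N(0, se2*I).
        import numpy as np
        from scipy.optimize import minimize_scalar

        def heritability_ml(y, K, X=None):
            n = len(y)
            if X is None:
                X = np.ones((n, 1))                  # intercept-only fixed effects
            s, U = np.linalg.eigh(K)                 # K = U diag(s) U^T
            yt, Xt = U.T @ y, U.T @ X                # rotate into the eigenbasis of K

            def neg_loglik(h2):
                w = h2 * s + (1.0 - h2)              # per-sample variance weights
                Xw = Xt / w[:, None]
                beta = np.linalg.solve(Xt.T @ Xw, Xw.T @ yt)   # GLS fixed effects
                r = yt - Xt @ beta
                sp2 = np.mean(r**2 / w)              # profiled total variance
                return 0.5 * (n * np.log(2 * np.pi * sp2) + np.sum(np.log(w)) + n)

            return minimize_scalar(neg_loglik, bounds=(1e-4, 1 - 1e-4), method="bounded").x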

    Reliability of genomic predictions of complex human phenotypes

    Genome-wide association studies have helped us identify a wealth of genetic variants associated with complex human phenotypes. Because most variants explain a small portion of the total phenotypic variation, however, marker-based studies remain limited in their ability to predict such phenotypes. Here, we show how modern statistical genetic techniques borrowed from animal breeding can be employed to increase the accuracy of genomic prediction of complex phenotypes and the power of genetic mapping studies. Specifically, using the triglyceride data of the GAW20 data set, we apply genomic best linear unbiased prediction (G-BLUP) methods to obtain empirical genetic values (EGVs) for each triglyceride phenotype and each individual. We then study 2 different factors that influence the prediction accuracy of G-BLUP for the analysis of human data: (a) the choice of kinship matrix, and (b) the overall level of relatedness. The resulting genetic values represent the total genetic component for the phenotype of interest and can be used to represent a trait without its environmental component. Finally, using empirical data, we demonstrate how this method can be used to increase the power of genetic mapping studies. In sum, our results show that dense genome-wide data can be used in a wider scope than previously anticipated.
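    A minimal sketch of the G-BLUP step, assuming a VanRaden-style genomic relationship matrix built from 0/1/2 genotypes, an overall mean as the only fixed effect, and a heritability h2 treated as known (in practice it would be estimated, for example with a variance-component fit like the sketch above); function and variable names are illustrative, not taken from the paper.

        # Sketch: genomic BLUP of total genetic values (EGVs) from a relationship matrix.
        import numpy as np

        def grm(genotypes):
            """VanRaden-style genomic relationship matrix from an (n x m) 0/1/2 matrix."""
            p = genotypes.mean(axis=0) / 2.0          # allele frequencies
            Z = genotypes - 2.0 * p                   # center each marker
            return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

        def gblup_egv(y, G, h2):
            """BLUP of genetic values: h2*G (h2*G + (1-h2)*I)^-1 (y - mean), unit total variance."""
            n = len(y)
            V = h2 * G + (1.0 - h2) * np.eye(n)       # phenotypic covariance
            return h2 * G @ np.linalg.solve(V, y - y.mean())

        rng = np.random.default_rng(2)
        geno = rng.binomial(2, 0.4, size=(200, 500))  # synthetic genotypes
        y = rng.normal(size=200)                      # synthetic (scaled) phenotype
        egv = gblup_egv(y, grm(geno), h2=0.4)
        print(egv[:5])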

    Forecasting seasonal time series with computational intelligence: contribution of a combination of distinct methods

    Accurate time series forecasting is important for showing how the past continues to affect the future and for planning our day-to-day activities. In recent years, a large literature has evolved on the use of computational intelligence in many forecasting applications. In this paper, several computational intelligence techniques (genetic algorithms, neural networks, support vector machines, fuzzy rules) are combined in a distinct way to forecast a set of referenced time series. Forecasting performance is compared to a standard method frequently used in practice. Supported by project DAR 1M0572 of the MŠMT ČR.

    Methodological proposal for the identification of incremental innovations in SMEs

    Purpose: To propose a methodology for identifying incremental innovations in order to find sustainable competitive advantages of organizations, especially SMEs, through the use of Money Makers. Design/Methodology/Approach: The study was carried out using the inductive-deductive method, guided by a theoretical framework for the analysis of the phenomenon. Techniques such as technological surveillance, technological mapping, and technological scanning were applied during the study. Findings: The proposed Money Makers identification model is executed in 4 stages, which guide the search for technological Money Makers and for the process improvements that impact the business model of companies. In the first stage, the baseline is constructed by evaluating the current state of the company's technologies. Subsequently, trends at the level of Money Makers are identified, and a selection procedure is applied depending on the area of the value chain where they can be implemented. Practical Implications: Although there are several studies on the same issue, our model is practical to apply because of its simplicity. Originality/Value: The study offers a fairly simple methodological proposal for the identification of incremental innovations in SMEs.