
    Privacy Regulation and e-Research

    The Office of the Privacy Commissioner appreciates the kind invitation from the Law Faculty at Queensland University of Technology to present at the 2007 Legal Framework for e-Research Conference. This legal framework project coincides with a key period for privacy regulation in Australia, most significantly due to the current inquiry into privacy law being conducted by the Australian Law Reform Commission (ALRC). At the same time, public policy is increasingly examining how best to facilitate research interests through the use of personal information. The Office notes, for example, the National Data Network initiative, as well as the inquiry conducted by the Productivity Commission into the role of research in Australia, to which the Office made a submission. In this chapter I aim to provide a brief overview of federal information privacy regulation, particularly as it applies to health and medical research, as well as to thumbnail possible opportunities for reform that may emerge from the current ALRC inquiry. These opportunities are discussed in detail in the Office's submission to that inquiry, available from our website.

    Preservice teachers’ adaptations to tensions associated with the edTPA during its early implementation in New York and Washington states

    The edTPA is a teaching performance assessment (TPA) that the states of New York and Washington implemented as a licensure requirement in 2013. While TPAs are not new modes of assessment, New York and Washington are the first states to use the edTPA specifically as a compulsory, high-stakes policy lever in an effort to strengthen the quality and accountability of teachers and teacher educators. This study examines 24 New York and Washington teaching candidates’ experiences with the edTPA during its first year of consequential use for state certification. The data, drawn from qualitative interviews that were part of a larger mixed-methods study, reveal that preservice teachers had to mediate several tensions associated with the edTPA’s dual role as a formative assessment tool and a licensure mechanism. In this paper, we identify those tensions, describe candidates’ efforts to mediate them, and discuss the extent to which that mediation process may or may not contribute to the improvement of teachers’ practices. Given the edTPA’s positioning in a policy context – specifically, the potential for the assessment’s locus of control, high stakes, and opaque rating process to distort the procedures it is intended to measure – the paper concludes with recommendations for teacher education programs aimed at capitalizing on the edTPA’s benefits and mitigating its unproductive tensions.

    Genomic alterations in primary gastric adenocarcinomas correlate with clinicopathological characteristics and survival.

    Background & aims: Pathogenesis of gastric cancer is driven by an accumulation of genetic changes that to a large extent occur at the chromosomal level. In order to investigate the patterns of chromosomal aberrations in gastric carcinomas, we performed genome-wide microarray-based comparative genomic hybridisation (microarray CGH). With this recently developed technique, chromosomal aberrations can be studied with high resolution and sensitivity. Methods: Array CGH was applied to a series of 35 gastric adenocarcinomas using a genome-wide scanning array with 2275 BAC and P1 clones spotted in triplicate. Each clone contains at least one STS for linkage to the sequence of the human genome. These arrays provide an average resolution of 1.4 Mb across the genome. DNA copy number changes were correlated with clinicopathological tumour characteristics as well as survival. Results: All 35 cancers showed chromosomal aberrations, and 16 of the 35 tumours showed one or more amplifications. The most frequent aberrations are gains of 8q24.2, 8q24.1, 20q13.12, 20q13.2, 7p11.2, 1q32.3, 8p23.1-p23.3; losses of 5q14.1, 18q22.1, 19p13.12-p13.3, 9p21.3-p24.3, 17p13.1-p13.3, 13q31.1, 16q22.1, 21q21.3; and amplifications of 7q21-q22 and 12q14.1-q21.1. These aberrations were correlated to clinicopathological characteristics and survival. Gain of 1q32.3 was significantly correlated with lymph node status (p=0.007). Tumours with loss of 18q22.1, as well as tumours with amplifications, were associated with poor survival (p=0.02, both). Conclusions: Microarray CGH has revealed several chromosomal regions that have not been described before in gastric cancer at this frequency and resolution, such as amplifications at 7q21-q22 and 12q14.1-q21.1, as well as gains at 1q32.3 and 7p11.2, and losses at 13q31.1. Interestingly, gain of 1q32.3 and loss of 18q22.1 are associated with a bad prognosis, indicating that these regions could harbour gene(s) that may determine aggressive tumour behaviour and poor clinical outcome.

    Using the Pareto principle in genome-wide breeding value estimation

    Genome-wide breeding value (GWEBV) estimation methods can be classified based on the prior distribution assumptions of marker effects. Genome-wide BLUP methods assume a normal prior distribution for all markers with a constant variance, and are computationally fast. Bayesian methods apply more flexible prior distributions of SNP effects that allow for very large SNP effects while most are small or even zero, but these methods are often computationally demanding because they rely on Markov chain Monte Carlo sampling. In this study, we adopted the Pareto principle to weight available marker loci, i.e., we consider that x% of the loci explain (100 - x)% of the total genetic variance. Assuming this principle, it is also possible to define the variances of the prior distribution of the 'big' and 'small' SNPs. The relatively few large SNPs explain a large proportion of the genetic variance, while the majority of the SNPs show small effects and explain a minor proportion of the genetic variance. We name this method MixP; its prior distribution is a mixture of two normal distributions, one with a big variance and one with a small variance. Simulation results, using a real Norwegian Red cattle pedigree, show that MixP is at least as accurate as the other methods in all studied cases. The method also reduces the hyper-parameters of the prior distribution from 2 (proportion and variance of SNPs with big effects) to 1 (proportion of SNPs with big effects), assuming the overall genetic variance is known. The mixture-of-normals prior made it possible to solve the equations iteratively, which reduced the computational load by two orders of magnitude. In the era of marker densities reaching the millions and whole-genome sequence data, MixP provides a computationally feasible Bayesian method of analysis.
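    The Pareto parameterisation described in this abstract can be sketched in a few lines. This is a minimal illustration under the assumption that a fraction p_big of loci explains (1 - p_big) of the total genetic variance; the function names and the simulation itself are hypothetical, not taken from the paper:

```python
import math
import random

def mixp_prior_variances(total_var, m, p_big):
    # Pareto split: a fraction p_big of the m loci explains (1 - p_big)
    # of the total genetic variance, and vice versa.
    var_big = (1.0 - p_big) * total_var / (p_big * m)
    var_small = p_big * total_var / ((1.0 - p_big) * m)
    return var_big, var_small

def simulate_snp_effects(total_var, m, p_big, seed=1):
    # Draw each SNP effect from the two-component normal mixture prior.
    var_big, var_small = mixp_prior_variances(total_var, m, p_big)
    rng = random.Random(seed)
    return [
        rng.gauss(0.0, math.sqrt(var_big if rng.random() < p_big else var_small))
        for _ in range(m)
    ]
```

    With p_big = 0.1, for example, the per-locus variance of a 'big' SNP is ((1 - p)/p)^2 = 81 times that of a 'small' SNP, and the two components together account for exactly the total genetic variance, which is why only the proportion of big SNPs remains as a hyper-parameter once the overall variance is known.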

    A Review on Quantitative Models for Sustainable Food Logistics Management

    Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management and, successively, to sustainable food supply chain management. The main aim of this study is to identify key logistical aims in these three phases and analyse currently available quantitative models to point out modelling challenges in sustainable food logistics management (SFLM). A literature review on quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system scope issues. Results show that research on SFLM has been progressively developing according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. The majority of the works reviewed have not contemplated sustainability problems, apart from a few recent studies. Therefore, the study concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration to support business decisions and capture food supply chain dynamics.

    Genetic support for a quantitative trait nucleotide in the ABCG2 gene affecting milk composition of dairy cattle

    Background: Our group has previously identified a quantitative trait locus (QTL) affecting fat and protein percentages on bovine chromosome 6, and refined the QTL position to a 420-kb interval containing six genes. Studies performed in other cattle populations have proposed polymorphisms in two different genes (ABCG2 and OPN) as the underlying functional QTL nucleotide. Due to these conflicting results, we have included these QTNs, together with a large collection of new SNPs produced from PCR sequencing, in a dense marker map spanning the QTL region, and reanalyzed the data using a combined linkage and linkage disequilibrium approach. Results: Our results clearly exclude the OPN SNP (OPN_3907) as the causal site for the QTL. Among 91 SNPs included in the study, the ABCG2 SNP (ABCG2_49) is clearly the best QTN candidate. The analyses revealed the presence of only one QTL for the percentage traits in the tested region. This QTL was completely removed by correcting the analysis for ABCG2_49. Concordance between the sires' marker genotypes and segregation status for the QTL was found for ABCG2_49 only. The C allele of ABCG2_49 is found in a marker haplotype that has an extremely negative effect on fat and protein percentages and a positive effect on milk yield. Of the 91 SNPs, ABCG2_49 was the only marker in perfect linkage disequilibrium with the QTL. Conclusion: Based on our results, OPN_3907 can be excluded as the polymorphism underlying the QTL. The results of this and other papers strongly suggest the [A/C] mutation in ABCG2_49 as the causal mutation, although the possibility that ABCG2_49 is only a marker in perfect LD with the true mutation cannot be completely ruled out.

    Genotype imputation for the prediction of genomic breeding values in non-genotyped and low-density genotyped individuals

    Background: There is wide interest in calculating genomic breeding values (GEBVs) in livestock using dense, genome-wide SNP data. The general framework for genomic selection assumes all individuals are genotyped at high density, which may not be true in practice. Methods to add genotypes for individuals not genotyped at high density have the potential to increase GEBV accuracy with little or no additional cost. In this study, a long haplotype library was created using a long-range phasing algorithm and used in combination with segregation analysis to impute dense genotypes for non-genotyped dams in the training dataset (S1) and for non-genotyped or low-density genotyped individuals in the prediction dataset (S2), using the 14th QTL-MAS Workshop dataset. Alternative low-density scenarios were evaluated for accuracy of imputed genotypes and prediction of GEBVs. Results: In S1, females in the training population were not genotyped and prediction individuals were either not genotyped or genotyped at low density (evenly spaced at 2, 5 or 10 Mb). The proportion of correctly imputed genotypes for training females did not change when genotypes were added for individuals in the prediction set, whereas the number of correctly imputed genotypes in the prediction set increased slightly. The S2 scenario assumed the complete training set was genotyped for all SNPs and the prediction set was not genotyped or genotyped at low density. The number of correctly imputed genotypes increased with genotyping density in the prediction set. Accuracy of genomic breeding values for the prediction set in each scenario was computed as the correlation of GEBVs with true breeding values and was used to evaluate the potential loss in accuracy with reduced genotyping. For both S1 and S2, GEBV accuracies were similar when the prediction set was not genotyped and increased with the addition of low-density genotypes, with the increase larger for S2 than S1. Conclusions: Genotype imputation using a long haplotype library and segregation analysis is promising for application in sparsely genotyped pedigrees. The results of this study suggest that dense genotypes can be imputed for selection candidates with some loss in genomic breeding value accuracy, but with levels of accuracy higher than traditional BLUP estimated breeding values. Accurate genotype imputation would allow a single low-density SNP panel to be used across traits.
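    The accuracy measure this abstract relies on (the correlation of GEBVs with true breeding values) can be written out directly. A minimal sketch; the function name and pure-Python implementation are illustrative assumptions, not the authors' code:

```python
import math

def gebv_accuracy(gebv, tbv):
    # Pearson correlation between predicted (GEBV) and true breeding values;
    # in simulation studies the true values are known, so accuracy is computable.
    n = len(gebv)
    mg = sum(gebv) / n
    mt = sum(tbv) / n
    cov = sum((g - mg) * (t - mt) for g, t in zip(gebv, tbv))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gebv))
    st = math.sqrt(sum((t - mt) ** 2 for t in tbv))
    return cov / (sg * st)
```

    Comparing this correlation across genotyping scenarios (none, 2, 5 or 10 Mb spacing) is what quantifies the loss in accuracy attributable to reduced genotyping.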

    Removing data and using metafounders alleviates biases for all traits in Lacaune dairy sheep predictions

    Bias in dairy genetic evaluations, when it exists, has to be understood and properly addressed. The origin of biases is not always clear. We analyzed 40 yr of records from the Lacaune dairy sheep breeding program to evaluate the extent of bias, assess possible corrections, and formulate hypotheses on its origin. The data set included 7 traits (milk yield, fat and protein contents, somatic cell score, teat angle, udder cleft, and udder depth) with records from 600,000 to 5 million depending on the trait, ~1,900,000 animals, and ~5,900 genotyped elite artificial insemination rams. For the ~8% of animals with a missing sire, we fit 25 unknown parent groups. We used the linear regression method to compare "partial" and "whole" predictions of young rams before and after progeny testing, with 7 cut-off points, and we obtained estimates of their bias, (over)dispersion, and accuracy in early proofs. We tried (1) several scenarios: multiple or single trait, the "official" (routine) evaluation, which is a mixture of both single and multiple trait, and "deletion" of data before 1990; and (2) several models: BLUP and single-step genomic (SSG)BLUP with fixed unknown parent groups or metafounders, where, for metafounders, their relationship matrix gamma was estimated using either a model for the inbreeding trend, or base allele frequencies estimated by peeling. The estimate of gamma obtained by modeling the inbreeding trend resulted in an estimated increase of inbreeding, based on markers, faster than the pedigree-based one. The estimated genetic trends were similar for most models and scenarios across all traits, but were shrunken when gamma was estimated by peeling. This was due to shrinking of the estimates of metafounders in the latter case. Across scenarios, all traits showed bias, generally as an overestimate of genetic trend for milk yield and an underestimate for the other traits. As for the slope, it showed overdispersion of estimated breeding values for all traits. Using multiple-trait models slightly reduced the overestimate of genetic trend and the overdispersion, as did including genomic information (i.e., SSGBLUP) when the gamma matrix was estimated by the model for the inbreeding trend. However, only deletion of historical data before 1990 resulted in elimination of both kinds of bias. The SSGBLUP resulted in more accurate early proofs than BLUP for all traits. We considered that a snowball effect of small errors in each genetic evaluation, combined with selection, may have resulted in biased evaluations. Improving statistical methods reduced some bias but not all, and a simple solution for this data set was to remove historical records.
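    The "partial" versus "whole" validation used in this abstract boils down to a simple regression: regress late (whole-data) proofs on early (partial-data) proofs of the same rams, where a nonzero intercept suggests bias of the genetic trend and a slope below 1 suggests overdispersion of early proofs. A minimal sketch; the function name and example values are hypothetical:

```python
def lr_validation(ebv_partial, ebv_whole):
    # Ordinary least squares of "whole"-data EBVs on "partial"-data EBVs
    # for the same validation animals.
    n = len(ebv_partial)
    mp = sum(ebv_partial) / n
    mw = sum(ebv_whole) / n
    cov = sum((p - mp) * (w - mw) for p, w in zip(ebv_partial, ebv_whole))
    var = sum((p - mp) ** 2 for p in ebv_partial)
    slope = cov / var      # < 1 indicates overdispersion of early proofs
    intercept = mw - slope * mp  # != 0 indicates bias of the genetic trend
    return intercept, slope
```

    Repeating this at each of the 7 cut-off points, and under each model and scenario, is what allows bias and dispersion to be tracked across the evaluation history.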
