
    Atenolol versus losartan in children and young adults with Marfan's syndrome

BACKGROUND: Aortic-root dissection is the leading cause of death in Marfan's syndrome. Studies suggest that with regard to slowing aortic-root enlargement, losartan may be more effective than beta-blockers, the current standard therapy in most centers. METHODS: We conducted a randomized trial comparing losartan with atenolol in children and young adults with Marfan's syndrome. The primary outcome was the rate of aortic-root enlargement, expressed as the change in the maximum aortic-root-diameter z score indexed to body-surface area (hereafter, aortic-root z score) over a 3-year period. Secondary outcomes included the rate of change in the absolute diameter of the aortic root; the rate of change in aortic regurgitation; the time to aortic dissection, aortic-root surgery, or death; somatic growth; and the incidence of adverse events. RESULTS: From January 2007 through February 2011, a total of 21 clinical centers enrolled 608 participants, 6 months to 25 years of age (mean [±SD] age, 11.5±6.5 years in the atenolol group and 11.0±6.2 years in the losartan group), who had an aortic-root z score greater than 3.0. The baseline-adjusted rate of change (±SE) in the aortic-root z score did not differ significantly between the atenolol group and the losartan group (-0.139±0.013 and -0.107±0.013 standard-deviation units per year, respectively; P=0.08). Both slopes were significantly less than zero, indicating a decrease in the degree of aortic-root dilatation relative to body-surface area with either treatment. The 3-year rates of aortic-root surgery, aortic dissection, death, and a composite of these events did not differ significantly between the two treatment groups. CONCLUSIONS: Among children and young adults with Marfan's syndrome who were randomly assigned to losartan or atenolol, we found no significant difference in the rate of aortic-root dilatation between the two treatment groups over a 3-year period.
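The primary analysis hinges on comparing baseline-adjusted annual slopes of the aortic-root z score between the two arms. The sketch below shows one common way such a slope comparison can be set up as a linear mixed-effects model; all data are simulated, the slope values merely echo the abstract, and the model is an illustration, not the trial's actual statistical analysis plan.

```python
# Illustrative sketch (simulated data): comparing rates of change in
# aortic-root z score between two treatment arms with a linear
# mixed-effects model (random intercept per participant).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_arm = 100
visits = np.array([0.0, 1.0, 2.0, 3.0])  # follow-up times in years

rows = []
for arm, slope in [("atenolol", -0.139), ("losartan", -0.107)]:
    for pid in range(n_per_arm):
        # entry criterion in the trial was z > 3.0; simulate around 4
        intercept = rng.normal(4.0, 0.8)
        for t in visits:
            z = intercept + slope * t + rng.normal(0, 0.15)
            rows.append({"id": f"{arm}{pid}", "arm": arm, "years": t, "z": z})

df = pd.DataFrame(rows)

# The arm-by-time interaction estimates the between-group difference
# in the annual rate of change of the z score.
model = smf.mixedlm("z ~ years * arm", df, groups=df["id"]).fit()
print(model.summary())  # inspect the 'years:arm[T.losartan]' coefficient
```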

    Toward a Multifaceted Heuristic of Digital Reading to Inform Assessment, Research, Practice, and Policy

In this commentary, the author explores the tension between almost 30 years of work that has embraced increasingly complex conceptions of digital reading and recent studies that risk oversimplifying digital reading as a singular entity analogous to reading text on a screen. The author begins by tracing a line of theoretical and empirical work that both informs and complicates our understanding of digital literacy and, more specifically, digital reading. A heuristic is then proposed to systematically organize, label, and define a multifaceted set of increasingly complex terms, concepts, and practices that characterize the spectrum of digital reading experiences. Research that informs this heuristic is used to illustrate how more precision in defining digital reading can promote greater clarity across research methods and advance a more systematic study of promising digital reading practices. Finally, the author discusses implications for assessment, research, practice, and policy.

    High-dimensional maximum marginal likelihood item factor analysis by adaptive quadrature

Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate, fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
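The adaptation step is easiest to see in one dimension: recenter and rescale a handful of Gauss-Hermite points to the mode and curvature of each response pattern's likelihood, so that a few points still land where a very narrow likelihood has its mass. Below is a minimal one-dimensional sketch of that idea for a 2PL item response model; the item parameters and response pattern are fabricated, and the code illustrates the general technique, not the authors' implementation.

```python
# Minimal sketch: fixed vs. adaptive Gauss-Hermite quadrature for the
# marginal likelihood of one response pattern under a 2PL model.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar
from scipy.stats import norm

a = np.full(40, 1.5)            # discriminations (many items -> narrow
b = np.linspace(-2, 2, 40)      # likelihood); difficulties
u = (b < 0.3).astype(float)     # a fabricated response pattern

def log_integrand(theta):
    """log[ P(u | theta) * phi(theta) ] for the 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p)) + norm.logpdf(theta)

x, w = hermgauss(3)             # only 3 quadrature points per dimension

# Fixed-point rule, centered at the N(0, 1) prior: nodes sqrt(2)*x.
fixed = np.sum(w / np.sqrt(np.pi) * np.exp(
    [log_integrand(np.sqrt(2) * xk) - norm.logpdf(np.sqrt(2) * xk) for xk in x]))

# Adaptive rule: center at the posterior mode, scale by local curvature.
res = minimize_scalar(lambda t: -log_integrand(t), bounds=(-6, 6), method="bounded")
mu = res.x
h = 1e-4  # finite-difference second derivative of the log integrand
hess = (log_integrand(mu + h) - 2 * log_integrand(mu) + log_integrand(mu - h)) / h**2
sigma = 1.0 / np.sqrt(-hess)
t = mu + np.sqrt(2) * sigma * x
adaptive = np.sqrt(2) * sigma * np.sum(
    w * np.exp(x**2) * np.exp([log_integrand(tk) for tk in t]))

# Dense Riemann-sum reference integral for comparison.
grid = np.linspace(-6, 6, 20001)
vals = np.exp([log_integrand(g) for g in grid])
ref = np.sum(vals) * (grid[1] - grid[0])
print(f"reference {ref:.3e}  fixed {fixed:.3e}  adaptive {adaptive:.3e}")
```

With many items the integrand is far narrower than the prior, so the recentered and rescaled points track it much more closely than the fixed ones, which is the effect the paper exploits in higher dimensions via per-iteration mean and covariance adjustments.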

The History of Food: Historiographical Markers

The authors set out to sketch an overview of the History of Food, not as a new epistemological branch of the discipline but as a developing field of specialized practices and activities, including research, training, publications, associations, academic meetings, and so on. A brief account of the conditions under which this field took shape is preceded by an overview of food studies and related topics in general, organized under five approaches (the biological, the economic, the social, the cultural, and the philosophical), as well as by an identification of the most relevant contributions from Anthropology, Archaeology, Sociology, and Geography. To survey the multiform and voluminous historical bibliography, it was organized according to morphological criteria. Several important topics then receive separate treatment: famine; food and the religious domain; the European discoveries and the worldwide diffusion of foodstuffs; and taste and gastronomy. The article closes with a brief critical assessment of the Brazilian historiography on the subject.

    Whole-genome sequencing reveals host factors underlying critical COVID-19

Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care [1] or hospitalization [2-4] after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence that implicates multiple genes—including reduced expression of a membrane flippase (ATP11A), and increased expression of a mucin (MUC1)—in critical disease. Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication; or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.
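Mendelian randomization treats genetic variants as instruments for an exposure (here, gene expression) in order to estimate its causal effect on an outcome. The sketch below shows the generic Wald-ratio and inverse-variance-weighted (IVW) combination on fabricated per-variant effect sizes; it illustrates the standard estimator, not the GenOMICC analysis pipeline.

```python
# Toy Mendelian randomization: Wald ratios per instrument, combined
# with fixed-effect inverse-variance weighting. All numbers fabricated.
import numpy as np

# Per-variant effects of the instruments on the exposure (gene
# expression) and on the outcome (disease), with outcome standard errors.
beta_exp = np.array([0.30, 0.25, 0.40, 0.35])
beta_out = np.array([0.060, 0.045, 0.085, 0.070])
se_out = np.array([0.015, 0.012, 0.020, 0.018])

# Wald ratio: outcome effect rescaled by the exposure effect.
wald = beta_out / beta_exp
se_wald = se_out / np.abs(beta_exp)   # first-order SE approximation

# Inverse-variance-weighted combination across instruments.
w = 1.0 / se_wald**2
ivw = np.sum(w * wald) / np.sum(w)
ivw_se = np.sqrt(1.0 / np.sum(w))
print(f"IVW causal estimate {ivw:.3f} +/- {ivw_se:.3f}")
```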