
    Canagliflozin and renal outcomes in type 2 diabetes and nephropathy

    BACKGROUND Type 2 diabetes mellitus is the leading cause of kidney failure worldwide, but few effective long-term treatments are available. In cardiovascular trials of inhibitors of sodium–glucose cotransporter 2 (SGLT2), exploratory results have suggested that such drugs may improve renal outcomes in patients with type 2 diabetes.

    METHODS In this double-blind, randomized trial, we assigned patients with type 2 diabetes and albuminuric chronic kidney disease to receive canagliflozin, an oral SGLT2 inhibitor, at a dose of 100 mg daily or placebo. All the patients had an estimated glomerular filtration rate (GFR) of 30 to <90 ml per minute per 1.73 m2 of body-surface area and albuminuria (ratio of albumin [mg] to creatinine [g], >300 to 5000) and were treated with renin–angiotensin system blockade. The primary outcome was a composite of end-stage kidney disease (dialysis, transplantation, or a sustained estimated GFR of <15 ml per minute per 1.73 m2), a doubling of the serum creatinine level, or death from renal or cardiovascular causes. Prespecified secondary outcomes were tested hierarchically.

    RESULTS The trial was stopped early after a planned interim analysis on the recommendation of the data and safety monitoring committee. At that time, 4401 patients had undergone randomization, with a median follow-up of 2.62 years. The relative risk of the primary outcome was 30% lower in the canagliflozin group than in the placebo group, with event rates of 43.2 and 61.2 per 1000 patient-years, respectively (hazard ratio, 0.70; 95% confidence interval [CI], 0.59 to 0.82; P=0.00001). The relative risk of the renal-specific composite of end-stage kidney disease, a doubling of the creatinine level, or death from renal causes was lower by 34% (hazard ratio, 0.66; 95% CI, 0.53 to 0.81; P<0.001), and the relative risk of end-stage kidney disease was lower by 32% (hazard ratio, 0.68; 95% CI, 0.54 to 0.86; P=0.002). The canagliflozin group also had a lower risk of cardiovascular death, myocardial infarction, or stroke (hazard ratio, 0.80; 95% CI, 0.67 to 0.95; P=0.01) and of hospitalization for heart failure (hazard ratio, 0.61; 95% CI, 0.47 to 0.80; P<0.001). There were no significant differences in rates of amputation or fracture.

    CONCLUSIONS In patients with type 2 diabetes and kidney disease, the risk of kidney failure and cardiovascular events was lower in the canagliflozin group than in the placebo group at a median follow-up of 2.62 years.
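    As a quick arithmetic check (not part of the paper), the crude rate ratio implied by the two reported event rates can be computed directly; it approximates, though it is not identical to, the Cox-model hazard ratio quoted above. The sketch below uses only the figures from the abstract; the variable names are ours.

```python
# Crude check of the reported primary-outcome figures (illustrative only).
canagliflozin_rate = 43.2   # events per 1000 patient-years, canagliflozin group
placebo_rate = 61.2         # events per 1000 patient-years, placebo group

rate_ratio = canagliflozin_rate / placebo_rate
relative_risk_reduction = 1.0 - rate_ratio

print(f"crude rate ratio       : {rate_ratio:.2f}")               # ~0.71, close to the reported HR of 0.70
print(f"relative risk reduction: {relative_risk_reduction:.0%}")   # ~29-30%, consistent with "30% lower"
```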

    25th annual computational neuroscience meeting: CNS-2016

    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in the delivery of the functionality of neural systems, it is difficult to study it in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recording of PD neurons shows differences between the timings of spikes of these neurons. This may indicate functional variability of these neurons. Here we modelled separately the two PD neurons of the STG in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron. Larger potassium conductance values in the follower neuron imply longer delays between spikes (see Fig. 17). Neuromodulators change the conductance parameters of neurons and maintain the ratios of these parameters [5]. Our results show that such changes may shift the individual contribution of the two PD neurons to the PD-phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.
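    The abstract states that neuromodulators change conductance parameters while maintaining their ratios, and that the two PD neurons are modelled with different parameter sets. The sketch below is our minimal illustration of that constraint, not the authors' model; all conductance values are hypothetical.

```python
# Minimal sketch: neuromodulation represented as a single scaling factor applied
# to a set of maximal conductances, so that the conductance RATIOS are preserved
# while the absolute values change. Values are hypothetical, not fitted.
from dataclasses import dataclass

@dataclass
class PDConductances:
    """Maximal conductances (mS/cm^2, hypothetical) for one PD-type model neuron."""
    g_ca: float   # calcium
    g_k: float    # potassium
    g_na: float   # sodium

    def modulate(self, factor: float) -> "PDConductances":
        """Scale every conductance by the same factor, preserving their ratios."""
        return PDConductances(self.g_ca * factor, self.g_k * factor, self.g_na * factor)

    def ratios(self) -> tuple[float, float]:
        """Ratios of the Ca and K conductances to the Na conductance."""
        return (self.g_ca / self.g_na, self.g_k / self.g_na)

# Two PD neurons with deliberately different parameter sets (hypothetical values).
pd1 = PDConductances(g_ca=0.06, g_k=1.20, g_na=4.00)   # larger calcium conductance
pd2 = PDConductances(g_ca=0.04, g_k=1.60, g_na=4.00)   # larger potassium conductance

for name, cell in (("PD1", pd1), ("PD2", pd2)):
    before = cell.ratios()
    after = cell.modulate(1.3).ratios()   # 30% up-modulation
    print(name, "ratios before:", before, "after:", after)   # ratios unchanged
```

    Because only the shared scaling factor changes, the printed ratios are identical before and after modulation, while the absolute conductances of the two cells still differ.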

    Test Architecture

    Test design can be compared with the design of buildings and other structures in architecture. Both activities require the development of detailed plans and blueprints that generate the actual buildings or test forms. When the blueprints are created, architects know to what use the building is going to be put. Without knowledge of purpose they simply would not be able to design a building. Similarly, test designers need to know what inferences we intend to make from scores, and what decisions are to be made on the basis of those scores. Tests without purpose generate validity chaos. Likewise, when buildings change their use, architects must retrofit the building and follow standard procedures to ensure that health and safety regulations are being met, and that the proposed changes make the building fit for its new users. We argue that test designers must follow similar principles if the purpose of a test is to be changed or extended, or if the test is to be used with a group of test takers for whom it was not originally intended. We term this process test retrofit, and use the example of immigration testing to illustrate the argument.

    Practical language testing

    No full text
    xvi, 352 p.: bibl. ref., gloss., index; 23 cm

    Testing second language speaking / Fulcher

    No full text
    xxi, 288 p.: ill.; 23 cm

    Assessing Spoken Production

    Spoken production is assessed using a scale that aids the rating process. The scale increases the reliability with which different assessors will arrive at the same judgment about a learner’s current proficiency. Teachers can use scales adapted from large-scale assessments, or devise their own for specific uses in a local context. There are two broad approaches to scale development, the intuitive and the empirical. Both have advantages and weaknesses that must be evaluated in light of the scale’s intended use and learner group. However, a scale should have properties that support the development of teaching teams, and the articulation of explicit curriculum goals. This tends to favor locally designed scales that are sensitive to a specific learning ecology. When teaching teams articulate learning goals in scale descriptors it is possible to enhance task-based learning, learner awareness, and independence.
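    One common way to make the reliability claim in the first two sentences concrete is to compute exact and adjacent agreement between two assessors rating the same performances. The sketch below is our illustration, not taken from the chapter; the ratings are hypothetical band scores on a 1–6 scale.

```python
# Exact and adjacent agreement between two raters (hypothetical band scores 1-6).
rater_a = [4, 3, 5, 2, 4, 6, 3, 4, 5, 2]
rater_b = [4, 3, 4, 2, 5, 6, 3, 4, 5, 3]

exact = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / len(rater_a)

print(f"exact agreement   : {exact:.0%}")     # proportion of identical ratings
print(f"adjacent agreement: {adjacent:.0%}")  # proportion of ratings within one band
```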

    Context and Inference in Language Testing

    It is arguably the case that “The purpose of language testing is always to render information to aid in making intelligent decisions about possible courses of action” (Carroll, 1961, p. 314). This holds true whether the decisions are primarily pedagogic, or affect the future education or employment of the test taker. If fair and useful decisions are to be made, three conditions must hold. Firstly, valid inferences must be made about the meaning of test scores. Secondly, score meaning must be relevant and generalizable to a real-world domain. Thirdly, score meaning should be (at least partially) predictive of post-decision performance. If any of these conditions is not met, the process of assessment and decision making may be questioned not only in theory, but in the courts (Fulcher, 2014a).

    It is therefore not surprising that, historically, testing practice has rested on the assumption that language competence, however defined, is a relatively stable cognitive trait. This is expressed clearly in classic statements of the role of measurement in the ‘human sciences’, such as this by the father of American psychology, James McKeen Cattell: “One of the most important objects of measurement…is to obtain a general knowledge of the capacities of a man by sinking shafts, as it were, at a few critical points. In order to ascertain the best points for the purpose, the sets of measures should be compared with an independent estimate of the man’s powers. We thus may learn which of the measures are the most instructive” (Cattell, 1890, p. 380).

    The purely cognitive conception of language proficiency (and all human ability) is endemic to most branches of psychology and psychometrics. This strong brand of realism assumes that variation in test scores is a direct causal effect of the variation of the trait within an individual (see the extensive discussion of validity theory in Fulcher, 2014b). This view of the world entails that any contextual feature that causes variation is a contaminant that pollutes the score. This is referred to as ‘construct-irrelevant variance’ (Messick, 1989, pp. 38–9). The standardization of testing processes, from presentation to administration and scoring, is designed to minimize the impact of context on scores. In some ways, a good test is like an experiment, in the sense that it must eliminate or at least keep constant all extraneous sources of variation. We want our tests to reflect only the particular kind of variation in knowledge or skill that we are interested in at the moment (Carroll, 1961, p. 319).

    There are also ethical and legal imperatives that encourage this approach to language testing and assessment. If the outcomes of a test are high-stakes, it is incumbent upon the test provider to ensure that every test taker has an equal chance of achieving the same test score if they are of identical ability. Score variation due to construct-irrelevant factors is termed ‘bias.’ If any test taker is disadvantaged by variation in the context of testing, and particularly if this is true of an identifiable sub-group of the test-taking population, litigation is likely.

    Language tests are therefore necessarily abstractions from real life. The degree of removal may be substantial, as in the case of a multiple-choice test, or less distant, as in the case of a performance-based simulation. However, tests never reproduce the variability that is present in the real world. One analogy that illustrates the problem of context is that of tests for life guards. Fulcher (2010, pp. 97–100) demonstrates the impossibility of reproducing in a test all the conditions under which a life guard may have to operate: weather conditions, swell, currents, tides, distance from shore, victim condition and physical build. The list is potentially endless. Furthermore, health and safety regulations would preclude replicating many of the extremes that could occur within each facet. The solution is to list constructs that are theoretically related to real-world performance, such as stamina, endurance, courage, and so on. The test of stamina (passive drowning victim rear rescue and extraction from a swimming pool, using an average weight/size model) is assumed to be generalizable to many different conditions, and to predict the ability of the test taker to successfully conduct rescues in non-pool domains. The strength of the relationship between the test and real-world performance is an empirical matter.

    Recognizing the impact of context on test performance may initially look like a serious challenge to the testing enterprise, as score meaning must thereafter be constructed from much more than individual ability. McNamara (1995) referred to this as ‘opening Pandora’s box’, allowing all the plagues of the real world to infect the purity of the link between a score and the mind of the person from whom it was derived. While this may be true in the more radical constructivist treatments of context in language testing, I believe that validity theory is capable of taking complex context into account while maintaining score generalizability for practical decision-making purposes.

    In the remainder of this chapter I first consider three stances towards context: atomism, neobehaviourism, and interactionism. This classification is familiar from other fields of applied linguistics, but in language testing each has distinctive implications. Each is described, and then discussed under two sub-headings of generalizability and provlepsis. Generalizability is concerned with the breadth or scope of score meaning beyond the immediate context of the test. The latter term is taken from the Greek Προβλέψεις, which I use to refer to the precision with which a score user may use the outcome of a test to look into the future and make predictions about the likely performance of the test taker. Is the most appropriate analogy for the test a barometer, or a crystal ball? I conclude by considering how it is possible to take context seriously within a field that by necessity must decontextualize to remain ethical and legal.
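    The notion of construct-irrelevant variance discussed above can be made concrete with a small simulation: if observed scores are modelled as trait plus a context effect plus noise, the context component weakens the correlation between the score and the trait it is meant to measure. The sketch below is our illustration, not Fulcher's; the score model and all numbers are hypothetical.

```python
# Toy simulation of construct-irrelevant variance: score = trait + context + noise.
import random
import statistics

random.seed(0)
N = 2000

trait = [random.gauss(0, 1) for _ in range(N)]     # the ability of interest
context = [random.gauss(0, 1) for _ in range(N)]   # construct-irrelevant variation
noise = [random.gauss(0, 0.5) for _ in range(N)]   # measurement error

def correlation(x, y):
    """Pearson correlation (population normalization)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

for context_weight in (0.0, 0.5, 1.0):
    scores = [t + context_weight * c + e for t, c, e in zip(trait, context, noise)]
    print(f"context weight {context_weight:.1f}: "
          f"score-trait correlation = {correlation(scores, trait):.2f}")
# As the context weight grows, the score-trait correlation drops: the score
# increasingly reflects the testing context rather than the construct.
```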

    The Practice of Language Assessment

    [First paragraph] Language teachers and applied linguists are constantly engaged with language testing and assessment. Whether informal or formal, assessment procedures are designed to collect evidence for decision making. In low-stakes classroom contexts the evidence is used to provide formative feedback to learners (Rea-Dickins, 2006), scaffold learning (Lantolf, 2009), evaluate the quality and pace of learning, inform pedagogy, and feed into curriculum design (Shepard, 2000; Hill, 2012). In high-stakes contexts, identifiable ‘test events’ produce scores that are used to make life-changing decisions about the future of learners (Kunnan, 2013). In research, tests generate data used to address questions regarding second language acquisition (Chapelle, 1994).

    Assessment literacy for the language classroom

    Language testing has seen unprecedented expansion during the first part of the 21st century. As a result, there is an increasing need for the language testing profession to consider more precisely what it means by “assessment literacy” and to articulate its role in the creation of new pedagogic materials and programs in language testing and assessment to meet the changing needs of teachers and other stakeholders for a new age. This article describes a research project in which a survey instrument was developed, piloted, and delivered on the Internet to elicit the assessment training needs of language teachers. The results were used to inform the design of new teaching materials and the further development of online resources that could be used to support program delivery. The project makes two significant contributions. First, it provides new empirically derived content for the concept of assessment literacy within which to frame materials development and teaching. Second, it uncovered methodological problems with existing survey techniques that may have affected earlier studies, and solutions to these problems are suggested.