16 research outputs found

    Learning Trajectories in Mathematics: A Foundation for Standards, Curriculum, Assessment, and Instruction

    Learning Trajectories in Mathematics: A Foundation for Standards, Curriculum, Assessment, and Instruction aims to provide:

    - A useful introduction to current work and thinking about learning trajectories for mathematics education
    - An explanation of why we should care about these questions
    - A strategy for thinking about what is being attempted in the field, casting some light on the varying, and perhaps confusing, ways in which the terms trajectory, progression, learning, teaching, and so on, are used by the education community.

    Specifically, the report builds on arguments published elsewhere to offer a working definition of the concept of learning trajectories in mathematics and to reflect on the intellectual status of the concept and its usefulness for policy and practice. It considers the potential of trajectories and progressions for informing the development of more useful assessments, supporting more effective formative assessment practices, informing the ongoing redesign of mathematics content and performance standards, and supporting teachers' understanding of students' learning in ways that can strengthen their capacity to provide adaptive instruction. The authors conclude with a set of recommended next steps for research, development, and policy.

    Seismic Event Coda-Correlation Imaging of the Earth's Interior

    Seismic coda waves are the late part of the seismic energy generated by earthquakes. Global coda correlograms are constructed by cross-correlating and stacking late-coda records of seismic events; although these records are noisy and seemingly useless, the correlograms exhibit many prominent features sensitive to the Earth's internal structure. Coda correlation has thus emerged as a new paradigm in global observational seismology. As a new category of observations, the correlation features, if interpreted correctly, can provide new information about the Earth's interior. How to accurately utilise seismic event coda correlations, for instance in "global coda correlation tomography", has remained controversial and unresolved. Some attempts treat coda correlations as reconstructed seismic waves, on a par with methods developed for ambient-noise correlations, because the two share similar data-processing and computation routines. However, this introduces erroneous interpretations, because theoretical analyses have demonstrated fundamental differences in the formation mechanisms of coda correlations and ambient-noise correlations. We therefore need a correct approach that allows the massive volume of coda-correlation observables to be used to increase constraints on the Earth's interior. This thesis consists of theoretical analysis, method development, and applications for utilising seismic event coda correlations to image the Earth's interior. We first conduct comprehensive analyses to 'dissect' the formation mechanism of coda correlations quantitatively. The analyses reveal the mathematical relationship between coda correlations and the Earth's internal structure. Building on this, we develop a novel framework for global coda-correlation tomography. We verify the new framework in experiments and compare it with the method based on the assumption of seismic-wave reconstruction.
We illustrate that significant inaccuracies in tomographic images can arise if coda correlations are treated as reconstructed seismic waves. Then, in an application, we provide a new class of observations of inner-core shear-wave anisotropy utilising coda correlations in the new framework. We find that inner-core shear waves travel faster by at least ~5 s in directions oblique to the Earth's rotation axis than in directions parallel to the equatorial plane (anisotropy of >0.8%). Our inner-core shear-wave anisotropy observations place new constraints on the inner core's mineral composition. Finally, we extend the principles to cross-correlations between source events and devise a new way to build global inter-source correlations. We demonstrate that a single seismic station is sufficient to construct a global correlogram. The correlogram exhibits prominent features sensitive to internal planetary structure. We show implementations that constrain the sizes of the Earth's and Mars's cores and confirm a large Martian core. This provides a new paradigm for imaging planetary interiors on a global scale with resources currently realizable in planetary missions.
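The basic correlogram construction described above — cross-correlating late-coda records and stacking over many events — can be sketched as follows. This is a minimal illustration with synthetic random traces, not the thesis code: the station names, trace lengths, and normalization choice are all assumptions, and real processing would add band-passing, windowing, and many station pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_xcorr(a, b):
    """Normalized cross-correlation of two equal-length traces."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")

n_events, n_samples = 50, 1024
stack = np.zeros(2 * n_samples - 1)
for _ in range(n_events):
    # Stand-ins for the late-coda recordings of one event at two stations;
    # real data would be band-passed, windowed seismograms.
    coda_a = rng.standard_normal(n_samples)
    coda_b = rng.standard_normal(n_samples)
    stack += normalized_xcorr(coda_a, coda_b)
stack /= n_events  # linear stack over events

lags = np.arange(-(n_samples - 1), n_samples)  # lag axis in samples
```

With purely random input the stack carries no structure; in the thesis, coherent features emerge at lags corresponding to the Earth's internal discontinuities once many events and stations are stacked.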

    Adaptive error estimation in linearized ocean general circulation models

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 1999. Data assimilation methods, such as the Kalman filter, are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. In this study we address the problem of estimating model and measurement error statistics from observations. We start by testing the Myers and Tapley (1976; MT) method of adaptive error estimation with low-dimensional models. We then apply the MT method in the North Pacific (5°-60° N, 132°-252° E) to TOPEX/POSEIDON sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The MT method, closely related to the maximum likelihood methods of Belanger (1974) and Dee (1995), is shown to be sensitive to the initial guess for the error statistics and to the type of observations. It provides information neither about the uncertainty of the estimates nor about which structures of the error statistics can be estimated and which cannot. A new off-line approach is developed, the covariance matching approach (CMA), in which covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses the observations directly instead of the innovations sequence and is shown to be related to the MT method and to the method of Fu et al. (1993). The CMA is both a powerful diagnostic tool for addressing theoretical questions and an efficient estimator for real data assimilation studies. It can be extended to estimate other statistics of the errors, such as trends and annual cycles.
Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. After removal of trends and annual cycles, the low-frequency/wavenumber (periods > 2 months, wavelengths > 16°) TOPEX/POSEIDON sea level anomaly variance is of order 6 cm². The GCM explains about 40% of that variance. By covariance matching, it is estimated that 60% of the GCM-TOPEX/POSEIDON residual variance is consistent with the reduced-state linear model. The CMA is then applied to TOPEX/POSEIDON sea level anomaly data and a linearization of a global GFDL GCM. The linearization, done in Fukumori et al. (1999), uses two vertical modes, the barotropic and the first baroclinic. We show that the CMA can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-TOPEX/POSEIDON residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not resolved by the GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem.
In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers. This research was supported by NASA through an Earth Science Fellowship under contract number NGTS-30084, a Global Change Science Fellowship under contract number NGT-30309, contract NAG 5-3274, and NASA-JPL under contract number 958125.
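The core idea of covariance matching — fitting a small number of error-variance parameters so that the theoretical residual covariance matches the sample covariance of model-data residuals in a least squares sense — can be sketched in a toy form. Everything here is illustrative, not the thesis implementation: the structure matrix A, the two-parameter form C(q, r) = qA + rI, and the true values are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_obs = 8, 5000

# Illustrative "model error" structure matrix A (symmetric, PSD).
G = rng.standard_normal((n, n))
A = G @ G.T / n

# Assumed true model-error and measurement-error variances.
q_true, r_true = 2.0, 0.5
C_true = q_true * A + r_true * np.eye(n)

# Simulate model-data residuals with covariance C_true.
L = np.linalg.cholesky(C_true)
resid = rng.standard_normal((n_obs, n)) @ L.T
C_hat = np.cov(resid, rowvar=False)  # sample covariance of residuals

# Match C_hat ≈ q*A + r*I by linear least squares in (q, r):
# vectorize both sides and solve the overdetermined 64x2 system.
M = np.column_stack([A.ravel(), np.eye(n).ravel()])
(q_est, r_est), *_ = np.linalg.lstsq(M, C_hat.ravel(), rcond=None)
print(q_est, r_est)  # should be close to (2.0, 0.5)
```

The toy problem is well determined because only two parameters are fitted; the thesis's point is that with realistic covariance parameterizations the same matching problem becomes highly underdetermined, so physical insight must guide which few parameters to estimate.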

    Subject index volumes 1–92


    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big-data arena, new deep learning methods and computational models for efficient processing, analysis, and modelling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Metabolic engineering and modelling of Escherichia coli for the production of succinate

    Current climate issues and the ongoing depletion of oil reserves have led to increased attention to bio-based production processes. Not only the production of bio-energy but also that of biochemicals has gained interest. Recent reports of the US Department of Energy and the GROWTH programme of the European Commission review a comprehensive list of chemicals that can be produced via biological processes and that may be of great importance for sustaining a green chemical industry in the future. Succinate is one of those biochemicals. Today, this compound is synthesised from maleic anhydride, which is produced by a petrochemical process. The conditions that a biological production process has to meet to be economically viable are quite strict: such a process has to attain a yield of 0.88 g/g, a rate between 1.8 and 2.5 g/l/h, and a titer around 80 g/l. None of the available (reported) processes reaches all of these values; in most cases the rate and the titer remain a problem. To optimise succinate production via metabolic engineering, a mutation strategy first has to be developed. This strategy can then be applied to a suitable production host. The choice of this host has become less important owing to recent developments in genetic engineering and synthetic biology, which allow the introduction or alteration of almost every cellular function. What has become important is the availability of information on the potential host and its genetic accessibility. E. coli is therefore still an excellent host for the development of production processes. Since its isolation, vast amounts of information have been gathered, several biological databases are devoted to it, and almost every cellular function has been modified. However, E. coli does not naturally produce succinate in large amounts; it has to undergo genetic modifications to overproduce this chemical. Which modifications are needed can be uncovered in silico.
A functional and comparative genomics analysis of naturally producing and non-producing strains revealed which genes and reactions may influence succinate production. The optimal biochemical route towards succinate was then uncovered via stoichiometric network analysis, for which elementary flux mode (EFM) analysis was combined with partial least squares (PLS) regression. Together these tools identified optimal biochemical production routes for several substrates and allowed us to evaluate how reactions that do not naturally occur in E. coli may affect the succinate yield. The transport reaction is one of the reactions identified by the EFM-PLS model. E. coli possesses both succinate import and export proteins; however, export is normally active only under anaerobic conditions and import only under aerobic conditions. Therefore, the import protein was knocked out and the export protein was expressed from an artificial promoter. These modifications led to an increased succinate yield and production rate, but also revealed alternative import proteins. Analysis of the phenotypes of strains mutated in these alternative importers did not, however, lead to increases in succinate yield; these mutations influenced biomass yield and growth rate. A second route identified in the stoichiometric network analysis was the glyoxylate route. This route correlates positively with succinate production and is strongly regulated by the transcription factors ArcA and IclR. To gain more insight into the synergy that may exist between both regulators, knockouts of both genes were studied under chemostat and batch conditions. This analysis revealed a synergistic effect of both proteins on the biomass yield: a strain in which both arcA and iclR are knocked out showed a biomass yield approaching the maximal theoretical yield, whereas the single-knockout strains did not have such a pronounced phenotype.
Finally, several mutations were introduced and evaluated for succinate production and byproduct formation. The formation of acetate was studied in detail to uncover alternative acetate-forming reactions. First, the known reactions, acetate kinase, phosphoacetyltransferase, and pyruvate oxidase, were knocked out. This resulted in a significant decrease in acetate production, but not in its total elimination. Several alternatives, such as citrate lyase and acetate CoA-transferase, were evaluated, but without success; the remaining acetate-forming reactions could not be identified. Succinate dehydrogenase can be seen as one of the most crucial enzymes for succinate production: it converts succinate into fumarate and therefore has to be knocked out to increase production. Strains carrying a succinate dehydrogenase deletion immediately show increased production; however, pyruvate becomes one of the main byproducts. Several enzymes influence pyruvate production. The most important in the context of succinate production are PEP carboxykinase, oxaloacetate decarboxylase, malic enzyme, PEP carboxylase, and citrate synthase. The first three catalyse gluconeogenic reactions that can form futile cycles. Deletions in these genes resulted in an increase in biomass yield due to a more energy-efficient metabolism, but did not increase the succinate yield. Point mutations in PEP carboxylase and citrate synthase increased the flux towards the TCA cycle; the flux ratio between the glyoxylate pathway and the reductive and oxidative TCA cycles can be influenced by these enzymes. The activity of the reductive TCA cycle is, however, strongly dependent on the availability of reducing equivalents. To modulate this availability, a point mutation was introduced in FNR, an anaerobic transcription factor that activates the reductive TCA cycle and represses the electron transport chain.
Although none of the developed strains is economically viable yet, many of the introduced mutations show great promise for future improvements. In fact, the next steps in strain development should not be to identify new targets to modify, but rather to fine-tune the activities of the routes towards succinate in such a way that the theoretical yields can be approached at sufficiently high rates.
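The stoichiometric reasoning behind the network analysis above can be illustrated with a toy model: a candidate flux mode is valid only if it balances every internal metabolite (S·v = 0), and its yield is the export flux over the substrate uptake. The metabolites, reactions, and stoichiometries below are invented for the illustration and are far simpler than the thesis's E. coli network (no cofactors, no CO2 balance, no byproducts).

```python
import numpy as np

# Internal metabolites (rows): glucose6P, PEP, OAA, succinate.
# Reactions (columns):
#   v0: glucose uptake      ->  glucose6P
#   v1: glycolysis          glucose6P -> 2 PEP
#   v2: PEP carboxylation   PEP (+ CO2) -> OAA
#   v3: reductive TCA       OAA -> succinate
#   v4: succinate export    succinate ->
S = np.array([
    [1, -1,  0,  0,  0],   # glucose6P
    [0,  2, -1,  0,  0],   # PEP
    [0,  0,  1, -1,  0],   # OAA
    [0,  0,  0,  1, -1],   # succinate
])

# Candidate flux mode: 1 glucose in, both PEP routed through OAA.
v = np.array([1.0, 1.0, 2.0, 2.0, 2.0])
assert np.allclose(S @ v, 0), "mode violates steady state"

yield_mol = v[4] / v[0]  # mol succinate per mol glucose
print(yield_mol)  # 2.0 mol/mol in this toy network
```

Elementary flux mode analysis enumerates all minimal modes of this kind for the full network; combining the resulting yields with PLS regression, as in the thesis, then ranks which reactions most strongly influence the succinate yield.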

    Adaptive error estimation in linearized ocean general circulation models

    Thesis (Ph.D.), Joint Program in Physical Oceanography (Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 1999. By Michael Y. Chechelnitsky. Includes bibliographical references (p. 206-211). Abstract as above.