
    The effect of energy input on the aggregation rate in an oscillating multi-grid reactor

    Aggregation is a size enlargement mechanism which can be either desirable or inconvenient, depending on the process involved. It involves the coming together or collision of two or more particles to form a larger single particle. It is a mechanism which is poorly understood, and much research is still required before the intricacies of the process are fully grasped. Although it is known that aggregation is influenced by energy input (amongst other factors), the relationship between the two has previously been measured only in anisotropic turbulent environments. Thus, although aggregation is a function of local energy input, it has yet to be measured in environments where the local energy dissipation is well understood, or else it has been studied only in low Reynolds number environments. This thesis aims to address this deficiency by studying the effect of local energy input on the aggregation rate in a well-characterised environment of isotropic and homogeneous turbulence.

    Constructive or Disruptive? How Active Learning Environments Shape Instructional Decision-Making

    This study examined instructional shifts associated with teaching in environments optimized for active learning, including how faculty made decisions about teaching and their perceptions of how students responded to those changes. The interviews and subsequent analysis reveal a broad range of course changes, from small modifications of existing activities to large shifts towards collaborative learning, many of which emerged during the term rather than being planned in advance. Faculty discussed several factors that influenced their decisions, including prior experience, professional identity, student engagement, and the perceived and realized affordances of the environments.

    Senior Recital: Mathew Reynolds, Trumpet; Brian N. Weidner, Trumpet

    Kemp Recital Hall, Sunday Afternoon, November 19, 2000, 1:15 p.m.

    Wavelet compression techniques for hyperspectral data

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near-infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three-dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy-code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three-dimensional extension of this same algorithm.
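    As a rough illustration of the three approaches contrasted above, the sketch below (not the thesis's own implementation) applies wavelet transforms to a small synthetic cube with PyWavelets; the array sizes, the db4 wavelet, the decomposition level, and the SVD-based spectral decorrelation are all illustrative assumptions.

```python
# Sketch of the three wavelet-based compression approaches on a synthetic cube.
import numpy as np
import pywt

# Synthetic hyperspectral cube: 64 x 64 spatial pixels, 100 spectral bands (illustrative).
cube = np.random.rand(64, 64, 100).astype(np.float32)

# Approach 1: treat the cube as an ensemble of band images and transform each
# band independently in 2-D, ignoring correlation between spectral bands.
per_band = [pywt.wavedec2(cube[:, :, b], 'db4', level=2) for b in range(cube.shape[2])]

# Approach 2: decorrelate the spectral dimension first (an SVD-based transform
# stands in here for whatever decorrelating transform is chosen), then apply
# the same 2-D wavelet transform to each decorrelated component image.
pixels = cube.reshape(-1, cube.shape[2])                  # (spatial pixels) x (bands)
mean = pixels.mean(axis=0)
_, _, vt = np.linalg.svd(pixels - mean, full_matrices=False)
components = ((pixels - mean) @ vt.T).reshape(cube.shape)
per_component = [pywt.wavedec2(components[:, :, b], 'db4', level=2)
                 for b in range(cube.shape[2])]

# Approach 3: a single 3-D wavelet transform over the whole cube, directly
# generalising the transform step of transform-quantize-entropy-code coding.
coeffs_3d = pywt.wavedecn(cube, 'db4', level=2)
```

    In each case the resulting coefficients would then be quantized and entropy coded; the approaches differ only in how much of the spectral correlation the transform step is allowed to exploit.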

    Cryopreservation critical process parameters: Impact on post-thaw recovery of cellular product

    Technological advances have transformed cells from mere drug targets into potent ‘living drugs’ with the potential to cure formerly incurable diseases such as cancer. Such Regenerative Medicine Advanced Therapies (RMATs) require stringent and complex vein-to-vein support to deliver their intended function. Cold Chain pertains to the strategies designed to ensure product potency and efficacy during ex-vivo transit, and includes components ranging from source material collection, through culture and expansion, formulation, fill-finish and packaging, storage, transportation, and chain of custody, to delivery. Biopreservation is the overarching theme of the Cold Chain and refers to strategies to slow down or fully suspend the biological clock to accommodate logistical considerations. The two main modes of biopreservation are hypothermic storage and cryopreservation. This presentation aims to map the connection between a specific biopreservation strategy, namely cryopreservation, and formulation and fill-finish, and to show how implementation of Biopreservation Best Practices can improve the outcome of the Cold Chain. There is more to biopreservation than storage on ice or freezing at a rate of -1°C/min in 10% DMSO. To comprehend the rationale behind Biopreservation Best Practices, a basic understanding of the cellular response to cold and freezing is essential. In this study, we highlight critical process parameters (CPPs) of cryopreservation, such as freezing and thawing rates, storage and post-thaw stability, and container type, among others. Using a Jurkat T-cell model, we will discuss the impact of these CPPs on critical quality attributes (CQAs) such as viability, yield, proliferation rate, and return to function. We will also discuss the connection between variability in CPPs and characterization assay results. In general, implementation of best practices in formulation can directly address multiple process bottlenecks, including GMP compliance, minimization of freezing damage, stability during storage and against transient warming events, post-thaw stability, and excipient use. The CQAs may also be significantly improved by adjusting a few parameters in the freezing profile. For example, a missed or improper nucleation step during freezing may result in decreased recovery and increased variability in post-thaw proliferation rates. We have also found that the feeding timeline prior to freezing can have a profound impact on post-thaw viability and recovery in Jurkat T cells. While discussing these results, we will also review the underlying biophysics of such phenomena. A basic knowledge of freezing-profile design may give process engineers the degrees of freedom to minimize the DMSO concentration in the formulation and to improve the CQAs of “hard-to-freeze” cells such as Natural Killer (NK) cells. We will also discuss the interplay between the cryopreservation CPPs and the choice of container format, and how it may impact the CQAs.

    A blinded determination of $H_0$ from low-redshift Type Ia supernovae, calibrated by Cepheid variables

    Presently a ${>}3\sigma$ tension exists between values of the Hubble constant $H_0$ derived from analysis of fluctuations in the Cosmic Microwave Background by Planck, and local measurements of the expansion using calibrators of type Ia supernovae (SNe Ia). We perform a blinded reanalysis of Riess et al. 2011 to measure $H_0$ from low-redshift SNe Ia, calibrated by Cepheid variables and geometric distances including to NGC 4258. This paper is a demonstration of techniques to be applied to the Riess et al. 2016 data. Our end-to-end analysis starts from available CfA3 and LOSS photometry, providing an independent validation of Riess et al. 2011. We obscure the value of $H_0$ throughout our analysis and the first stage of the referee process, because calibration of SNe Ia requires a series of often subtle choices, and the potential for results to be affected by human bias is significant. Our analysis departs from that of Riess et al. 2011 by incorporating the covariance matrix method adopted in SNLS and JLA to quantify SN Ia systematics, and by including a simultaneous fit of all SN Ia and Cepheid data. We find $H_0 = 72.5 \pm 3.1$ (stat) $\pm 0.77$ (sys) km s$^{-1}$ Mpc$^{-1}$ with a three-galaxy (NGC 4258+LMC+MW) anchor. The relative uncertainties are 4.3% statistical, 1.1% systematic, and 4.4% total, larger than in Riess et al. 2011 (3.3% total) and the Efstathiou 2014 reanalysis (3.4% total). Our error budget for $H_0$ is dominated by statistical errors due to the small size of the supernova sample, whilst the systematic contribution is dominated by variation in the Cepheid fits and, for the SNe Ia, by uncertainties in the host galaxy mass dependence and Malmquist bias. Comment: 38 pages, 13 figures, 13 tables; accepted for publication in MNRAS
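    As a quick arithmetic check (illustrative only, not taken from the paper's analysis pipeline), the quoted relative uncertainties follow from dividing each quoted error by the central value and combining the two in quadrature.

```python
# Illustrative check of the quoted error budget (numbers copied from the abstract).
import math

H0, stat, sys_err = 72.5, 3.1, 0.77            # km s^-1 Mpc^-1
rel_stat = stat / H0                            # ~0.043 -> 4.3% statistical
rel_sys = sys_err / H0                          # ~0.011 -> 1.1% systematic
rel_total = math.hypot(rel_stat, rel_sys)       # quadrature sum -> ~4.4% total
print(f"stat {rel_stat:.1%}, sys {rel_sys:.1%}, total {rel_total:.1%}")
```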

    PROGRAM MANAGER DECISION-MAKING IN COMPLEX AND CHAOTIC PROGRAM ENVIRONMENTS

    Our project focuses on the decision-making process of a program manager (PM). A defense program manager is routinely exposed to chaotic and complex environments that require skilled leadership and decision-making. Exploring the decision-making process in these environments may help current and future defense programs better project the outcomes of future decisions. Through our research, we identified five categories of decision-making pitfalls for PMs: overoptimism, risk aversion, stovepipe design, strategic networking in the acquisition environment, and communication skills. We recommend conducting future research to validate the findings of our study. Once validated, we recommend refining PM training to focus on the decision-making categories we identified, to help PMs navigate programs more successfully. Approved for public release. Distribution is unlimited.

    What Confidence Should We Have in GRADE?

    Rationale, Aims, and Objectives: Confidence (or belief) that a therapy is effective is essential to practicing clinical medicine. GRADE, a popular framework for developing clinical recommendations, provides a means for assigning how much confidence one should have in a therapy's effect estimate. One's level of confidence (or “degree of belief”) can also be modelled using Bayes theorem. In this paper, we look through both a GRADE and a Bayesian lens to examine how one determines confidence in the effect estimate. Methods: Philosophical examination. Results: The GRADE framework uses a criteria-based method to assign a quality of evidence level. The criteria pertain mostly to considerations of methodological rigour, derived from a modified evidence-based medicine evidence hierarchy. The four levels of quality relate to the level of confidence one should have in the effect estimate. The Bayesian framework is not bound by a predetermined set of criteria. Bayes theorem shows how a rational agent adjusts confidence (i.e., degree of belief) in the effect estimate on the basis of the available evidence. Such adjustments relate to the principles of incremental confirmation and evidence proportionism. Use of the Bayesian framework reveals some potential pitfalls in GRADE's criteria-based thinking on confidence that are out of step with our intuitions on evidence. Conclusions: A rational thinker uses all available evidence to formulate beliefs. The GRADE criteria seem to suggest that we discard some of that information when other, more favoured information (e.g., that derived from clinical trials) is available. The GRADE framework should strive to ensure that the whole evidence base is considered when determining confidence in the effect estimate. The incremental value of such evidence in determining confidence in the effect estimate should be assigned in a manner that is theoretically or empirically justified, such that confidence is proportional to the evidence, both for and against it.
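    To make the contrast concrete, the sketch below (not from the paper) shows Bayesian incremental confirmation: each new piece of evidence updates the degree of belief through its likelihood ratio rather than being discarded once more favoured evidence is available. The prior and the likelihood values are purely hypothetical.

```python
# Minimal sketch of Bayesian incremental confirmation (hypothetical numbers).
def update(prior: float, p_evidence_if_effective: float, p_evidence_if_not: float) -> float:
    """Bayes' theorem for a single hypothesis: returns the posterior degree of belief."""
    numerator = p_evidence_if_effective * prior
    return numerator / (numerator + p_evidence_if_not * (1.0 - prior))

belief = 0.20  # hypothetical prior confidence that the therapy is effective
# Hypothetical evidence items: (P(evidence | effective), P(evidence | not effective)).
for likelihoods in [(0.70, 0.30), (0.60, 0.40), (0.55, 0.50)]:
    belief = update(belief, *likelihoods)
    print(f"updated degree of belief: {belief:.2f}")
```

    Even the last item, whose likelihood ratio is only slightly above one, nudges confidence upward, which is the sense in which confidence stays proportional to the whole body of evidence rather than to a fixed quality category.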