Sustaining Argument: Centralizing the Role of the Writing Center in Program Assessment
In "Why Assessment?" (2009), Gerald Graff argues that the critical conversations arising from regular program assessment are often as important as the actual findings themselves: outcomes assessment, he writes, is not only fundamental to measuring students' performance, but potentially "transformative" in creating a recognizable dialogue about, and a more lively institutional culture of, good teaching (153). Agreeing with Graff's claim, I argue that writing centers should take an active, if not central, role in the assessment of writing program outcomes by positioning themselves at the center of the evaluation process. My experiences as a writing center director involved in our university's writing program assessment, itself less than three years old, have led me to this conclusion.
Nonmodal Growth of Traveling Waves on Blunt Cones at Hypersonic Speeds
The existing database of transition measurements in hypersonic ground facilities has established that, as the nose-tip bluntness is increased, the onset of boundary-layer transition over a circular cone at zero angle of attack shifts downstream. However, this trend is reversed at sufficiently large values of the nose Reynolds number, so that the transition onset location eventually moves upstream with a further increase in nose-tip bluntness. Because modal amplification is too weak to initiate transition at moderate-to-large bluntness values, nonmodal growth has been investigated as the potential basis for a physics-based model of frustum transition. The present analysis investigates the nonmodal growth of traveling disturbances that are initiated within the nose-tip vicinity and peak within the entropy layer. Results show that, with increasing nose bluntness, both planar and oblique traveling disturbances experience appreciable energy amplification up to successively higher frequencies. For moderately blunt cones, the initial nonmodal growth is followed by a partial decay that is more than overcome by eventual modal growth as Mack-mode waves. For larger bluntness values, the Mack-mode waves are not amplified anywhere upstream of the experimentally measured transition location, but the traveling modes still undergo a significant amount of nonmodal growth. This finding does not provide a definitive link between optimal growth and the onset of transition, but it is qualitatively consistent with the experimental observations accompanying frustum transition in the absence of sufficient Mack-mode amplification, namely, a double peak in disturbance amplification and the appearance of transitional events above the boundary-layer edge.
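The abstract above hinges on nonmodal (transient) growth, which can occur even when every individual mode decays. As a hedged illustration of the concept, not the authors' stability solver, a two-dimensional toy model with a non-normal operator shows the effect; the matrix A, its entries, and the time grid are my own choices:

```python
import numpy as np
from scipy.linalg import expm

# Toy non-normal operator: both eigenvalues (-0.05, -0.2) are stable, so
# modal theory predicts monotonic decay, but the non-orthogonal eigenvectors
# permit large transient ("nonmodal") amplification before eventual decay.
A = np.array([[-0.05, 1.0],
              [ 0.00, -0.2]])

def max_energy_growth(A, t):
    # G(t) = ||exp(A t)||_2^2: the largest energy amplification over all
    # unit-energy initial disturbances, evaluated at time t.
    return np.linalg.norm(expm(A * t), 2) ** 2

times = np.linspace(0.0, 30.0, 121)
G = [max_energy_growth(A, t) for t in times]
G_max = max(G)  # substantially above 1 even though every mode decays
```

In this sketch the transient peak near t of roughly 9 is followed by decay, the qualitative shape the abstract describes when nonmodal growth is not relayed by Mack-mode amplification.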
Application of Monte Carlo Algorithms to the Bayesian Analysis of the Cosmic Microwave Background
Power spectrum estimation and evaluation of associated errors in the presence
of incomplete sky coverage; non-homogeneous, correlated instrumental noise; and
foreground emission is a problem of central importance for the extraction of
cosmological information from the cosmic microwave background. We develop a
Monte Carlo approach for the maximum likelihood estimation of the power
spectrum. The method is based on an identity for the Bayesian posterior as a
marginalization over unknowns. Maximization of the posterior involves the
computation of expectation values as a sample average from maps of the cosmic
microwave background and foregrounds given some current estimate of the power
spectrum or cosmological model, and some assumed statistical characterization
of the foregrounds. Maps of the CMB are sampled by a linear transform of a
Gaussian white noise process, implemented numerically with conjugate gradient
descent. For time series data with N_{t} samples, and N pixels on the sphere,
the method has a computational expense $KO[N^{2} + N_{t} \log N_{t}]$,
where K is a prefactor determined by the convergence rate of conjugate gradient
descent. Preconditioners for conjugate gradient descent are given for scans
close to great circle paths, and the method allows partial sky coverage for
these cases by numerically marginalizing over the unobserved, or removed,
region.

Comment: submitted to Ap
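The sampling step described above comes down to solving a large linear system by conjugate gradient descent. A minimal sketch of that core idea on a toy 1-D "sky", assuming a smooth Gaussian signal covariance S and white noise covariance N; all sizes, kernels, and names are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Toy 1-D sky: smooth signal covariance S (squared-exponential kernel with a
# small ridge for numerical stability) and white noise covariance N.
S = np.array([[np.exp(-0.5 * ((i - j) / 4.0) ** 2) for j in range(n)]
              for i in range(n)]) + 1e-4 * np.eye(n)
N = 0.1 * np.eye(n)

s_true = np.linalg.cholesky(S) @ rng.standard_normal(n)   # simulated signal
d = s_true + np.sqrt(0.1) * rng.standard_normal(n)        # noisy "data" map

Sinv, Ninv = np.linalg.inv(S), np.linalg.inv(N)
A = Sinv + Ninv     # system matrix of the Wiener-filter (mean-field) equation
b = Ninv @ d        # right-hand side: (S^-1 + N^-1) x = N^-1 d

def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
    # Standard CG for a symmetric positive-definite system A x = b.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = conjugate_gradient(A, b)   # Wiener-filtered estimate of the signal
```

The paper's sampler additionally adds white-noise fluctuation terms to the right-hand side to draw constrained realizations rather than the mean, and uses preconditioners tuned to the scan geometry; neither is shown here.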
Improving rainfall nowcasting and urban runoff forecasting through dynamic radar-raingauge rainfall adjustment
The insufficient accuracy of radar rainfall estimates is a major source of uncertainty in short-term quantitative precipitation forecasts (QPFs) and the associated urban flood forecasts. This study examines the possibility of improving QPFs and urban runoff forecasts through the dynamic adjustment of radar rainfall estimates based on raingauge measurements. Two commonly used techniques, Kriging with External Drift (KED) and mean field bias correction, were used to adjust radar rainfall estimates for a large area of the UK (250,000 km²) based on raingauge data. QPFs were produced using the original and the adjusted radar rainfall estimates as input to a nowcasting algorithm. Runoff forecasts were generated by feeding the different QPFs into the storm water drainage model of an urban catchment in London. The performance of the adjusted precipitation estimates and the associated forecasts was tested against local rainfall and flow records. The results show that adjustments performed at overly large spatial scales cannot provide tangible improvements in rainfall estimates, or in the associated QPFs and runoff forecasts, at small scales such as those of urban catchments. Moreover, the results suggest that KED-adjusted rainfall estimates may be unsuitable for generating QPFs, as this method disrupts the continuity of spatial structures between consecutive rainfall fields.
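Of the two adjustment techniques, mean field bias correction is simple enough to sketch: a single multiplicative factor, the ratio of gauge to radar rainfall totals at co-located points, rescales the whole radar field. The numbers below are invented for illustration; KED additionally requires a geostatistical kriging step that is not shown here:

```python
import numpy as np

# Hypothetical co-located rainfall samples (mm/h): radar pixels over gauges.
radar_at_gauges = np.array([2.1, 0.8, 5.0, 3.2])
gauge_obs = np.array([2.9, 1.1, 6.4, 4.0])

# Mean field bias: one domain-wide multiplicative correction factor.
bias = gauge_obs.sum() / radar_at_gauges.sum()

radar_field = np.array([[1.5, 2.0],
                        [0.0, 4.2]])      # toy 2x2 radar grid (mm/h)
adjusted_field = bias * radar_field       # bias-corrected rainfall estimates
```

Because the factor is constant over the domain, this correction cannot repair spatially varying errors, which is one reason the study also evaluates KED and examines the scale at which adjustment is performed.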
Dissecting regulatory T cell expansion using polymer microparticles presenting defined ratios of self-antigen and regulatory cues
Biomaterials allow precise control over the combination and release of cargo needed to engineer cell outcomes. These capabilities are particularly attractive for new candidate therapies to treat autoimmune diseases, conditions in which dysfunctional immune cells create pathogenic tissue environments during attack of self-molecules termed self-antigens. Here we extend past studies showing that combinations of a small-molecule immunomodulator co-delivered with self-antigen induce antigen-specific regulatory T cells. In particular, we sought to elucidate how different ratios of these components loaded in degradable polymer particles shape the antigen-presenting cell (APC)-T cell interactions that drive differentiation of T cells toward either inflammatory or regulatory phenotypes. Using rapamycin (rapa) as a modulatory cue and a myelin self-peptide (myelin oligodendrocyte glycoprotein, MOG), the self-antigen attacked during multiple sclerosis (MS), we integrate these components into polymer particles over a range of ratios and concentrations without altering the physicochemical properties of the particles. Using primary cell co-cultures, we show that while all ratios of rapa:MOG significantly decreased expression of co-stimulation molecules on dendritic cells (DCs), these levels were insensitive to the specific ratio. During co-culture with primary T cell receptor transgenic T cells, we demonstrate that the ratio of rapa:MOG controls the expansion and differentiation of these cells. In particular, at shorter time points, higher ratios induce regulatory T cells most efficiently, whereas at longer time points the process is not sensitive to the specific ratio. We also found corresponding changes in gene expression and inflammatory cytokine secretion at these times.
The in vitro results of this study contribute to in vitro regulatory T cell expansion techniques and provide insight for future studies exploring other modulatory effects of rapa, such as the induction of maintenance or survival cues.
Methanotrophic Bacteria for Nutrient Removal from Wastewater: Attached Film System
It was hypothesized that nutrient removal from wastewater could be achieved by using methane-oxidizing bacteria (methanotrophs). Because methane is inexpensive, it can be used as an energy source to encourage bacterial growth that assimilates nitrogen, phosphorus, and other trace elements. This initial feasibility study used synthetic nutrient mixtures and secondary sewage effluent as feed to a laboratory-scale methanotrophic attached-film expanded bed (MAFEB) reactor operated at 35°C and 20°C. The MAFEB system operated successfully at low nutrient concentrations under a variety of nutrient-limited conditions. Using a synthetic nutrient mixture with a nitrogen:phosphorus feed ratio (w/w) of 9:1, phosphate concentrations were reduced from 1.3 mg P/L to below 0.1 mg P/L, and ammonia was reduced from 12 mg N/L to approximately 1 mg N/L on a continuous-flow basis, with a bed hydraulic retention time of 4.8 hours. The average nutrient uptake rates from synthetic nutrient mixtures were 100 mg nitrogen and 10 mg phosphorus/L of expanded bed/d. Nutrient assimilation rates increased with increasing growth rate and with increasing temperature. Nitrogen:phosphorus uptake ratios varied from 8 to 13, and the observed yield varied from 0.11 to 0.16 g volatile solids (VS)/g chemical oxygen demand (COD). Nutrient removal from secondary sewage effluent was successfully demonstrated using effluent from two local treatment plants. Nutrient concentrations of 10-15 mg N/L and 1.0-1.8 mg P/L were consistently reduced to below 1 mg N/L and 0.1 mg P/L. No supplemental nutrients were added to the sewage to attain these removal efficiencies, since the nutrient mass ratios were similar to those required by the methanotrophs. Removal rates were lower at 20°C than at 35°C, but high removal efficiencies were maintained at both temperatures.
Effluent suspended solids concentrations ranged from 8 to 30 mg volatile suspended solids (VSS)/L, and the effluent soluble COD concentration averaged 30 mg/L
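The concentrations reported for the synthetic-mixture runs imply removal efficiencies and an N:P uptake ratio that can be checked directly; this is just arithmetic on the numbers quoted above:

```python
# Feed and effluent concentrations (mg/L) for the synthetic nutrient mixture.
p_in, p_out = 1.3, 0.1
n_in, n_out = 12.0, 1.0

p_removal = (p_in - p_out) / p_in            # ~92% phosphorus removal
n_removal = (n_in - n_out) / n_in            # ~92% nitrogen removal
np_uptake = (n_in - n_out) / (p_in - p_out)  # ~9.2, inside the reported 8-13 range
```

The computed uptake ratio of roughly 9 also matches the 9:1 N:P feed ratio, consistent with the observation that no supplemental nutrients were needed for the sewage effluent runs.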
A Hybrid N-body--Coagulation Code for Planet Formation
We describe a hybrid algorithm to calculate the formation of planets from an
initial ensemble of planetesimals. The algorithm uses a coagulation code to
treat the growth of planetesimals into oligarchs and explicit N-body
calculations to follow the evolution of oligarchs into planets. To validate the
N-body portion of the algorithm, we use a battery of tests in planetary
dynamics. Several complete calculations of terrestrial planet formation with
the hybrid code yield good agreement with previously published calculations.
These results demonstrate that the hybrid code provides an accurate treatment
of the evolution of planetesimals into planets.

Comment: Astronomical Journal, accepted; 33 pages + 11 figures
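The N-body portion of such a hybrid scheme typically rests on a symplectic integrator. As a hedged sketch of that ingredient, not the paper's code, here is a kick-drift-kick leapfrog with softened direct-summation gravity; the dimensionless units (G = 1) and softening length are my assumptions:

```python
import numpy as np

G = 1.0  # gravitational constant in code units (assumption: dimensionless units)

def accelerations(pos, mass, soft=1e-3):
    """Pairwise softened gravitational accelerations for a small N-body system."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                dr = pos[j] - pos[i]
                r2 = dr @ dr + soft ** 2
                acc[i] += G * mass[j] * dr / r2 ** 1.5
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so long-term energy drift stays bounded."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc   # half kick
        pos += dt * vel         # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc   # half kick
    return pos, vel
```

A battery of tests like the one in the abstract would check, for example, that a near-circular two-body orbit keeps its separation over many periods; production codes replace the O(N²) direct sum with tree or grid methods once N grows.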
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
We revisit the problem of exact CMB likelihood and power spectrum estimation
with the goal of minimizing computational cost through linear compression. This
idea was originally proposed for CMB purposes by Tegmark et al. (1997), and
here we develop it into a fully working computational framework for large-scale
polarization analysis, adopting WMAP as a worked example. We compare five
different linear bases (pixel space, harmonic space, noise covariance
eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise
covariance eigenvectors) in terms of compression efficiency, and find that the
computationally most efficient basis is the signal-to-noise eigenvector basis,
which is closely related to the Karhunen-Loève and Principal Component
transforms, in agreement with previous suggestions. For this basis, the
information in 6836 unmasked WMAP sky map pixels can be compressed into a
smaller set of 3102 modes, with a maximum error increase of 3.8% for any
single multipole, and a maximum shift of 0.006 in the mean values of a
joint distribution of an amplitude-tilt model. This
compression reduces the computational cost of a single likelihood evaluation by
a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust
likelihood by implicitly regularizing nearly degenerate modes. Finally, we use
the same compression framework to formulate a numerically stable and
computationally efficient variation of the Quadratic Maximum Likelihood
implementation that requires less than 3 GB of memory and 2 CPU minutes per
iteration, rendering low-ℓ QML CMB power spectrum analysis fully tractable
on a standard laptop.

Comment: 13 pages, 13 figures, accepted by ApJ
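The signal-to-noise eigenvector compression can be sketched as a generalized eigenproblem: modes whose signal-to-noise eigenvalue is well below one carry essentially no cosmological information and can be discarded. The toy covariances, sizes, and threshold below are my own choices, not the WMAP setup:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 100
# Toy covariances: a low-rank "signal" covariance S and white noise N.
u = rng.standard_normal((n, 5))
S = u @ u.T + 1e-8 * np.eye(n)
N = np.eye(n)

# Signal-to-noise (Karhunen-Loève) basis: solve S v = lam N v and sort the
# modes by decreasing signal-to-noise eigenvalue lam.
lam, V = eigh(S, N)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# Keep only modes with appreciable signal-to-noise; discard the rest.
keep = lam > 0.1
B = V[:, keep]              # compression operator, maps n pixels -> few modes
d = rng.standard_normal(n)  # a toy data vector in pixel space
d_compressed = B.T @ d      # the compressed data used in likelihood evaluations
```

In this toy the signal lives in a 5-dimensional subspace, so only 5 of the 100 modes survive; the paper's analogous step reduces 6836 pixels to 3102 modes while bounding the per-multipole error increase.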