1,300 research outputs found

    Open to All? Different Cultures, Same Communities

    Get PDF
    Produced for the Interfaith Housing Center of the Northern Suburbs with the support of the Chicago Community Trust, this report aims to better understand immigrants living in the northern suburbs of Chicago -- who they are, where they live in relation to housing patterns and conditions, and the extent to which they exert political influence on local housing decisions. It was produced as part of The Chicago Community Trust's three-year Immigrant Integration Initiative, which began in 2007 to develop strategies that could help immigrants successfully integrate into the civic and economic fabric of their new communities. A goal of this report is to provide a firm foundation for important discussions -- and decisions -- facing our communities.

    A model to determine the effect of collagen fiber alignment on heart function post myocardial infarction

    Get PDF
    BACKGROUND: Adverse remodeling of the left ventricle (LV) following myocardial infarction (MI) leads to heart failure. Recent studies have shown that scar anisotropy is a determinant of cardiac function post-MI; however, it remains unclear how changes in extracellular matrix (ECM) organization and structure contribute to changes in LV function. The objective of this study is to develop a model to identify potential mechanisms by which collagen structure and organization affect LV function post-MI. METHODS: A four-region, multi-scale, cylindrical model of the post-MI LV was developed. The mechanical properties of the infarct region are governed by a constitutive equation based on the uncrimping of collagen fibers. The parameters of this constitutive equation include collagen orientation, angular dispersion, fiber stiffness, crimp angle, and density. These parameters were varied systematically to elucidate the relationship between collagen properties and LV function. RESULTS: The mathematical model of the LV revealed several factors that influence cardiac function post-MI. LV function was maximized when collagen fibers were aligned longitudinally. Increased collagen density was also found to improve stroke volume for longitudinal alignments, while increased fiber stiffness decreased stroke volume for circumferential alignments. CONCLUSIONS: The results suggest that cardiac function post-MI is best preserved through increased circumferential compliance. Further, this study identifies several collagen fiber-level mechanisms that could potentially regulate both infarct-level and organ-level mechanics. Improved understanding of the multi-scale relationships between the ECM and LV function will be beneficial in the design of new diagnostic and therapeutic technologies.

    THE EPIDERMIS AND CYCLIC AMP

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/73687/1/j.1365-2133.1974.tb06390.x.pd

    Foodmaster and three stories

    Get PDF
    Graduation date: 2004. The purpose of this thesis was to create a sustained piece of fiction that both represented my growth as a writer throughout my time at Oregon State University and wove together a mixture of imagination, language, and creativity. My hope was to write a novella that incorporated and drew from themes including work, community, and family relationships, and was also an exploration of the very structure and form of literary fiction. After completing the novella, I found that similar themes continued to appear within my fiction during my ongoing growth as a writer. What I ended up with was a novella and a collection of related stories that reflected the influences of my advisor Tracy Daugherty and his tutelage, the courses that I took at this university and my undergraduate university, and my own personal history. This thesis was written over a two-year period, during which drafts of the novella and stories were written and rewritten. Each story and chapter was submitted to a writing workshop, read and edited by my major and minor advisors, and carefully reworked and redrafted after much scrutiny and attention. During the course of writing this thesis, many things influenced me, the most prominent being the world of fiction that existed all around me. I was influenced by fiction that I was reading in my course work, such as Donald Barthelme and Philip Roth, but writers that I had grown up with, like Edgar Allan Poe and Ray Bradbury, also influenced me. Beyond the world of published fiction, I found not only influence but, more importantly, inspiration from the work and criticism of the writers and students within the Creative Writing Program here at Oregon State University. The end result of these two years of work, study, writing, and criticism was a piece of fiction that I am proud of and plan to publish. This collection of fiction represents not only a sustained study of the craft of creative writing but also serves as an exploration of my own voice and style and an awakening of my identity as a fiction writer.

    Viscoelasticity and metastability limit in supercooled liquids

    Full text link
    A supercooled liquid is said to have a kinetic spinodal if a temperature Tsp exists below which the liquid relaxation time exceeds the crystal nucleation time. We revisit classical nucleation theory, taking into account the viscoelastic response of the liquid to the formation of crystal nuclei, and find that the kinetic spinodal is strongly influenced by elastic effects. We introduce a dimensionless parameter \lambda, which is essentially the ratio between the infinite-frequency shear modulus and the enthalpy of fusion of the crystal. In systems where \lambda is larger than a critical value \lambda_c, the metastability limit is totally suppressed, independently of the surface tension. On the other hand, if \lambda < \lambda_c, a kinetic spinodal is present, and the time needed to experimentally observe it scales as exp[\omega/(\lambda_c-\lambda)^2], where \omega is roughly the ratio between surface tension and enthalpy of fusion.

    Theory and Application of Dissociative Electron Capture in Molecular Identification

    Get PDF
    The coupling of an electron monochromator (EM) to a mass spectrometer (MS) has created a new analytical technique, EM-MS, for the investigation of electrophilic compounds. This method provides a powerful tool for molecular identification of compounds contained in complex matrices, such as environmental samples. EM-MS expands the application and selectivity of traditional MS through the inclusion of a new dimension in the space of molecular characteristics -- the electron resonance energy spectrum. However, before this tool can realize its full potential, it will be necessary to create a library of resonance energy scans from standards of the molecules for which EM-MS offers a practical means of detection. Here, an approach supplementing direct measurement with chemical inference and quantum scattering theory is presented to demonstrate the feasibility of directly calculating resonance energy spectra. This approach makes use of the symmetry of the transition-matrix element of the captured electron to discriminate between the spectra of isomers. As a way of validating this approach, the resonance values for twenty-five nitrated aromatic compounds were measured along with their relative abundances. Subsequently, the spectra for the isomers of nitrotoluene were shown to be consistent with the symmetry-based model. The initial success of this treatment suggests that it might be possible to predict negative ion resonances and thus create a library of EM-MS standards. Comment: 18 pages, 7 figures.

    Unbiased Comparative Evaluation of Ranking Functions

    Full text link
    Eliciting relevance judgments for ranking evaluation is labor-intensive and costly, motivating careful selection of which documents to judge. Unlike traditional approaches that make this selection deterministically, probabilistic sampling has shown intriguing promise, since it enables the design of estimators that are provably unbiased even when reusing data with missing judgments. In this paper, we first unify and extend these sampling approaches by viewing the evaluation problem as a Monte Carlo estimation task that applies to a large number of common IR metrics. Drawing on the theoretical clarity that this view offers, we tackle three practical evaluation scenarios: comparing two systems, comparing k systems against a baseline, and ranking k systems. For each scenario, we derive an estimator and a variance-optimizing sampling distribution while retaining the strengths of sampling-based evaluation, including unbiasedness, reusability despite missing data, and ease of use in practice. In addition to the theoretical contribution, we empirically evaluate our methods against previously used sampling heuristics and find that they generally cut the number of required relevance judgments at least in half. Comment: Under review; 10 pages.
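    The sampling idea in this abstract can be illustrated with a minimal inverse-propensity (Horvitz-Thompson style) sketch: each judged document's contribution is reweighted by the probability with which it was selected for judging, making the estimate unbiased despite missing judgments. This is a generic illustration with hypothetical data, not the estimators actually derived in the paper.

```python
import random

def ht_estimate(relevance, probs, judged):
    # Inverse-propensity estimate of mean relevance: each judged
    # document's label is weighted by 1/p_i, the inverse of its
    # inclusion probability, which makes the estimator unbiased
    # even though most documents were never judged.
    n = len(relevance)
    return sum(relevance[i] / probs[i] for i in judged) / n

def sample_judgments(probs, rng):
    # Independently judge each document with its own probability.
    return [i for i, p in enumerate(probs) if rng.random() < p]

rng = random.Random(0)
relevance = [1, 0, 1, 0, 1, 1, 0, 1]              # hypothetical labels
probs = [0.9, 0.3, 0.9, 0.3, 0.9, 0.9, 0.3, 0.9]  # judging probabilities

# Averaging the estimator over many sampled judgment sets recovers
# the true mean relevance (5/8 = 0.625), illustrating unbiasedness.
trials = [ht_estimate(relevance, probs, sample_judgments(probs, rng))
          for _ in range(20000)]
avg = sum(trials) / len(trials)
```

    A single sampled judgment set gives a noisy estimate; the unbiasedness guarantee concerns the average over the sampling distribution, which is what makes reusing partial judgment pools across systems statistically sound.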

    Predicting software Size and Development Effort: Models Based on Stepwise Refinement

    Get PDF
    This study designed a Software Size Model and an Effort Prediction Model, then performed an empirical analysis of these two models. Each model design began with identifying its objectives, which led to describing the concept to be measured and the meta-model. The numerical assignment rules were then developed, providing a basis for size measurement and effort prediction across software engineering projects. The Software Size Model was designed to test the hypothesis that a software size measure represents the amount of knowledge acquired and stored in software artifacts, and the amount of time it took to acquire and store this knowledge. The Effort Prediction Model is based on the estimation-by-analogy approach and was designed to test the hypothesis that this model will produce reasonably close predictions when it uses historical data that conforms to the Software Size Model. The empirical study implemented each model, collected and recorded software size data from software engineering project deliverables, simulated effort prediction using the jackknife approach, and computed the absolute relative error and magnitude of relative error (MRE) statistics. This study resulted in 35.3% of the predictions having an MRE value at or below 25%. This result satisfies the criterion established for the study of having at least 31% of the predictions with an MRE of 25% or less. This study is significant for three reasons. First, no subjective factors were used to estimate effort. The elimination of subjective factors removes a source of error in the predictions and makes the study easier to replicate. Second, both models were described using metrology and measurement theory principles. This allows others to consistently implement the models and to modify them while maintaining the integrity of the models' objectives. Third, the study's hypotheses were validated even though the software artifacts used to collect the software size data varied significantly in both content and quality. Recommendations for further study include applying the Software Size Model to other data-driven estimation models, collecting and using software size data from industry projects, looking at alternatives for how text-based software knowledge is identified and counted, and studying the impact of project cycles and project roles on predicting effort.
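    The accuracy criterion used in this abstract (MRE, and the share of predictions with MRE at or below 25%, often written PRED(25)) and leave-one-out ("jackknife") estimation by analogy can be sketched as follows. The data and the nearest-neighbour analogy rule are hypothetical stand-ins, not the study's actual size measure or prediction model.

```python
def mre(actual, predicted):
    # Magnitude of relative error: |actual - predicted| / actual.
    return abs(actual - predicted) / actual

def pred_at(actuals, predictions, threshold=0.25):
    # Fraction of predictions whose MRE is at or below the threshold.
    errors = [mre(a, p) for a, p in zip(actuals, predictions)]
    return sum(e <= threshold for e in errors) / len(errors)

def jackknife_predict(sizes, efforts):
    # Leave-one-out estimation by analogy: predict each project's
    # effort as the effort of its nearest neighbour by size among
    # the remaining projects (a deliberately simple analogy rule).
    preds = []
    for i, s in enumerate(sizes):
        j = min((k for k in range(len(sizes)) if k != i),
                key=lambda k: abs(sizes[k] - s))
        preds.append(efforts[j])
    return preds

sizes = [10, 12, 30, 33]         # hypothetical size measurements
efforts = [100, 110, 300, 320]   # hypothetical recorded efforts
preds = jackknife_predict(sizes, efforts)
score = pred_at(efforts, preds)  # share of predictions within 25% MRE
```

    Holding each project out of its own prediction, as the jackknife does, is what lets a single historical dataset stand in for genuinely unseen projects when judging the criterion.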

    CUTANEOUS ANGIITIS (VASCULITIS)

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/66185/1/j.1365-4362.1978.tb06119.x.pd

    Evaluating Variable-Length Multiple-Option Lists in Chatbots and Mobile Search

    Full text link
    In recent years, the proliferation of smart mobile devices has led to the gradual integration of search functionality within mobile platforms. This has created an incentive to move away from the "ten blue links" metaphor, as mobile users are less likely to click on them, expecting to get the answer directly from the snippets. In turn, this has revived interest in Question Answering. Then along came chatbots, conversational systems, and messaging platforms, where the user's needs could be better served by the system asking follow-up questions in order to better understand the user's intent. While a user would typically expect a single response to any utterance, a system could also return multiple options for the user to select from, based on different system understandings of the user's intent. However, this possibility should not be overused, as the practice could confuse and/or annoy the user. How to produce good variable-length lists, given the conflicting objectives of staying short while maximizing the likelihood of having a correct answer included in the list, is an underexplored problem. It is also unclear how to evaluate a system that tries to do this. Here we aim to bridge this gap. In particular, we define some necessary and some optional properties that an evaluation measure fit for this purpose should have. We further show that existing evaluation measures from the IR tradition are not entirely suitable for this setup, and we propose novel evaluation measures that address it satisfactorily. Comment: 4 pages; in Proceedings of SIGIR 201
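    One way to see the tension this abstract describes is a toy scoring rule that rewards including the correct answer but discounts longer lists. This rule is invented here purely for illustration; it is not one of the measures proposed in the paper.

```python
def success_at_cost(options, correct_answer, alpha=0.1):
    # Toy measure (illustrative only): 1.0 for a single-option list
    # containing the correct answer, minus alpha for each extra
    # option shown; 0.0 if the correct answer is missing, however
    # short the list is.
    hit = 1.0 if correct_answer in options else 0.0
    return max(0.0, hit - alpha * (len(options) - 1))
```

    Under this rule a confident single answer scores 1.0, a three-option list containing the answer scores about 0.8, and any list without the answer scores 0.0, so a system is rewarded for staying short only when it can still cover the correct answer.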