1,059 research outputs found

    Europeanisation at the Urban Level: Local Actors, Institutions and the Dynamics of Multi-Level Interaction

    Involvement in EU-sponsored programmes has provided urban institutions and actors across Europe with unprecedented access to new sources of information, legitimacy and, not least, financial support. From established local authorities to fledgling neighbourhood partnerships, actors across the urban spectrum see increased European involvement as a central component of innovative governance. This paper evaluates whether working at the European level has provoked shifts in the institutionalised norms, beliefs, and values held by participants in governance at the city level, focusing in particular on the experience of British cities. To do so, the paper elaborates a four-part framework for Europeanisation at the urban level and applies it to the empirical cases of Birmingham and Glasgow. It then draws preliminary conclusions about how involvement in EU Structural Fund programmes affects embedded norms and practices in cities across the continent.

    DETERMINATION OF CRITICAL EXPERIMENT CORRELATIONS VIA THE MONTE CARLO SAMPLING TECHNIQUE

    Critical benchmark experiments are the foundation of validation for the computational codes used in criticality safety analyses because they provide a basis for comparison between calculated results and the physical world. These experiments are often performed in series, varying a limited number of parameters to isolate the effect of each independent parameter. The use of common materials, geometries, machines, procedures, detectors, or other shared features can create correlations among the resulting experiments. Most validation techniques used in criticality safety practice do not treat these correlations explicitly, and the effect of this omission is unclear because the correlations themselves are not well known. Generalized linear least squares methods used for advanced validation or in data adjustment studies also rely on correlation coefficients to constrain the adjustments allowed in critical experiment results. The purpose of this dissertation is to develop a methodology for calculating critical experiment correlations using a Monte Carlo sampling technique. This technique allows the uncertainty in each individual experiment to be determined, and identical perturbations applied to shared parameters provide estimates of the covariance between the experiments. The correlation coefficient is then calculated by dividing the covariance between any pair of experiments by the product of the individual experiment standard deviations. The technique is applied to high-enriched uranium solution experiments and low-enriched uranium pin lattice experiments to determine correlation coefficients for these types of systems. The important parameters governing the correlation coefficients are identified, and the results are compared with correlation coefficients in the literature determined using other methods at other institutions. The general method for determining the correlation coefficients is presented along with conclusions and recommendations for further study in this area.
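
    The correlation estimate described above reduces to a sample covariance over repeated, jointly perturbed calculations. The Python sketch below is a minimal illustration of that arithmetic under assumed mock inputs: the array keff_samples and its dimensions are hypothetical stand-ins for the re-sampled k-eff results of each experiment, not output from any actual criticality code.

        import numpy as np

        # Mock re-sampled results: keff_samples[n, i] is the calculated k-eff of
        # experiment i for the n-th Monte Carlo sample. Shared parameters receive
        # identical perturbations across experiments; independent parameters do not.
        rng = np.random.default_rng(42)
        n_samples, n_experiments = 500, 3
        shared = rng.normal(0.0, 1.0, size=(n_samples, 1))        # shared-parameter draw
        independent = rng.normal(0.0, 1.0, size=(n_samples, n_experiments))
        keff_samples = 1.0 + 2e-3 * shared + 1e-3 * independent   # hypothetical k-eff values

        # Uncertainty in each individual experiment: standard deviation over samples.
        sigma = keff_samples.std(axis=0, ddof=1)

        # Covariance between experiments, then the correlation coefficient:
        # corr(i, j) = cov(i, j) / (sigma_i * sigma_j).
        cov = np.cov(keff_samples, rowvar=False)
        corr = cov / np.outer(sigma, sigma)
        print(np.round(corr, 3))

    Because the shared perturbation contributes identically to every experiment, the off-diagonal correlation coefficients come out positive, mirroring the effect of shared materials, machines, or procedures in a real experiment series.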

    Power distribution calculations in the High Flux Isotope Reactor for various control blade tantalum loadings

    The High Flux Isotope Reactor (HFIR), located at Oak Ridge National Laboratory, has seen occasional cladding damage (local swelling or blistering) in the reactor control elements over the past several years. The control elements are located in an annulus between the core and reflector and contain three regions: a black region containing Eu2O3 dispersed in an aluminum matrix; a gray region with tantalum particles in an aluminum matrix; and a white region (or follower) of perforated aluminum. The cladding damage has been limited to the tantalum region, and it is expected that reducing the tantalum fraction in this segment will reduce or eliminate the potential for clad damage. The purpose of this research is to determine the extent to which the tantalum fraction can be reduced without unacceptable impacts on the core power shape. A two-dimensional R-Z geometry model of the HFIR has been created for use in DORT for neutronics calculations. Weighted cross sections are generated using the SCALE package by running BONAMI, NITAWL, and XSDRNPM; XSDRNPM is used for cell weighting and then region weighting of the cross sections. These cross sections are mixed into the necessary mixtures for use in DORT by GIP and are then used in the DORT R-Z model to generate flux distributions and eigenvalues for different combinations of tantalum concentration and blade position. An initial reference case has been run using the HFIRCE-3 (Critical Experiment Series 3) core to determine the bias introduced by simplifying the HFIR to an R-Z model. The modeled critical configuration yielded an eigenvalue of 0.9907, which is subsequently taken to represent criticality in the core. This bias is introduced by azimuthal asymmetries not included in the R-Z model, by approximations in cross section generation, and by various other discretizations of continuous variables. Spatial power distributions, however, are not affected by the bias. A set of runs has been used to determine an approximate relationship between k-effective and blade position for a series of tantalum concentrations. The current initial loading of 38 volume percent and the target loading of 30 volume percent are the focus of this study. For each concentration, k-effective is calculated with the control blades at heights of 14, 17.5, 20, and 25 inches; these values are used to calculate an approximate critical blade position for each concentration. The main concern with reduced tantalum loadings is an increase in power peaking in the core: the control elements must be run closer together at beginning-of-cycle conditions, pinching the power shape between the large absorber regions. The limiting thermal constraint is power peaking at the core exit of the inner fuel element. Peaking factors are calculated by dividing the fission neutron production rate in any interval by the average fission rate, with fission rates assumed proportional to power. The results indicate no difference in power shape between initial loadings of 38 and 30 volume percent tantalum, so the proposed reduction is feasible since peaking is not increased. A further decrease in loading would not be feasible, as peaking increases are observed for initial loadings below 25 volume percent.
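
    As a concrete illustration of the critical-position step described above, the short Python sketch below interpolates a blade height at which k-effective equals the bias-adjusted critical value of 0.9907. The eigenvalues listed are invented placeholders, not actual DORT output; only the blade heights and the 0.9907 criterion come from the text. The peaking-factor arithmetic at the end follows the definition given in the abstract, again over mock interval data.

        import numpy as np

        heights = np.array([14.0, 17.5, 20.0, 25.0])        # blade heights, inches (from the text)
        keff = np.array([0.9780, 0.9872, 0.9941, 1.0046])   # hypothetical DORT eigenvalues
        k_critical = 0.9907                                  # bias-adjusted criticality (R-Z model)

        # Interpolate height as a function of k-eff to estimate the critical
        # blade position for this tantalum loading.
        critical_height = np.interp(k_critical, keff, heights)
        print(f"approximate critical blade position: {critical_height:.2f} in")

        # Peaking factor per the abstract: fission production in each interval
        # divided by the average, with fission rate taken as proportional to power.
        fission_rate = np.array([0.91, 1.04, 1.19, 0.86])    # mock interval-wise rates
        peaking = fission_rate / fission_rate.mean()
        print(f"maximum peaking factor: {peaking.max():.3f}")

    Repeating the interpolation for each tantalum concentration yields the approximate critical blade position as a function of loading, which is then checked against the peaking constraint.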

    Eucharistic fellowship -- are Friends included?

    COMPARATIVE LAW - CASES AND MATERIALS. By Rudolf B. Schlesinger. Brooklyn: The Foundation Press, Inc., 1950

    Assessing Input Brand Loyalty among U.S. Agricultural Producers

    This study explores the prevalence and determinants of brand loyalty for agricultural input products. Results suggest that loyalty for both expendable and capital inputs is high among commercial agricultural producers in the United States. Producer attitudes, beliefs, and some demographic characteristics are useful identifiers of brand loyalty among commercial producers.
    Keywords: brand loyalty, dealer loyalty, capital inputs, expendable inputs, farmer purchase decisions. Subjects: Agricultural Finance; Consumer/Household Economics; Marketing. JEL codes: Q10, Q13, Q14.

    Assessing Agricultural Input Brand Loyalty Among U.S. Mid-Size and Commercial Producers

    This study explores the prevalence and determinants of brand loyalty for agricultural input products. Results suggest that loyalty for both expendable and capital inputs is high among commercial farmers. Farmer attitudes, beliefs, and some demographic characteristics are useful identifiers of brand-loyal farmers.
    Keywords: brand loyalty, capital inputs, expendable inputs, farmer purchase decisions. Subject: Farm Management.

    Evidence Inference 2.0: More Data, Better Models

    How do we most effectively treat a disease or condition? Ideally, we could consult a database of evidence gleaned from clinical trials to answer such questions. Unfortunately, no such database exists; clinical trial results are instead disseminated primarily via lengthy natural language articles. Perusing all such articles would be prohibitively time-consuming for healthcare practitioners; they instead tend to depend on manually compiled systematic reviews of the medical literature to inform care. NLP may speed this process up, and eventually facilitate immediate consultation of published evidence. The Evidence Inference dataset was recently released to facilitate research toward this end. The task entails inferring the comparative performance of two treatments, with respect to a given outcome, from a particular article (describing a clinical trial) and identifying supporting evidence. For instance: Does this article report that chemotherapy performed better than surgery for five-year survival rates of operable cancers? In this paper, we collect additional annotations to expand the Evidence Inference dataset by 25%, provide stronger baseline models, systematically inspect the errors that these make, and probe dataset quality. We also release an abstract-only (as opposed to full-text) version of the task for rapid model prototyping. The updated corpus, documentation, and code for new baselines and evaluations are available at http://evidence-inference.ebm-nlp.com/.
    Comment: Accepted as a workshop paper at BioNLP. Updated results from SciBERT to Biomed RoBERTa.
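
    To make the task format concrete, here is a minimal Python sketch of what one Evidence Inference instance looks like as a three-class prediction problem. The field and label names are illustrative stand-ins, not the dataset's exact schema; consult the released corpus for the real format.

        from dataclasses import dataclass

        # Illustrative three-way label space for comparative treatment efficacy.
        LABELS = ["significantly decreased", "no significant difference", "significantly increased"]

        @dataclass
        class EvidenceInferenceExample:
            article_text: str    # full text (or abstract) of the trial report
            intervention: str    # e.g. "chemotherapy"
            comparator: str      # e.g. "surgery"
            outcome: str         # e.g. "five-year survival"
            label: int           # index into LABELS
            evidence_span: str   # supporting sentence(s) extracted from the article

        example = EvidenceInferenceExample(
            article_text="...full trial report...",
            intervention="chemotherapy",
            comparator="surgery",
            outcome="five-year survival",
            label=2,
            evidence_span="Five-year survival was significantly higher in the chemotherapy arm...",
        )
        print(LABELS[example.label])

    A baseline model then conditions on (article_text, intervention, comparator, outcome) to predict the label and to identify the supporting evidence span.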

    Once Upon a Mattress (March 8-11, 2012)

    Program for Once Upon a Mattress (March 8-11, 2012)