
    Towards Circular Economy for Steel - Assessing the Efficiency of Yellow Gypsum Synthesis from BOF Slags

    The large quantities of basic oxygen furnace (BOF) slag produced at the Tata Steel Port Talbot steelworks have no existing recycling route and have formed a large legacy “slag mountain” over the years. The closure of all of Britain’s coal power plants by 2025 could create a shortage of gypsum supply in the UK and elsewhere. A solution to the problem may lie in the production of gypsum from a by-product of steelmaking, which would afford a potential opportunity for commercialisation at Port Talbot. This research applies the findings of ‘A method of producing calcium sulphate from LD slag waste produced during the recovery of metallic iron from LD slag’, for which patent 572/KOL/2014 has been filed, to assess the efficiency of yellow gypsum synthesis from BOF slag and to determine the feasibility of commercialising this process at the Port Talbot steelworks. To provide this knowledge, an assessment of the chemical composition and particle size distribution of the BOF slag produced at the Port Talbot steelworks was undertaken, and methods were developed to assess the efficiency of the process. X-ray fluorescence analysis was undertaken on the BOF slag samples acquired and on the synthetic yellow gypsum produced to determine the calcium conversion at the particle size distributions defined in the thesis. Cost and market analyses were also undertaken to determine the feasibility of commercialisation at the Port Talbot steelworks. This study therefore confirmed that commercialisation of this process at the Port Talbot steelworks is feasible but would require large-scale operation and further processing of the synthetic yellow gypsum produced. In addition, processing the synthetic yellow gypsum into products for the agriculture and construction industries would provide a higher-value final product.
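    The abstract's efficiency metric is a calcium conversion determined from XRF compositions. As a purely illustrative mass-balance sketch (all masses and CaO fractions below are invented, not taken from the thesis), the conversion could be computed as:

```python
# Hypothetical illustration of a calcium conversion efficiency from
# XRF-measured CaO mass fractions. All figures are invented examples,
# not data from the thesis.

def calcium_conversion(slag_mass_g, slag_cao_frac, gypsum_mass_g, gypsum_cao_frac):
    """Fraction of the calcium in the BOF slag feed that ends up in the
    synthetic yellow gypsum product, via a simple CaO mass balance."""
    ca_in = slag_mass_g * slag_cao_frac       # CaO mass entering in slag
    ca_out = gypsum_mass_g * gypsum_cao_frac  # CaO mass leaving in gypsum
    return ca_out / ca_in

# Example: 100 g slag at 45 wt% CaO yielding 120 g gypsum at 30 wt% CaO
eff = calcium_conversion(100.0, 0.45, 120.0, 0.30)
print(f"Calcium conversion: {eff:.1%}")  # Calcium conversion: 80.0%
```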

    Crustal Composition Beneath Southern Idaho: Insights from Teleseismic Receiver Functions

    Receiver functions derived from teleseismic earthquakes contain seismic amplitude and velocity information that relates to compositional changes within the Earth’s crust and upper mantle. The receiver function waveform is a combination of P-S converted waves that have reverberated within the lithosphere. Although the largest seismic velocity boundary is found at the base of the crust, I explore the use of lower-amplitude receiver function arrivals that represent smaller velocity contrasts within the crust. In my thesis, I calculate and model receiver functions via a Metropolis algorithm approach to extract seismic velocity distributions in the lithosphere. I use the results to explore changing lithologies and heat signatures beneath the geologically complex southern Idaho region. In addition to a robust crustal thickness estimate for my study area, I show anomalously thick crust beneath the 14 Ma track of the Yellowstone hotspot compared to the surrounding regions, a thinner crust beneath the Oregon-Idaho graben and the Basin and Range province, and a distinct boundary between the Basin and Range and middle Rocky Mountains provinces. I highlight a high-velocity zone between 6 and 14 km depth that is consistent with the presence of mid-crustal sills beneath the hotspot track, partial melt within the Yellowstone caldera, and relatively low velocities at seismogenic depths within the tectonic parabola of eastern Idaho. Anomalously slow velocities in the lower to middle crust beneath the southern margin of the western Snake River Plain are coincident with high heat flow values and high total magnetic field values, offering the possibility of partially melted dike or sill complexes in the mid to lower crust. I utilize legacy active-source refraction data to compare with the receiver function results and further constrain seismic velocities.
Overall, I find that receiver function analyses using a Metropolis algorithm inversion approach to estimate seismic velocity distributions show results below 6 km that are consistent with other studies. This approach offers the possibility of complementing large-scale refraction experiments with low-cost receiver function analysis by utilizing earthquake waveforms from both permanent and temporary seismic deployments to constrain mid- to lower-crustal properties. I discuss the use of this method as a tool for geothermal exploration by constraining crustal lithologies and identifying the presence of partial melt.
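    The thesis's actual inversion samples layered velocity profiles against receiver-function waveforms; as a much-reduced toy illustration of the Metropolis accept/reject mechanics it relies on, the sketch below infers a single crustal velocity from synthetic travel times (all numbers and the forward model are invented for illustration):

```python
import math
import random

random.seed(0)

# Toy Metropolis sampler: infer a single crustal P-wave velocity (km/s)
# from synthetic travel times over known path lengths. A real receiver-
# function inversion samples layered velocity profiles; this sketch only
# illustrates the accept/reject mechanics. All numbers are invented.

paths_km = [30.0, 60.0, 90.0]
true_v = 6.2
obs = [d / true_v + random.gauss(0, 0.05) for d in paths_km]  # noisy times (s)

def log_likelihood(v):
    sigma = 0.05  # assumed travel-time noise (s)
    return -sum((t - d / v) ** 2 for t, d in zip(obs, paths_km)) / (2 * sigma ** 2)

v = 5.0                    # starting model
samples = []
for _ in range(20000):
    v_new = v + random.gauss(0, 0.1)          # symmetric random-walk proposal
    if 4.0 < v_new < 8.0:                     # uniform prior bounds
        delta = log_likelihood(v_new) - log_likelihood(v)
        if delta >= 0 or random.random() < math.exp(delta):
            v = v_new                         # accept; otherwise keep v
    samples.append(v)

post = samples[5000:]                         # discard burn-in
print(f"Posterior mean velocity: {sum(post) / len(post):.2f} km/s")
```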

    Electricity deregulation and the valuation of visibility loss in wilderness areas: A research note.

    Visibility in most wilderness areas in the northeastern United States has declined substantially since the 1970s. As noted by Hill et al. (2000), despite the 1977 Clean Air Act and subsequent amendments, human-induced smog conditions are becoming increasingly worse. Average visibility in Class I airsheds, such as the Great Gulf Wilderness in New Hampshire’s White Mountains, is now about one-third of natural conditions. A particular concern is that deregulation of electricity production could result in further degradation because consumers may switch to lower-cost fossil fuel generation (Harper 2000). To the extent that this system reduces electricity costs, it may also affect firm location decisions (Halstead and Deller 1997). Yet little is known about the extent to which consumers are likely to make tradeoffs between electric bills and reduced visibility in nearby wilderness areas. This applied research uses a contingent valuation approach in an empirical case study of consumers’ tradeoffs between cheaper electric bills and reduced visibility in New Hampshire’s White Mountains. We also examine some of the problems associated with uncertainty in this type of analysis; that is, how confident respondents are in their answers to the valuation questions. Finally, policy implications of decreased visibility due to electricity deregulation are discussed.

    The Effect of Occlusal Pressure on Vertical Force Generation on Dentate and Edentulous Subjects

    Objectives: To determine 1) the difference in lower extremity power obtained from a vertical jump between individuals who are dentate or edentulous (implants or dentures) and 2) the difference occlusal pressure makes in lower body force output. Variables included jump height, maximum kicking force, maximum push-off force, take-off velocity, flight time, jump impulse, and maximum velocity. Groups included dentate, implant overdenture, and conventional denture. Methods: Countermovement jumping with force plate collection was used to test the three populations in this study: dentate, mandibular implant overdenture opposing a maxillary complete denture, and conventional maxillary and mandibular complete dentures. The dentate group was asked to clench their teeth at the beginning of and throughout the countermovement jump. The edentulous groups were asked to clench with their prostheses in and to relax with them out. Results: The force variables tested showed significant differences between the three test groups. Maximum velocity showed a significant difference only between the dentate group and the other two groups, not between the implant and conventional groups. No significant difference was found between the force variables and occlusal pressure. Conclusions: In the present study, a significant difference was found between the three dental condition groups: edentulous, implant, and dentate. The results indicate that the oral rehabilitation method did impact lower body force generation in all of the variables, with the exception of maximum velocity, which only showed significance between edentulous and dentate.
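    The abstract does not state how the jump variables were derived from the force-plate data; two standard countermovement-jump derivations (not necessarily the study's own pipeline, and with invented example values) are jump height from flight time and take-off velocity from net impulse:

```python
# Standard force-plate derivations for countermovement jumps.
# Illustrative only; the example numbers are invented.

G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_flight):
    """Jump height from flight time: h = g * t^2 / 8."""
    return G * t_flight ** 2 / 8.0

def takeoff_velocity_from_impulse(net_impulse, body_mass):
    """Take-off velocity from net vertical impulse: v = J / m
    (impulse-momentum theorem)."""
    return net_impulse / body_mass

h = jump_height_from_flight_time(0.5)            # 0.5 s of flight
v = takeoff_velocity_from_impulse(180.0, 75.0)   # 180 N*s net impulse, 75 kg
print(f"height = {h:.3f} m, takeoff velocity = {v:.2f} m/s")
# height = 0.307 m, takeoff velocity = 2.40 m/s
```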

    Quantum Nescimus: Improving the characterization of quantum systems from limited information

    We are currently approaching the point where quantum systems with 15 or more qubits will be controllable with high levels of coherence over long timescales. One of the fundamental problems that has been identified is that, as the number of qubits increases to these levels, there is currently no clear way to efficiently use the information that can be obtained from such a system to make diagnostic inferences and to enable improvements in the underlying quantum gates. Even with systems of only a few qubits, the exponential scaling in resources required by techniques such as quantum tomography or gate-set tomography will render these techniques impractical. Randomized benchmarking (RB) is a technique that will scale in a practical way with these increased system sizes. Although RB provides only a partial characterization of the quantum system, recent advances in the protocol and in the interpretation of the results of such experiments confirm that the information obtained is helpful in improving the control and verification of such processes. This thesis examines and extends the techniques of RB, including practical analysis of systems affected by low-frequency noise, extending techniques to allow the anisotropy of noise to be isolated, and showing how additional gates required for universal computation can be added to the protocol and thus benchmarked. Finally, it begins to explore the use of machine learning to aid in the ability to characterize, verify and validate noise in such systems, demonstrating by way of example how machine learning can be used to explore the edge between quantum non-locality and realism.
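    For readers unfamiliar with RB, the standard zeroth-order model (not a result of this thesis, just the textbook form) fits the average sequence fidelity to an exponential decay in sequence length m, F(m) = A·pᵐ + B, from which the average gate error follows. A minimal sketch, with invented parameter values:

```python
# Zeroth-order randomized-benchmarking decay model: F(m) = A * p**m + B,
# where m is the Clifford sequence length. The average gate error is
# r = (1 - p) * (d - 1) / d with d = 2**n_qubits. Parameter values here
# are invented for illustration.

def rb_decay(m, A=0.5, p=0.99, B=0.5):
    return A * p ** m + B

# With noiseless model values at three consecutive lengths, p follows
# from successive differences: (F(m+1) - F(m+2)) / (F(m) - F(m+1)) = p.
F = [rb_decay(m) for m in (10, 11, 12)]
p_est = (F[1] - F[2]) / (F[0] - F[1])

d = 2  # single qubit
r = (1 - p_est) * (d - 1) / d
print(f"recovered p = {p_est:.4f}, average gate error r = {r:.4f}")
# recovered p = 0.9900, average gate error r = 0.0050
```

In a real experiment F(m) is noisy and p is obtained by a nonlinear least-squares fit over many sequence lengths rather than from three points.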

    TINKERING WITH VALUATION ESTIMATES: IS THERE A FUTURE FOR WILLINGNESS TO ACCEPT MEASURES?

    This paper examines various methods proposed in the literature to calibrate welfare measures, especially willingness to accept (WTA) and willingness to pay (WTP), derived from contingent valuation surveys. Through simulation and a case study, we hope to provide guidance for empirical welfare measurement in response to the theoretical dispute regarding WTA/WTP disparities.

    Concerns about Least Squares Estimation for the Three-Parameter Weibull Distribution: Case Study of Statistical Software

    Least squares estimation of the 2-parameter Weibull distribution is straightforward; however, there are multiple methods for least squares estimation of the 3-parameter Weibull. The third parameter shifts the origin from 0 to some generally positive value, sometimes called the location, threshold, or minimum life. The different methods used by statistical packages result in substantial differences in the estimated parameters between packages. This may have implications for those needing to estimate or apply the results of the 3-parameter Weibull distribution, which is used frequently in practice. The results are analyzed in detail based on an experimental design using pseudo-random Weibull data sets.
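    One common least-squares approach (a sketch of the general technique, not necessarily any particular package's implementation) grid-searches the threshold gamma and, for each candidate, fits shape and scale by linear regression on the Weibull plot, ln(-ln(1 - F_i)) = beta·ln(x_i - gamma) - beta·ln(eta). Packages differ in their plotting positions and in how gamma is chosen, which is one source of the discrepancies the abstract describes:

```python
import math

# Least-squares fit of a 3-parameter Weibull via a grid search over the
# threshold (location) gamma. For each candidate gamma, shape beta and
# scale eta come from linear regression on Weibull-plot coordinates.
# Illustrative sketch; plotting positions use Benard's median-rank
# approximation, and the "data" are generated deterministically.

def weibull3_sample(n, beta, eta, gamma):
    """Deterministic pseudo-sample: inverse CDF at median-rank quantiles."""
    return [gamma + eta * (-math.log(1.0 - (i - 0.3) / (n + 0.4))) ** (1.0 / beta)
            for i in range(1, n + 1)]

def fit_weibull3(x, gammas):
    """Return (gamma, beta, eta) minimizing the Weibull-plot residual SSE."""
    x = sorted(x)
    n = len(x)
    F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]  # median ranks
    y = [math.log(-math.log(1.0 - f)) for f in F]
    best = None
    for g in gammas:
        if g >= x[0]:          # threshold must lie below the smallest datum
            continue
        t = [math.log(xi - g) for xi in x]
        tb, yb = sum(t) / n, sum(y) / n
        beta = (sum((ti - tb) * (yi - yb) for ti, yi in zip(t, y))
                / sum((ti - tb) ** 2 for ti in t))
        eta = math.exp(tb - yb / beta)
        sse = sum((yi - beta * (ti - math.log(eta))) ** 2
                  for ti, yi in zip(t, y))
        if best is None or sse < best[0]:
            best = (sse, g, beta, eta)
    _, g, beta, eta = best
    return g, beta, eta

x = weibull3_sample(50, beta=2.0, eta=10.0, gamma=5.0)
gammas = [4.0 + k / 20.0 for k in range(40)]              # 4.00 .. 5.95
g_hat, beta_hat, eta_hat = fit_weibull3(x, gammas)
print(f"gamma = {g_hat:.2f}, beta = {beta_hat:.2f}, eta = {eta_hat:.2f}")
```

Replacing the grid search with, say, a profile-likelihood choice of gamma or different plotting positions will shift all three estimates, which is exactly the sensitivity the study investigates.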
