Search for lepton flavour violation in the eμ continuum with the ATLAS detector in √s = 7 TeV pp collisions at the LHC
This paper presents a search for the t-channel exchange of an R-parity-violating scalar top quark (t̃) in the e±μ∓ continuum using 2.1 fb⁻¹ of data collected by the ATLAS detector in √s = 7 TeV pp collisions at the Large Hadron Collider. The data are found to be consistent with the expectation from Standard Model backgrounds. Limits on R-parity-violating couplings at 95% C.L. are calculated as a function of the scalar top mass (m_t̃). The upper limits on the production cross section for pp → eμX, through the t-channel exchange of a scalar top quark, range from 170 fb for m_t̃ = 95 GeV to 30 fb for m_t̃ = 1000 GeV.
Theoretical Calculation of the Power Spectra of the Rolling and Yawing Moments on a Wing in Random Turbulence
The correlation functions and power spectra of the rolling and yawing moments on an airplane wing due to the three components of continuous random turbulence are calculated. The rolling moments due to the longitudinal (horizontal) and normal (vertical) components depend on the spanwise distributions of instantaneous gust intensity, which are taken into account by using the inherent symmetry properties of isotropic turbulence. The results consist of expressions for the correlation functions or spectra of the rolling moment in terms of the point correlation functions of the two components of turbulence. Specific numerical calculations are made for a pair of correlation functions given by simple analytic expressions which fit available experimental data quite well. Calculations are made for four lift distributions. Comparison is made with the results of previous analyses which assumed random turbulence along the flight path and linear variations of gust velocity across the span.
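The passage from correlation functions to power spectra described above rests on the standard Wiener–Khinchin transform pair for a stationary random process; conventions for the normalization vary between texts, but a common form is:

```latex
\Phi(\omega) \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} R(\tau)\, e^{-i\omega\tau}\, d\tau,
\qquad
R(\tau) \;=\; \int_{-\infty}^{\infty} \Phi(\omega)\, e^{i\omega\tau}\, d\omega
```

Here R(τ) is the correlation function of the moment (or gust) time history and Φ(ω) its power spectrum; the specific spanwise-averaged forms used in the report follow from applying this relation to the lift-weighted gust input.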
Solid Surface Combustion Experiment Yields Significant Observations
The spread of a flame over solid fuel is not only a fundamental textbook combustion phenomenon, but also the central element of destructive fires that cause tragic loss of life and property each year. Throughout history, practical measures to prevent and fight fires have been developed, but these have often been based on lessons learned in a costly fire. Since the 1960s, scientists and engineers have employed powerful tools of scientific research to understand the details of flame spread and how a material can be rendered nonflammable. High-speed computers have enabled complex flame simulations, and lasers have provided measurements of the chemical composition, temperature, and air velocities inside flames. The microgravity environment has emerged as the third great tool for these studies. Spreading flames are complex combinations of chemical reactions and several physical processes, including the transport of oxygen and fuel vapor to the flame and the transfer of heat from the flame to fresh fuel and to the surroundings. Depending on its speed, air motion in the vicinity of the flame can affect the flame in substantially different ways. For example, consider the difference between blowing on a campfire and blowing out a match. On Earth, gravity induces air motion because of buoyancy (the familiar rising hot gases); this process cannot be controlled experimentally. For theoreticians, buoyant air motion complicates the modeling of flame spread beyond the capacity of modern computers to simulate. The microgravity environment provides experimental control of air motion near spreading flames, with results that can be compared with detailed theory. The Solid Surface Combustion Experiment (SSCE) was designed to obtain benchmark flame-spreading data in quiescent test atmospheres--the limiting case of flame spread.
Professor Robert Altenkirch, Vice President for Research at Mississippi State University, proposed the experiment concept, and the NASA Lewis Research Center designed, built, and tested the SSCE hardware. It was the first microgravity science experiment built by Lewis for the space shuttle and the first combustion science experiment flown in space
Measurement of dijet production with a veto on additional central jet activity in pp collisions at √s = 7 TeV using the ATLAS detector
A measurement of jet activity in the rapidity interval bounded by a dijet system is presented. Events are vetoed if a jet with transverse momentum greater than 20 GeV is found between the two boundary jets. The fraction of dijet events that survive the jet veto is presented for boundary jets that are separated by up to six units of rapidity and with mean transverse momentum 50 < p_T < 500 GeV. The mean multiplicity of jets above the veto scale in the rapidity interval bounded by the dijet system is also presented as an alternative method for quantifying perturbative QCD emission. The data are compared to a next-to-leading-order plus parton shower prediction from the POWHEG-BOX, an all-order resummation using the HEJ calculation, and the PYTHIA, HERWIG++ and ALPGEN event generators. The measurement was performed using pp collisions at √s = 7 TeV using data recorded by the ATLAS detector in 2010.
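The "gap fraction" observable described above can be sketched in a few lines. This is an illustrative toy, not the ATLAS analysis code: events are assumed to be lists of (pT, rapidity) jets already ordered by pT, and the jet values are hypothetical.

```python
# Toy sketch of the gap-fraction observable: the fraction of dijet
# events with no additional jet above the veto scale in the rapidity
# interval bounded by the two leading (boundary) jets.

VETO_PT = 20.0  # GeV, the veto scale quoted in the measurement

def gap_fraction(events, veto_pt=VETO_PT):
    """events: iterable of lists of (pt_GeV, rapidity) jets, pt-ordered."""
    dijet, passed = 0, 0
    for jets in events:
        if len(jets) < 2:
            continue  # need a dijet system
        (pt1, y1), (pt2, y2) = jets[0], jets[1]
        lo, hi = min(y1, y2), max(y1, y2)
        dijet += 1
        # veto if any further jet above threshold lies between the boundary jets
        vetoed = any(pt > veto_pt and lo < y < hi for pt, y in jets[2:])
        if not vetoed:
            passed += 1
    return passed / dijet if dijet else 0.0

# Hypothetical events: one is vetoed (25 GeV jet inside the gap),
# two survive (soft jet below the veto scale; no extra jets at all).
events = [
    [(120.0, -1.5), (95.0, 2.0), (25.0, 0.3)],
    [(80.0, -2.0), (60.0, 1.0), (15.0, 0.0)],
    [(200.0, 0.5), (150.0, 3.0)],
]
print(gap_fraction(events))  # 2 of 3 dijet events survive -> 0.666...
```

The mean multiplicity of jets above the veto scale in the gap, mentioned as the alternative observable, would be computed analogously by counting rather than vetoing.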
Modelling outburst floods from moraine-dammed glacial lakes
In response to climatic change, the size and number of moraine-dammed supraglacial and proglacial lake systems have increased dramatically in recent decades. Given an appropriate trigger, the natural moraine dams that impound these proglacial lakes are breached, producing catastrophic Glacial Lake Outburst Floods (GLOFs). These floods are highly complex phenomena, with flood characteristics controlled, in the first instance, by the style of breach formation. Downstream, GLOFs typically exhibit transient, often non-Newtonian fluid dynamics as a result of high rates of sediment entrainment from the dam structure and channel boundaries. Combined, these characteristics introduce numerous modelling challenges. In this review, the historical, contemporary and emerging approaches available to model the individual stages, or components, of a GLOF event are introduced and discussed.
A number of methods exist to model the stages of a GLOF event. Dam-breach models can be categorised as being empirical, analytical or numerical in nature, with each method having significant advantages and shortcomings. Empirical relationships that produce estimates of peak discharge and time to peak are straightforward to implement, but the applicability of these models is often limited by the nature of the case study data from which they are derived. Furthermore, empirical models neglect the inclusion of basic hydraulic principles that describe the mechanics of breach formation. Analytical or parametric models simulate breach development using simplified versions of the physically based equations that describe breach enlargement, whilst complex, physically-based codes represent the state-of-the-art in numerical dam-breach modelling. To date, few of the latter have been applied to investigate the moraine-dam failure problem.
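The empirical dam-breach relations discussed above typically regress peak discharge against lake volume and dam height. The sketch below shows the generic power-law form such regressions take; the coefficients are illustrative placeholders, not values from any published study, and the lake parameters are hypothetical.

```python
# Generic empirical dam-breach regression: peak discharge as a
# power law in the product of impounded volume and dam height.
# Coefficients k and a are illustrative placeholders only; published
# regressions fit them to case-study data, which (as noted above)
# limits their applicability outside that data.

def peak_discharge(volume_m3, dam_height_m, k=0.01, a=0.5):
    """Estimate peak breach discharge (m^3/s) from lake volume (m^3)
    and moraine-dam height (m) via Q_p = k * (V * H) ** a."""
    return k * (volume_m3 * dam_height_m) ** a

# Hypothetical moraine-dammed lake: 5e6 m^3 impounded behind a 40 m dam.
q = peak_discharge(5.0e6, 40.0)
print(round(q, 1))  # ~141.4 m^3/s with these placeholder coefficients
```

Note that such relations return only a peak discharge and time to peak; as the text observes, they encode no hydraulics of breach enlargement, which is what the analytical and physically based codes add.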
Despite significant advances in the physical complexity and availability of higher-order hydrodynamic solvers, the majority of published accounts that have attempted to reconstruct or predict GLOF characteristics have been limited, often by necessity, to the use of relatively simplistic models. This is in part attributable to the unavailability of terrain models of many high-mountain catchments at the fine spatial resolutions required for the effective application of numerically-sophisticated codes, and their proprietary (and often cost-prohibitive) nature. However, advanced models are experiencing increasing use in the glacial hazards literature. In particular, the suitability of emerging mesh-free, particle-based methods for simulating dam-breach and GLOF routing may represent a solution to many of the challenges associated with modelling this complex phenomenon.
Sources of uncertainty in the GLOF modelling chain have been identified by various workers. However, to date their significance for the robustness of reconstructive and predictive modelling efforts has remained largely unexplored and has not been quantified in detail. These sources include the geometric and material characterisation of moraine dam complexes, including lake bathymetry and the presence and extent of buried ice; initial conditions (freeboard, precise spillway dimensions); spatial discretisation of the down-valley domain; hydrodynamic model dimensionality; and the dynamic coupling of successive components in the GLOF model cascade.
Augustine State: The Information Technology Component of Going Online
Finding cases appropriate for teaching information technology courses is a persistent problem for instructors. This article is written as a teaching case and is intended for use by instructors in educating students about information technology. The case information is factual and deals with genuine information technology situations at a real-world educational institution. The case offers the opportunity to generate student discussion in the areas of corporate and IT governance, IT infrastructure, IT architecture, open source software, outsourcing and other topics
Evaluating Software Development: A Case Study with Pasture Land Management (PLMS) Grazing Software
A process for evaluating and improving public domain software is presented for agents and faculty who author software and Web-based training. Extension, education, and conservation employees participated in workshops to learn about a Pasture Land Management System software program that enables farmers to experiment with alternative grazing methods. Users were questioned at initial workshop training and again 6 months later. The workshop evaluation showed concern about the software complexity. The follow-up questionnaire revealed the respondents' priorities for technical improvements. The authors used the participants' feedback to evaluate existing problems and prioritize improvements in the usability and functionality of the software.