Using GIS to Quantify Patterns of Glacial Erosion on Northwest Iceland: Implications for Independent Ice Sheets
Glacial erosion patterns on northwest Iceland are quantified using a Geographic Information System (GIS) in order to interpret subglacial characteristics of the part of northwest Iceland affected by ice sheet glaciation. Ice scour lake density is used as a proxy for glacial erosion, and erosion classes are interpreted from variations in the density of lake basins. Lake density was calculated using two different methods: the first is sensitive to the number of lakes within a given area, and the second to the total lake area within that area. Both methods yield a value for lake density, and the results from the two methods are similar. Areas with the highest density of lakes are interpreted as areas with the most intense erosion, with the exception of alpine regions. The highest density of lakes in the study area exceeds 8% and is located on upland plateaus where mean elevations range from 400 to 800 m a.s.l. Low lake density (0-2%) is observed in alpine areas, where steep topography does not favor lake development. The GIS analysis is combined with geomorphic mapping to provide ground truth for the GIS interpretations and to locate paleo-ice flow indicators and landforms. The patterns identified in this study illustrate distinct regions of glacial erosion and flow paths that are best explained by two independent ice sheets covering northwest Iceland during the Last Glacial Maximum (LGM). Areas of alpine glacial landforms and the presence of nunataks within the glaciated region support interpretations that ice-free regions or cold-based ice cover existed on parts of northwest Iceland during the LGM. The methods developed in this study are easily transferable to other formerly glaciated regions and provide tools to evaluate subglacial properties of former ice sheets. The data generated yield important subglacial boundary conditions for ice sheet models of Iceland.
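The abstract describes the two density measures only qualitatively, so the following is a minimal sketch rather than the authors' actual GIS workflow: it bins lakes into a regular grid of analysis cells and reports, per cell, a count-based density (lakes per unit area) and an area-based density (the fraction of each cell covered by lake basins, the quantity to which figures such as "exceeds 8%" refer). The function name, grid layout, and sample numbers are hypothetical.

    import numpy as np

    def lake_density(grid_shape, cell_area_km2, lakes):
        """Compute two lake-density proxies over a regular grid of analysis cells.

        lakes: iterable of (row, col, area_km2) giving the grid cell each lake
        falls in and its surface area (hypothetical input format).
        Returns (count_density, area_density) arrays, one value per cell.
        """
        count = np.zeros(grid_shape)
        lake_area = np.zeros(grid_shape)
        for row, col, area_km2 in lakes:
            count[row, col] += 1             # method 1: sensitive to the number of lakes
            lake_area[row, col] += area_km2  # method 2: sensitive to total lake area
        count_density = count / cell_area_km2      # lakes per km^2
        area_density = lake_area / cell_area_km2   # dimensionless fraction of cell area
        return count_density, area_density

    # Toy usage: three lakes in a 2 x 2 grid of 25 km^2 analysis cells.
    counts, fractions = lake_density((2, 2), 25.0,
                                     [(0, 0, 0.8), (0, 0, 1.2), (1, 1, 0.5)])
    print(fractions)  # cell (0, 0) holds 2.0 km^2 of lakes -> 0.08, i.e. 8% lake cover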
Beckwith-Wiedemann Syndrome
A five-day-old, 3.3 kg preterm male infant was admitted to the neonatal ward at MNH five hours after birth, referred from a peripheral hospital for further management of a congenital hernia and general care. Examination revealed macroglossia, an earlobe crease, a large exomphalos, a small penis, hypospadias and cryptorchidism. Investigations showed neonatal hypoglycaemia and a distended bowel on X-ray. The ultimate diagnosis was Beckwith-Wiedemann Syndrome. Management of this condition is mainly supportive: maintaining euglycaemia, surgical intervention when indicated, followed by ultrasonographic surveillance for the later development of embryonal neoplasms.
Why Pharmaceutical Firms Support Patent Trolls: The Disparate Impact of eBay v. MercExchange on Innovation
Before the unanimous decision in eBay v. MercExchange, patent holders were almost always granted an injunction against an infringer. In fact, the Federal Circuit, in deciding eBay, noted that, upon a finding of infringement, an injunction would issue unless there were extraordinary circumstances. The Court, in a brief opinion, disagreed with the Federal Circuit and explained that the injunction issue in a patent case must be analyzed under the traditional four-factor test. [...] Is the four-factor test fairer or better than the Federal Circuit's near-automatic injunction rule? It is certainly more difficult to administer a factor test than a bright-line rule. On the other hand, district courts undertake this type of inquiry all the time, although not usually in the context of the complex world of patents and incentives to innovate. At least one member of the Supreme Court, Justice Kennedy, seemed to believe that this change would primarily affect patent trolls. Of course, this statement expressly ignores the opinion of Justice Thomas, who noted that special consideration might be appropriate when the patentee is a university or small inventor.
Prudence in International Strategy: From Lawyerly to Post-Lawyerly
The quality of prudence has long been associated with lawyers, including when they have served in quite broad capacities. Indeed, for an important period in 20th century U.S. history, one of the most distinctive contributions of a certain type of lawyer, referred to here in shorthand as “the New York lawyer-statesman,” was the application of prudence not to the practice of law as such, but to the broader domain of U.S. international strategy and the exercise of the broader quality of “lawyerly prudence.” Drawing on a few discrete chapters in U.S. history, this article recounts and interprets the high-water mark of this New York lawyer-statesman tradition around WWI and WWII, while also arguing that the circumstances that shaped, allowed, and even encouraged such contributions to international strategy by this type of lawyer during the first two-thirds of the 20th century had largely run their course by the final third of “the American century.” This leaves a pressing puzzle to be solved. On the one hand, international strategy remains as much in need of prudence now as ever before, arguably more so because the lawyerly prudent type has been absent from the scene for the last couple of generations. But because New York lawyers can no longer serve as the primary exemplars of lawyerly prudence in this context, we must now turn elsewhere for reliable sources of prudence. The article concludes that to do so we have no choice but to unpack the elements of the old lawyerly prudence and encourage their self-conscious adoption by a broader group of citizen-statespeople who have the right kind of intellectual training and real-world experience to develop and exercise prudence of the old lawyerly kind, even though many or most of such people will not be lawyers.
LEAN FIRE MANAGEMENT: A FOCUSED ANALYSIS OF THE INCIDENT COMMAND SYSTEM BASED ON TOYOTA PRODUCTION SYSTEM PRINCIPLES
A primary role of the Incident Command System is to learn from past incidents, as illustrated by its origins in the wildland firefighting community. Successful emergency response operations under the Incident Command System have prompted its nationwide spread, and this promulgation critically relies on the system's capability to stabilize and continuously improve various aspects of emergency response through effective organizational learning. The objective of this study is to evaluate the potential to apply fundamental principles of the Toyota Production System (Lean manufacturing) to improve learning effectiveness within the Incident Command System. An in-depth review of literature and training documents for both systems revealed common goals and functional similarities, including the importance of continuous improvement. While these similarities point to the validity of applying Lean principles to the Incident Command System, a focus on its systematic learning function revealed gaps in the approaches proposed by the Incident Command System framework. As a result, recommendations are made for adjustments to systematic problem solving that adapt the Lean principles of root cause analysis and the emphasis on standardizing successful countermeasures to benefit the system. Recommendations for future work are also proposed based on the author's understanding of the system.
An Object-Oriented Event Calculus
Despite the rising popularity and usefulness of events, or implicit invocation, in software design, general-purpose event mechanisms are rarely available. Further, most event mechanisms available for software design are implemented as libraries or sets of macros that are constrained by the language in which they are used, making such mechanisms inconvenient to use as well as error-prone. Event mechanisms that are part of a programming language can do away with such constraints, thus making events easier to use. However, few languages offer built-in events, and even fewer have a built-in general-purpose event mechanism. In order to promote the study of implicit invocation programming languages, this thesis presents a formal programming language foundation for events. The thesis extends the object-based imps-calculus to create a calculus for objects and events, the rws-calculus. The rws-calculus has a formal syntax, semantics, and a sound type system, and is useful for defining practical programming languages that include built-in events. This, along with the calculus's ability to easily simulate many different event mechanisms, makes it a good start toward a formal understanding of implicit invocation.
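The abstract contrasts library-based event mechanisms with language-level ones; for reference, the sketch below shows a minimal library-style event mechanism of the kind such a calculus is meant to formalize, with an announcer implicitly invoking its registered handlers. It is an illustrative Python sketch, not part of the thesis, and all names (Event, Button, register, announce) are hypothetical.

    from typing import Callable, List

    class Event:
        """A minimal general-purpose event: handlers register interest and are
        implicitly invoked whenever the event is announced."""

        def __init__(self) -> None:
            self._handlers: List[Callable[..., None]] = []

        def register(self, handler: Callable[..., None]) -> None:
            self._handlers.append(handler)

        def announce(self, *args, **kwargs) -> None:
            # Implicit invocation: the announcer never names its observers.
            for handler in self._handlers:
                handler(*args, **kwargs)

    class Button:
        def __init__(self) -> None:
            self.clicked = Event()  # an event carried as an ordinary object field

        def press(self) -> None:
            self.clicked.announce(self)

    button = Button()
    button.clicked.register(lambda source: print("click logged"))
    button.clicked.register(lambda source: print("view updated"))
    button.press()  # both handlers run without the button knowing about them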
A Fixed-point Method for Computing Steady-state 2D Laser-Fluid Interactions
This research introduces a fixed-point numerical approach for solving the steady-state Navier-Stokes (NS) equations on a finite two-dimensional (2D) domain. The steady-state interaction between a high energy laser beam and its surrounding fluid medium is important to researchers in the field of high energy laser beam propagation. Solutions to the steady-state Navier-Stokes equations provide a model for uncovering the steady-state behavior of the fluid medium, which is useful for modeling thermal blooming in laser beam propagation. Numerical solutions remain the only tenable option for solving the NS equations, so numerical speed and fidelity determine the utility of any such algorithm. Timing and accuracy results from the novel fixed-point algorithm are compared to a standard Newton solver; the fixed-point algorithm implements a series of discrete Poisson solvers through successive fixed-point iterations of fluid velocity (u, v), pressure (p), and temperature (T) in a Boussinesq fluid model. The fixed-point scheme consistently proves superior in computational cost, converging after O(N² log N²) flops compared to the O(N⁶) flops of the Newton solver on a discrete N × N grid. We provide a proof of convergence for small-amplitude solutions, and discuss the relationship between the fluid parameters (Re, Ri, Pe) and the existence of solutions as a function of laser intensity in a bifurcation analysis.
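The abstract does not spell out the update formulas, so the following is only a sketch of the outer fixed-point structure under stated assumptions: each field (u, v, p, T) is repeatedly re-solved from the current values of the others until successive iterates stop changing. In the actual scheme each update would be a discrete Poisson solve; here the updates are placeholder callables, and the toy usage uses simple contraction mappings. All names and the stopping tolerance are hypothetical.

    import numpy as np

    def fixed_point_solve(fields, update_steps, tol=1e-8, max_iter=500):
        """Outer fixed-point loop over named fields.

        fields: dict of arrays, e.g. {"u": ..., "v": ..., "p": ..., "T": ...}
        update_steps: ordered (name, step) pairs; each step maps the current
        fields to a new array for that field (a discrete Poisson solve in the
        paper's setting, any callable here).
        """
        for iteration in range(max_iter):
            change = 0.0
            for name, step in update_steps:
                new = step(fields)
                change = max(change, float(np.max(np.abs(new - fields[name]))))
                fields[name] = new
            if change < tol:  # successive iterates agree -> steady state reached
                return fields, iteration + 1
        raise RuntimeError("fixed-point iteration did not converge")

    # Toy usage: two coupled fields updated by contraction mappings that stand
    # in for the temperature and pressure Poisson solves.
    fields = {"T": np.zeros(4), "p": np.zeros(4)}
    steps = [("T", lambda f: 0.5 * f["p"] + 1.0),
             ("p", lambda f: 0.5 * f["T"])]
    solution, iters = fixed_point_solve(fields, steps)
    print(iters, solution["T"][0], solution["p"][0])  # converges to T = 4/3, p = 2/3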
The Role of Black Hole Feedback on Size and Structural Evolution in Massive Galaxies
We use cosmological hydrodynamical simulations to investigate the role of feedback from accreting black holes on the evolution of sizes, compactness, stellar core density and specific star formation of massive galaxies with stellar masses of . We perform two sets of cosmological zoom-in simulations of 30 halos to z=0: (1) without black holes and Active Galactic Nucleus (AGN) feedback and (2) with AGN feedback arising from winds and X-ray radiation. We find that AGN feedback can alter the stellar density distribution, reduce the core density within the central 1 kpc by 0.3 dex from z=1, and enhance the size growth of massive galaxies. We also find that galaxies simulated with AGN feedback evolve along similar tracks to those characterized by observations in specific star formation versus compactness. We confirm that AGN feedback plays an important role in transforming galaxies from blue compact galaxies into red extended galaxies in two ways: (1) it effectively quenches the star formation, transforming blue compact galaxies into compact quiescent galaxies, and (2) it also removes and prevents new accretion of cold gas, shutting down in-situ star formation and causing subsequent mergers to be gas-poor or mixed. Gas-poor minor mergers then build up an extended stellar envelope. AGN feedback also puffs up the central region through fast AGN-driven winds as well as the slow expulsion of gas while the black hole is quiescent. Without AGN feedback, large amounts of gas accumulate in the central region, triggering star formation and leading to overly massive blue galaxies with dense stellar cores.
Comment: 13 pages, 7 figures, Accepted for publication in Ap
AGN feedback in an isolated elliptical galaxy: the effect of strong radiative feedback in the kinetic mode
Based on a two-dimensional high-resolution hydrodynamic numerical simulation, we study the mechanical and radiative feedback effects of the central AGN on the cosmological evolution of an isolated elliptical galaxy. Physical processes such as star formation and supernovae are considered. The inner boundary of the simulation domain is carefully chosen so that the fiducial Bondi radius is resolved and the accretion rate of the black hole is determined self-consistently. In analogy to previous works, we assume that the specific angular momentum of the galaxy is low. It is well known that at high and low accretion rates the central AGN is in the cold and hot accretion modes, respectively, which correspond to the radiative and kinetic feedback modes. The spectrum emitted by hot accretion flows is harder than that from cold accretion flows, which could result in a higher Compton temperature and hence more efficient radiative heating, according to previous theoretical works. This difference in Compton temperature between the two feedback modes, the focus of this study, has been neglected in previous works. We find significant differences in the kinetic feedback mode as a result of the stronger Compton heating, and accretion becomes more chaotic. More importantly, if we constrain the models to correctly predict black hole growth and AGN duty cycle over cosmological evolution, the favored model parameters are constrained: the mechanical feedback efficiency diminishes with decreasing luminosity (the maximum efficiency being ) and the X-ray Compton temperature increases with decreasing luminosity, although models with fixed mechanical efficiency and Compton temperature can also be found that are satisfactory. We conclude that radiative feedback in the kinetic mode is much more important than previously thought.
Comment: 35 pages, 7 figures, accepted by the Ap