Experimental determination of cosmic ray charged particle intensity profiles in the atmosphere
Absolute cosmic-ray free air ionization and charged particle fluxes and dose rates throughout the atmosphere were measured on a series of balloon flights that commenced in 1968. Argon-filled ionization chambers equipped with solid-state electrometers, with different gas pressures and steel wall thicknesses, and a pair of aluminum-wall GM counters have provided the basic data. These data are supplemented by measurements with air-filled and tissue-equivalent ionization chambers and a scintillation spectrometer. Laboratory experiments, together with analyses of the theoretical aspects of the detector responses to cosmic radiation, indicate that these profiles can be determined to an overall accuracy of ±5 percent.
Phase II of the ASCE Benchmark Study on SHM
The task group on structural health monitoring of the Dynamics Committee of ASCE was formed in
1999 at the 12th Engineering Mechanics Conference. The task group has designed a number of analytical
studies on a benchmark structure and there are plans to follow these with an experimental program. The
first phase of the analytical studies was completed in 2001. The second phase, initiated in the summer of
2001, was formulated in the light of the experience gained on phase I and focuses on increasing realism in
the simulation of the discrepancies between the actual structure and the mathematical model used in the
analysis. This paper describes the rationale that led the SHM task group to the definition of phase II and
presents the details of the cases that are being considered.
Infrared Emission from Clusters in the Starforming Disk of He2-10
We have made subarcsecond-resolution images of the central 10" of the
Wolf-Rayet dwarf galaxy He 2-10 at 11.7 microns, using the Long Wavelength
Spectrometer on the Keck Telescope. The spatial distribution of the infrared
emission roughly agrees with that of the rising spectrum radio sources seen by
Kobulnicky & Johnson (1999) and confirms that those sources are compact HII
regions rather than supernova remnants or other objects. The infrared sources are more
extended than the subarcsecond rising spectrum radio sources, although the
entire complex is still less than 5" in extent. On size scales of 1" the
infrared and radio emission are in excellent agreement, with each source
requiring several hundred to a thousand O stars for excitation. The nebulae lie
in a flattened disk-like distribution about 240 by 100 pc and provide all of
the flux measured by IRAS for the entire galaxy in the 12 micron band; 30% of
the total IRAS flux from the galaxy emanates from one 15-30 pc source. In this
galaxy, intense star formation, probably triggered by an accretion event, is
confined to a central disk which breaks up into distinct nebulae which
presumably mark the sites of young super star clusters.
Comment: Accepted for publication in the Astronomical Journal.
Coulomb Charging Effects for Finite Channel Number
We consider quantum fluctuations of the charge on a small metallic grain
caused by virtual electron tunneling to a nearby electrode. The average
electron number and the effective charging energy are determined by means of
perturbation theory in the tunneling Hamiltonian. In particular we discuss the
dependence of charging effects on the number N of tunneling channels. Earlier
results for N >> 1 are found to be approached rather rapidly with increasing N.
Comment: 6 pages, 5 figures.
Prospective cognitions in anxiety and depression: Replication and methodological extension
The present study presents a replication and methodological extension of MacLeod, Tata, Kentish, and Jacobsen (1997) with a nonclinical sample, using future-directed imagery to assess prospective cognitions. Results showed that only anxiety (but not depression) was related to enhanced imagery for future negative events. Both anxiety and depression showed significant zero-order correlations with reduced imagery for future positive events. However, when the overlap between anxiety and depression was controlled for, only depression (but not anxiety) showed a unique association with reduced imagery for positive events. Implications of these findings for cognitive models of anxiety and depression are discussed.
Application of A Distributed Nucleus Approximation In Grid Based Minimization of the Kohn-Sham Energy Functional
In the distributed nucleus approximation we represent the singular nucleus as
smeared over a small portion of a Cartesian grid. Delocalizing the nucleus
allows us to solve the Poisson equation for the overall electrostatic potential
using a linear scaling multigrid algorithm. This work is done in the context of
minimizing the Kohn-Sham energy functional directly in real space with a
multiscale approach. The efficacy of the approximation is illustrated by
locating the ground state density of simple one-electron atoms and molecules
and more complicated multiorbital systems.
Comment: Submitted to JCP (July 1, 1995 issue), LaTeX, 27 pages, 2 figures.
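The smearing-then-solve idea above can be illustrated with a minimal sketch: a unit point charge is replaced by a narrow Gaussian on a periodic Cartesian grid and the Poisson equation is solved for the potential. The paper uses a linear-scaling multigrid solver; the FFT solve, grid size, and smearing width below are our own illustrative choices, not the authors'.

```python
import numpy as np

# Sketch: smear a unit "nucleus" into a Gaussian charge density on a
# periodic grid, then solve  lap(V) = -4*pi*rho  (atomic units) via FFT.
n, L = 64, 10.0                     # grid points per axis, box length (assumed)
h = L / n
ax = np.arange(n) * h
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
c, sigma = L / 2, 3 * h             # nucleus centre and smearing width (assumed)

r2 = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2
rho = np.exp(-r2 / (2 * sigma ** 2))
rho /= rho.sum() * h ** 3           # normalise to total charge 1

k = 2 * np.pi * np.fft.fftfreq(n, d=h)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2 + kz ** 2
k2[0, 0, 0] = 1.0                   # avoid 0/0 for the mean (k = 0) mode

V_hat = 4 * np.pi * np.fft.fftn(rho) / k2
V_hat[0, 0, 0] = 0.0                # fix the arbitrary additive constant
V = np.real(np.fft.ifftn(V_hat))    # smooth, finite potential everywhere
```

Because the charge is delocalized, the resulting potential is smooth on the grid scale, which is what makes a grid-based (here FFT, in the paper multigrid) solve well behaved.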
Joint Emotion Analysis via Multi-task Gaussian Processes
We propose a model for jointly predicting
multiple emotions in natural language sentences.
Our model is based on a low-rank
coregionalisation approach, which combines
a vector-valued Gaussian Process
with a rich parameterisation scheme. We
show that our approach is able to learn
correlations and anti-correlations between
emotions on a news headlines dataset. The
proposed model outperforms both single-task
baselines and other multi-task approaches.
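The low-rank coregionalisation idea can be sketched as an intrinsic coregionalisation model (ICM) kernel, K = B ⊗ k(X, X), where B = W Wᵀ + diag(κ) is a low-rank-plus-diagonal task covariance. All names, shapes, and the rank-1 RBF choice below are our own illustration, not the paper's parameterisation.

```python
import numpy as np

def rbf(X, lengthscale=1.0):
    # Squared-exponential kernel on the input points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
n_points, n_tasks, rank = 5, 3, 1      # e.g. 3 emotions, rank-1 coregionalisation
X = rng.normal(size=(n_points, 2))
W = rng.normal(size=(n_tasks, rank))
kappa = np.full(n_tasks, 0.1)

B = W @ W.T + np.diag(kappa)           # task (coregionalisation) matrix
K = np.kron(B, rbf(X))                 # full multi-task GP covariance

# Off-diagonal entries of B are the learned inter-task covariances; negative
# entries correspond to the anti-correlations between emotions mentioned above.
```

In practice W and κ are fitted by maximising the marginal likelihood alongside the data-kernel hyperparameters.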
Distributed-Pair Programming can work well and is not just Distributed Pair-Programming
Background: Distributed Pair Programming can be performed via screen sharing
or via a distributed IDE. The latter offers the freedom of concurrent editing
(which may be helpful or damaging) and has even more awareness deficits than
screen sharing. Objective: Characterize how competent distributed pair
programmers may handle this additional freedom and these additional awareness
deficits and characterize the impacts on the pair programming process. Method:
A revelatory case study, based on direct observation of a single, highly
competent distributed pair of industrial software developers during a 3-day
collaboration. We use recordings of these sessions and conceptualize the
phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits
without visible obstruction of the overall process. 2. Skilled pairs may use
the additional editing freedom in a useful limited fashion, resulting in
potentially better fluency of the process than local pair programming.
Conclusion: When applied skillfully in an appropriate context, distributed-pair
programming can (not will!) work at least as well as local pair programming.
Simulation of an 1857-like Mw 7.9 San Andreas Fault Earthquake and the Response of Tall Steel Moment Frame Buildings in Southern California – A Prototype Study
In 1857, an earthquake of magnitude 7.9 occurred on the San Andreas fault, starting at Parkfield and rupturing
in a southeasterly direction for more than 360 km. Such a unilateral rupture produces significant directivity
toward the San Fernando and Los Angeles basins. The strong shaking in the basins due to this earthquake
would have had significant long-period content (2-8 s), and the objective of this study is to quantify the impact
of such an earthquake on two 18-story steel moment frame building models, hypothetically located at 636 sites
on a 3.5 km grid in southern California. End-to-end simulations include modeling the source and rupture of a
fault at one end, numerically propagating the seismic waves through the earth structure, simulating the damage
to engineered structures and estimating the economic impact at the other end using high-performance computing.
In this prototype study, we use an inferred finite source model of the magnitude 7.9, 2002 Denali fault
earthquake in Alaska, and map it onto the San Andreas fault with the rupture originating at Parkfield and
propagating southward over a distance of 290 km. Using the spectral element seismic wave propagation code,
SPECFEM3D, we simulate an 1857-like earthquake on the San Andreas fault and compute ground motions at
the 636 analysis sites. Using the nonlinear structural analysis program, FRAME3D, we subsequently analyze
3-D structural models of an existing tall steel building designed using the 1982 Uniform Building Code (UBC),
as well as one designed according to the 1997 UBC, subjected to the computed ground motion at each of these
sites. We summarize the performance of these structural models on contour maps of peak interstory drift.
We then perform an economic loss analysis for the two buildings at each site, using the Matlab Damage and
Loss Analysis (MDLA) toolbox developed to implement the PEER loss-estimation methodology. The toolbox
includes damage prediction and repair cost estimation for structural and non-structural components and allows
for the computation of the mean and variance of building repair costs conditional on engineering demand
parameters (i.e. inter-story drift ratios and peak floor accelerations). Here, we modify it to treat steel-frame
high-rises, including aspects such as mechanical, electrical and plumbing systems, traction elevators, and the
possibility of irreparable structural damage. We then generate contour plots of conditional mean losses for the
San Fernando and the Los Angeles basins for the pre-Northridge and modern code-designed buildings, allowing
for comparison of the economic effects of the updated code for the scenario event. In principle, by simulating
multiple seismic events, consistent with the probabilistic seismic hazard for a building site, the same basic
approach could be used to quantify the uncertain losses from future earthquakes.
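The conditional-loss step described above (mean repair cost given an engineering demand parameter such as peak interstory drift) can be sketched with lognormal fragility curves, a standard ingredient of PEER-style loss estimation. The damage-state medians, dispersion, and cost fractions below are invented for illustration; they are not values from the MDLA toolbox or this study.

```python
import math

# Sketch: lognormal fragilities give P(damage state >= i | drift); expected
# repair cost is the probability-weighted sum over damage states.
def lognormal_cdf(x, median, beta):
    return 0.5 * (1 + math.erf(math.log(x / median) / (beta * math.sqrt(2))))

medians = [0.005, 0.015, 0.04]   # drift medians for DS1..DS3 (hypothetical)
beta = 0.4                       # lognormal dispersion (hypothetical)
costs = [0.1, 0.4, 1.0]          # repair cost fraction per damage state (hypothetical)

def expected_loss(drift):
    # P(exactly DS_i) = P(DS >= i) - P(DS >= i+1), with P(DS >= 4) = 0.
    p_exceed = [lognormal_cdf(drift, m, beta) for m in medians] + [0.0]
    return sum((p_exceed[i] - p_exceed[i + 1]) * costs[i]
               for i in range(len(costs)))
```

Summing such component-level expected losses over a building inventory, at the drift computed for each analysis site, yields the contour maps of conditional mean loss described above.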