Quantitative behavior of non-integrable systems (III)
The main purpose of the paper is to give explicit geodesics and billiard
orbits in polysquares and polycubes that exhibit time-quantitative density. In
many instances of the 2-dimensional case concerning finite polysquares and
related systems, we can even establish a best possible form of
time-quantitative density called superdensity. In the more complicated
3-dimensional case concerning finite polycubes and related systems, we get very
close to this best possible form, missing only by an arbitrarily small margin.
We also study infinite flat dynamical systems, both periodic and aperiodic,
which include billiards in infinite polysquares and polycubes. In particular,
we can prove time-quantitative density even for aperiodic systems.
Comment: 93 pages, 73 figures
Mirror Descent and Convex Optimization Problems With Non-Smooth Inequality Constraints
We consider the problem of minimizing a convex function on a simple set
with a convex non-smooth inequality constraint, and we describe first-order methods
to solve such problems in different situations: smooth or non-smooth objective
function; convex or strongly convex objective and constraint; deterministic or
randomized information about the objective and constraint. We hope it is
convenient for the reader to have all the methods for these different settings in one
place. The described methods are based on the Mirror Descent algorithm and the switching
subgradient scheme. One of our aims is to propose, for the listed settings, a Mirror
Descent method with adaptive stepsizes and an adaptive stopping rule, meaning that
neither the stepsizes nor the stopping rule requires knowledge of the Lipschitz constant
of the objective or the constraint. We also construct a Mirror Descent method for
problems whose objective function is not Lipschitz continuous, e.g. a quadratic
function. Besides that, we address the problem of recovering a solution of the dual
problem.
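The switching subgradient scheme referred to above is easy to sketch. Below is a minimal Python illustration in the Euclidean (projected subgradient) setup: the ball constraint standing in for the "simple set", the toy objective and linear constraint, and the fixed iteration budget used in place of the paper's adaptive stopping rule are all assumptions made here for brevity, not details taken from the paper.

```python
# Hedged sketch of Mirror Descent with the switching subgradient scheme in the
# Euclidean (projected subgradient) setup.  The ball constraint playing the
# role of the "simple set", the toy objective/constraint, and the fixed
# iteration budget (instead of the paper's adaptive stopping rule) are
# assumptions made for brevity.
import numpy as np

def project_ball(x, radius=10.0):
    """Euclidean projection onto a ball, standing in for the simple set Q."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def switching_mirror_descent(f_grad, g, g_grad, x0, eps, max_iter=20_000):
    """Minimize f over Q subject to g(x) <= 0 with target accuracy eps.

    Productive steps follow a subgradient of f, non-productive steps a
    subgradient of g; the stepsize eps / ||grad||^2 is adaptive in the sense
    that no Lipschitz constant of f or g is needed.
    """
    x = x0.copy()
    weighted_sum = np.zeros_like(x0)
    total_weight = 0.0
    for _ in range(max_iter):
        if g(x) <= eps:                      # productive step
            grad = f_grad(x)
            h = eps / (np.linalg.norm(grad) ** 2 + 1e-16)
            weighted_sum += h * x
            total_weight += h
        else:                                # non-productive step
            grad = g_grad(x)
            h = eps / (np.linalg.norm(grad) ** 2 + 1e-16)
        x = project_ball(x - h * grad)
    # Stepsize-weighted average of productive iterates (assumes at least one).
    return weighted_sum / max(total_weight, 1e-16)

# Toy usage: minimize ||x - c||^2 (not Lipschitz) subject to a^T x <= b.
# A coarse tolerance keeps the toy run short.
c, a, b = np.array([3.0, 2.0]), np.array([1.0, 1.0]), 1.0
x_hat = switching_mirror_descent(
    f_grad=lambda x: 2.0 * (x - c),
    g=lambda x: a @ x - b,
    g_grad=lambda x: a,
    x0=np.zeros(2), eps=0.05)
print(x_hat)   # approaches the constrained minimizer (about [1, 0]) within ~eps
```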
Real-time Loss Estimation for Instrumented Buildings
Motivation. A growing number of buildings have been instrumented to measure and record
earthquake motions and to transmit these records to seismic-network data centers to be archived and
disseminated for research purposes. At the same time, sensors are growing smaller, less expensive to
install, and capable of sensing and transmitting other environmental parameters in addition to
acceleration. Finally, recently developed performance-based earthquake engineering methodologies
employ structural-response information to estimate probabilistic repair costs, repair durations, and
other metrics of seismic performance. The opportunity therefore presents itself to combine these
developments into the capability to estimate automatically in near-real-time the probabilistic seismic
performance of an instrumented building, shortly after the cessation of strong motion. We refer to
this opportunity as (near-) real-time loss estimation (RTLE).
Methodology. This report presents a methodology for RTLE for instrumented buildings. Seismic
performance is to be measured in terms of probabilistic repair cost, precise location of likely physical
damage, operability, and life-safety. The methodology uses the instrument recordings and a Bayesian
state-estimation algorithm called a particle filter to estimate the probabilistic structural response of
the system, in terms of member forces and deformations. The structural response estimate is then
used as input to component fragility functions to estimate the probabilistic damage state of structural
and nonstructural components. The probabilistic damage state can be used to direct structural
engineers to likely locations of physical damage, even if they are concealed behind architectural
finishes. The damage state is used with construction cost-estimation principles to estimate
probabilistic repair cost. It is also used as input to a quantified, fuzzy-set version of the FEMA-356
performance-level descriptions to estimate probabilistic safety and operability levels.
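As an illustration of the fragility-function step of this methodology, the sketch below maps an ensemble of peak interstory drifts (standing in for the particle filter's output) to damage-state probabilities and an expected repair cost. The lognormal fragility parameters and unit repair costs are hypothetical placeholders, not values from the report.

```python
# Hedged sketch of the fragility step: map a particle ensemble of structural
# response (peak interstory drift per particle) to damage-state probabilities
# and an expected repair cost.  Fragility medians/dispersions and repair costs
# below are illustrative placeholders only.
import numpy as np
from scipy.stats import lognorm

# Hypothetical fragility curves for one component: P(damage >= ds | drift) is
# a lognormal CDF with median theta (drift ratio) and dispersion beta.
FRAGILITY = {
    "slight":   (0.004, 0.4),
    "moderate": (0.010, 0.4),
    "severe":   (0.020, 0.4),
}
REPAIR_COST = {"none": 0.0, "slight": 5e3, "moderate": 25e3, "severe": 120e3}

def damage_probabilities(drift_particles):
    """Probability of each mutually exclusive damage state, averaged over
    the particle ensemble."""
    p_exceed = {ds: lognorm(s=beta, scale=theta).cdf(drift_particles).mean()
                for ds, (theta, beta) in FRAGILITY.items()}
    return {
        "none":     1.0 - p_exceed["slight"],
        "slight":   p_exceed["slight"] - p_exceed["moderate"],
        "moderate": p_exceed["moderate"] - p_exceed["severe"],
        "severe":   p_exceed["severe"],
    }

# Toy usage: drifts as they might come out of the state estimator.
drifts = np.random.default_rng(0).lognormal(np.log(0.008), 0.3, size=1000)
p_ds = damage_probabilities(drifts)
expected_cost = sum(p_ds[ds] * REPAIR_COST[ds] for ds in p_ds)
print(p_ds, expected_cost)
```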
CUREE demonstration building. The procedure for estimating damage locations, repair costs, and
post-earthquake safety and operability is illustrated in parallel demonstrations by CUREE and
Kajima research teams. The CUREE demonstration is performed using a real 1960s-era, 7-story, nonductile
reinforced-concrete moment-frame building located in Van Nuys, California. The building is
instrumented with 16 channels at five levels: ground level, floors 2, 3, 6, and the roof. We used the
records obtained after the 1994 Northridge earthquake to hindcast performance in that earthquake.
The building is analyzed in its condition prior to the 1994 Northridge Earthquake. It is found that,
while hindcasting of the overall system performance level was excellent, prediction of detailed damage
locations was poor, implying that either actual conditions differed substantially from those shown on
the structural drawings, or inappropriate fragility functions were employed, or both. We also found
that Bayesian updating of the structural model using observed structural response above the base of
the building adds little information to the performance prediction. The reason is probably that
structural uncertainties have only a secondary effect on performance uncertainty, compared with the
uncertainty in assembly damageability as quantified by their fragility functions. The implication is
that real-time loss estimation is not sensitive to structural uncertainties (saving costly multiple
simulations of structural response), and that real-time loss estimation does not benefit significantly
from installing measuring instruments other than those at the base of the building.
Kajima demonstration building. The Kajima demonstration is performed using a real 1960s-era
office building in Kobe, Japan. The building, a 7-story reinforced-concrete shearwall building, was not
instrumented in the 1995 Kobe earthquake, so instrument recordings are simulated. The building is
analyzed in its condition prior to the earthquake. It is found that, while hindcasting of the overall
repair cost was excellent, prediction of detailed damage locations was poor, again implying either that
as-built conditions differ substantially from those shown on structural drawings, or that
inappropriate fragility functions were used, or both. We found that the parameters of the detailed
particle filter needed significant tuning, which would be impractical in actual application. Work is
needed to prescribe values of these parameters in general.
Opportunities for implementation and further research. Because much of the cost of applying
this RTLE algorithm results from the cost of instrumentation and the effort of setting up a structural
model, the readiest application would be to instrumented buildings whose structural models are
already available, and to apply the methodology to important facilities. It would be useful to study
under what conditions RTLE would be economically justified. Two other interesting possibilities for
further study are (1) to update performance using readily observable damage; and (2) to quantify the
value of information for expensive inspections, e.g., if one inspects a connection with a modeled 50%
failure probability and finds that the connection is undamaged, is it necessary to examine one with a 10%
failure probability?
Quantum dot occupation and electron dwell time in the cotunneling regime
We present comparative measurements of the charge occupation and conductance
of a GaAs/AlGaAs quantum dot. The dot charge is measured with a capacitively
coupled quantum point contact sensor. In the single-level Coulomb blockade
regime near equilibrium, charge and conductance signals are found to be
proportional to each other. We conclude that in this regime, the two signals
give equivalent information about the quantum dot system. Out of equilibrium,
we study the inelastic-cotunneling regime. We compare the measured differential
dot charge with an estimate assuming a dwell time of transmitted carriers on
the dot given by h/E, where E is the blockade energy of first-order tunneling.
The measured signal is of a similar magnitude as the estimate, compatible with
a picture of cotunneling as transmission through a virtual intermediate state
with a short lifetime.
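For orientation, the dwell-time estimate h/E works out to a few picoseconds for typical blockade energies; the short calculation below uses an illustrative E of 0.5 meV, not a value from the measurement.

```python
# Illustrative calculation of the cotunneling dwell-time estimate tau = h / E;
# the blockade energy E chosen here is a placeholder, not a measured value.
PLANCK_H_EV_S = 4.135667696e-15      # Planck constant in eV*s (CODATA)

def dwell_time_seconds(blockade_energy_ev):
    """Heisenberg-type dwell-time estimate for the virtual intermediate state."""
    return PLANCK_H_EV_S / blockade_energy_ev

E = 0.5e-3                            # 0.5 meV, an illustrative blockade energy
print(f"tau = {dwell_time_seconds(E):.2e} s")   # ~8.3e-12 s, a few picoseconds
```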
A probabilistic approach to emission-line galaxy classification
We invoke a Gaussian mixture model (GMM) to jointly analyse two traditional
emission-line classification schemes of galaxy ionization sources: the
Baldwin-Phillips-Terlevich (BPT) and W_Hα vs. [NII]/Hα
(WHAN) diagrams, using spectroscopic data from the Sloan Digital Sky Survey
Data Release 7 and SEAGal/STARLIGHT datasets. We apply a GMM to empirically
define classes of galaxies in a three-dimensional space spanned by the
[OIII]/Hβ, [NII]/Hα, and EW(Hα) optical
parameters. The best-fit GMM based on several statistical criteria suggests a
solution around four Gaussian components (GCs), which are capable of explaining up
to 97 per cent of the data variance. Using elements of information theory, we
compare each GC to its respective astronomical counterpart. GC1 and GC4 are
associated with star-forming galaxies, suggesting the need to define a new
starburst subgroup. GC2 is associated with BPT's Active Galaxy Nuclei (AGN)
class and WHAN's weak AGN class. GC3 is associated with BPT's composite class
and WHAN's strong AGN class. Conversely, there is no statistical evidence --
based on four GCs -- for the existence of a Seyfert/LINER dichotomy in our
sample. Nevertheless, the inclusion of an additional GC5 unravels it. GC5
appears to be associated with LINER and passive galaxies in the BPT and WHAN
diagrams, respectively. Subtleties aside, we demonstrate the potential of our
methodology to recover/unravel different classes of objects inside the wilderness of
astronomical datasets, without losing the ability to convey physically
interpretable results. The probabilistic classifications from the GMM analysis
are publicly available within the COINtoolbox
(https://cointoolbox.github.io/GMM_Catalogue/).
Comment: Accepted for publication in MNRAS
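A minimal sketch of the component-selection step is given below, assuming a scikit-learn Gaussian mixture fit with the number of components chosen by BIC over a small range; the placeholder array X stands in for the three optical parameters measured for each galaxy, and the paper's full set of statistical criteria is not reproduced.

```python
# Hedged sketch: fit GMMs with 1..6 components in the three-dimensional space
# of ([OIII]/Hβ, [NII]/Hα, EW(Hα)) measurements and pick the component count
# by BIC.  X is random placeholder data, not the SDSS DR7 / SEAGal sample.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 3))        # stand-in for the real 3-D line data

fits = {}
for k in range(1, 7):
    fits[k] = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=5, random_state=0).fit(X)

best_k = min(fits, key=lambda k: fits[k].bic(X))   # lowest BIC wins
best = fits[best_k]
# Soft (probabilistic) class memberships for each galaxy, as in the catalogue.
membership = best.predict_proba(X)    # shape: (n_galaxies, best_k)
print(best_k, membership.shape)
```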
LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems
Mutation testing is a well-studied method for increasing the quality of a
test suite. We designed LittleDarwin as a mutation testing framework able to
cope with large and complex Java software systems, while still being easily
extensible with new experimental components. LittleDarwin addresses two
existing problems in the domain of mutation testing: having a tool that is able to work
within an industrial setting and yet remains open to extension with cutting-edge
techniques from academia. LittleDarwin already offers higher-order
mutation, null type mutants, mutant sampling, manual mutation, and mutant
subsumption analysis. There is no other tool available today that offers all these features
and is able to work with typical industrial software systems.
Comment: Pre-proceedings of the 7th IPM International Conference on Fundamentals of Software Engineering
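To illustrate what first-order versus higher-order mutation means, the sketch below flips arithmetic operators in a toy function. LittleDarwin itself targets Java systems; this Python AST example is only a language-agnostic illustration of the concept, not the tool's implementation.

```python
# Conceptual sketch of first- vs higher-order mutation on a toy function.
import ast

SOURCE = "def price(total, discount):\n    return total - discount + 1\n"

class ArithmeticMutator(ast.NodeTransformer):
    """Swap '+' <-> '-'; the swap budget controls the mutant's order."""
    def __init__(self, max_swaps):
        self.budget = max_swaps

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if self.budget > 0 and isinstance(node.op, (ast.Add, ast.Sub)):
            node.op = ast.Sub() if isinstance(node.op, ast.Add) else ast.Add()
            self.budget -= 1
        return node

def make_mutant(source, order):
    tree = ArithmeticMutator(order).visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(make_mutant(SOURCE, order=1))   # first-order mutant: one operator flipped
print(make_mutant(SOURCE, order=2))   # higher-order mutant: two flips combined
```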