Modeling the X-ray - UV Correlations in NGC 7469
We model the correlated X-ray - UV observations of NGC 7469, for which well-sampled data in both bands have recently been obtained in a
multiwavelength monitoring campaign. To this end we derive the transfer
function in wavelength λ and time lag τ, for reprocessing hard (X-ray)
photons from a point source to softer ones (UV-optical) by an infinite plane
(representing a cool, thin accretion disk) located at a given distance below
the X-ray source, under the assumption that the X-ray flux is absorbed and
emitted locally by the disk as a black body of temperature appropriate to the
incident flux. Using the observed X-ray light curve as input we have computed
the expected continuum UV emission as a function of time at several wavelengths
(λ1315 Å, λ6962 Å, λ15000 Å, λ30000 Å), assuming that the X-ray source is located one Schwarzschild radius above the disk plane, with the
mass of the black hole and the latitude angle of the observer
relative to the disk plane as free parameters. We have searched the parameter space of black hole masses and observer latitude angles, but we were unable to
reproduce UV light curves which would resemble, even remotely, those observed.
We also explored whether particular combinations of the values of these
parameters could lead to light curves whose statistical properties (i.e., the autocorrelation and cross-correlation functions) would match those
corresponding to the observed UV light curve at λ1315 Å. Even though we
considered black hole masses as large as M no such match was
possible. Our results indicate that some of the fundamental assumptions of this
model will have to be modified to obtain even approximate agreement between the
observed and model X-ray - UV light curves.
Comment: 16 pages, 13 figures, ApJ in press
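To make the reprocessing geometry concrete, here is a minimal sketch (not the authors' code) that computes a reprocessed light curve from an input X-ray light curve, under simplifying assumptions that go beyond the abstract: a face-on observer, no relativistic effects, and an illustrative radial grid. All names and values are hypothetical.

```python
import numpy as np

# Physical constants (cgs units)
C = 2.998e10        # speed of light [cm/s]
SIGMA_SB = 5.67e-5  # Stefan-Boltzmann constant
H_PL = 6.626e-27    # Planck constant
K_B = 1.381e-16     # Boltzmann constant

def planck_lambda(lam, T):
    """Blackbody intensity B_lambda(T) at wavelength lam [cm]."""
    x = H_PL * C / (lam * K_B * np.maximum(T, 1.0))
    return 2 * H_PL * C**2 / lam**5 / np.expm1(x)

def reprocessed_lightcurve(t, L_x, lam, h_x, r_grid):
    """UV light curve from X-rays reprocessed by a flat disk.

    t, L_x : input X-ray light curve [s, erg/s]; lam : wavelength [cm];
    h_x : source height above the disk [cm]; r_grid : annulus radii [cm].
    Face-on observer assumed; relativistic effects ignored.
    """
    dr = np.gradient(r_grid)
    L_uv = np.zeros_like(t)
    for r, drr in zip(r_grid, dr):
        d = np.hypot(r, h_x)                    # source-to-annulus distance
        lag = (d + h_x) / C                     # extra light-travel time, face-on
        F_inc = L_x * h_x / (4 * np.pi * d**3)  # incident flux (cosine factor h_x/d)
        T = (F_inc / SIGMA_SB) ** 0.25          # local blackbody temperature
        # The annulus re-emits the flux it received one lag earlier
        b_lagged = np.interp(t - lag, t, planck_lambda(lam, T), left=0.0)
        L_uv += 2 * np.pi * r * drr * np.pi * b_lagged
    return L_uv
```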
A portfolio approach to massively parallel Bayesian optimization
One way to reduce the time of conducting optimization studies is to evaluate
designs in parallel rather than just one-at-a-time. For expensive-to-evaluate
black-boxes, batch versions of Bayesian optimization have been proposed. They
work by building a surrogate model of the black-box that can be used to select
the designs to evaluate efficiently via an infill criterion. Still, with higher
levels of parallelization becoming available, the strategies that work for a
few tens of parallel evaluations become limiting, in particular due to the
complexity of selecting more evaluations. The issue is even more acute when the black-box is noisy, since noise calls for more evaluations as well as repeated experiments. Here we propose a scalable strategy that keeps up with massive batching natively, built on the exploration/exploitation trade-off and a
portfolio allocation. We compare the approach with related methods on
deterministic and noisy functions, for single- and multi-objective optimization tasks. These experiments show performance similar to or better than that of existing methods, while being orders of magnitude faster.
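A minimal sketch of the batch-selection idea, assuming a GP surrogate and treating the exploration/exploitation trade-off as a Pareto front over predicted mean and predicted uncertainty. The uniform allocation over the front is a stand-in for the paper's portfolio strategy, and all function names are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def pareto_front(mu, sd):
    """Indices of candidates not dominated on (low predicted mean,
    high predicted uncertainty) -- the exploration/exploitation front."""
    keep = []
    for i in range(len(mu)):
        if not np.any((mu < mu[i]) & (sd > sd[i])):
            keep.append(i)
    return np.array(keep)

def select_batch(X_obs, y_obs, candidates, batch_size, rng):
    """Pick a large batch by spreading it across the Pareto front of a
    GP surrogate (uniform spread; a placeholder for a portfolio rule)."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, sd = gp.predict(candidates, return_std=True)
    front = pareto_front(mu, sd)
    picks = rng.choice(front, size=batch_size, replace=len(front) < batch_size)
    return candidates[picks]

# Illustrative use on a toy minimization problem
rng = np.random.default_rng(0)
X_obs = rng.uniform(size=(20, 3))
y_obs = np.sum((X_obs - 0.5) ** 2, axis=1)
batch = select_batch(X_obs, y_obs, rng.uniform(size=(2000, 3)), 100, rng)
```

Selecting a batch this way costs one GP fit plus one front computation regardless of batch size, which is what makes large batches tractable compared with sequential infill criteria.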
Strategies for the adaptive reuse of large-scale buildings
Thesis (M.Arch.)--Massachusetts Institute of Technology, Dept. of Architecture; and (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2006. Includes bibliographical references.

The practice of adaptive reuse has grown in popularity in the United States over the past few decades; about 90% of architect-commissioned work now involves some interaction with an existing structure. While the practice of reuse has long existed informally, in the form of garage-as-guest-house or barn-as-garage conversions and the like, it is only since the late 1960s that architects and engineers have begun to approach it critically, as a design problem. Adaptive reuse is often lauded for fostering the development of a sustainable built environment; however, it has its own unique challenges. This thesis traces a brief history of the designer's role in the sustainable development discourse, with focused attention paid to the adaptive reuse solution. It then attempts to identify those challenges and discuss how each pertains to the architect, the preservationist, and the engineer. Through the examination of reuse case studies, a coarse classification of project typologies is developed.

The second portion of the thesis tackles a specific reuse problem: the Old Post Office in Chicago, Illinois. The Post Office was selected because of its heavily planned context, its historical and cultural significance, the real interest that has been expressed in its reuse, and its size. The thesis builds on the earlier classification system to propose an integrated strategy with which to approach the redevelopment of the building. The final part of the thesis briefly describes a few environmental evaluation methods that might be used to judge the sustainability of the reuse project. The proposed solution is analysed to see whether the design decisions made with environmental sustainability in mind can be quantified.

by Dana Ozik. S.M.; M.Arch.
Evolution of Discrete Dynamical Systems
We investigate the evolution of three different types of discrete dynamical systems. In each case simple local rules are shown to yield interesting collective global behavior.
(a) We introduce a mechanism for the evolution of growing small-world networks. We demonstrate that purely local connection rules, when coupled with network growth, can result in short path lengths for the network as a whole (a sketch of one such rule follows this list).
(b) We consider the general character of the spatial distributions of populations that grow through reproduction and subsequent local resettlement of new population members. Several simple one- and two-dimensional point-placement models are presented to illustrate possible generic behavior of these distributions. We show, both numerically and analytically, that all of the models lead to multifractal spatial distributions of population.
(c) We present a discrete lattice model to investigate the segregation of three-species granular mixtures in horizontally rotating cylinders. We demonstrate that the simple local rules of the model are able to reproduce many of the experimentally observed global phenomena.
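As referenced in (a), here is a minimal sketch of how purely local attachment during growth can yield small-world behavior. The specific rule below (each new node attaches to a random anchor and to one of the anchor's neighbours) is an illustrative assumption, not necessarily the mechanism studied.

```python
import random
import networkx as nx

def grow_local_network(n_final, seed=None):
    """Grow a network where each new node attaches to an existing anchor
    and to one of the anchor's current neighbours -- connections stay
    local to the anchor's neighbourhood."""
    rng = random.Random(seed)
    g = nx.complete_graph(3)              # small seed clique
    for new in range(3, n_final):
        anchor = rng.randrange(new)       # any existing node
        nbr = rng.choice(list(g.neighbors(anchor)))  # local to the anchor
        g.add_edge(new, anchor)
        g.add_edge(new, nbr)
    return g

g = grow_local_network(2000, seed=1)
print(nx.average_shortest_path_length(g))  # short paths for the network as a whole
print(nx.average_clustering(g))            # high clustering from closed triangles
```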
Characterization and valuation of uncertainty of calibrated parameters in stochastic decision models
We evaluated the implications of different approaches to characterizing the uncertainty of calibrated parameters of stochastic decision models (DMs) for the quantified value of such uncertainty in decision making. We used a
microsimulation DM of colorectal cancer (CRC) screening to conduct a
cost-effectiveness analysis (CEA) of a 10-year colonoscopy screening. We
calibrated the natural history model of CRC to epidemiological data with
different degrees of uncertainty and obtained the joint posterior distribution
of the parameters using a Bayesian approach. We conducted a probabilistic
sensitivity analysis (PSA) on all the model parameters with different
characterizations of uncertainty of the calibrated parameters and estimated the
value of uncertainty of the different characterizations with a value of
information analysis. All analyses were conducted using high performance
computing resources running the Extreme-scale Model Exploration with Swift
(EMEWS) framework. The posterior distribution had high correlation among some
parameters. The parameters of the Weibull hazard function for the age of onset
of adenomas had the strongest posterior correlation, -0.958. Considering full
posterior distributions and the maximum-a-posteriori estimate of the calibrated
parameters, there was little difference in the spread of the distribution of CEA outcomes, with similar expected values of perfect information (EVPI) of $653 and $685, respectively, at a willingness-to-pay (WTP) threshold of $66,000/QALY. Ignoring correlation in the posterior distribution of the calibrated parameters produced the widest distribution of CEA outcomes and the highest EVPI, $809, at the same WTP.
Different characterizations of the uncertainty of calibrated parameters have implications for the expected value of reducing uncertainty in the CEA; ignoring the inherent correlation among calibrated parameters in a PSA overestimates the value of uncertainty.
Comment: 17 pages, 6 figures, 3 tables
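The EVPI figures quoted above follow the standard value-of-information definition: the expected value of choosing the best strategy per parameter draw, minus the value of the strategy that is best on average. A minimal sketch with made-up numbers, not the paper's model or results:

```python
import numpy as np

def evpi(nb):
    """Expected value of perfect information from PSA output.

    nb : array (n_samples, n_strategies) of net monetary benefit,
    NB = WTP * QALYs - costs, for each sampled parameter set.
    """
    value_perfect = nb.max(axis=1).mean()  # best strategy chosen per sample
    value_current = nb.mean(axis=0).max()  # strategy best on average
    return value_perfect - value_current

# Illustrative use with fabricated toy numbers
rng = np.random.default_rng(0)
nb = rng.normal(loc=[0.0, 100.0], scale=2000.0, size=(10_000, 2))
print(f"EVPI = ${evpi(nb):,.0f}")
```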
Trajectory-oriented optimization of stochastic epidemiological models
Epidemiological models must be calibrated to ground truth for downstream
tasks such as producing forward projections or running what-if scenarios. The meaning of calibration changes in the case of a stochastic model, since the output of such a model is generally described by an ensemble or a distribution. Each
member of the ensemble is usually mapped to a random number seed (explicitly or
implicitly). With the goal of finding not only the input parameter settings but
also the random seeds that are consistent with the ground truth, we propose a
class of Gaussian process (GP) surrogates along with an optimization strategy
based on Thompson sampling. This Trajectory Oriented Optimization (TOO)
approach produces actual trajectories close to the empirical observations
instead of a set of parameter settings where only the mean simulation behavior matches the ground truth.
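A minimal sketch of the Thompson-sampling step described above, assuming an off-the-shelf GP over (parameter, seed) encodings rather than the paper's specialized surrogate class; names and encodings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def thompson_step(X_obs, err_obs, candidates, seed=0):
    """One Thompson-sampling step toward trajectories that match the data.

    X_obs : encodings of (model parameters, random seed) pairs already run;
    err_obs : each run's trajectory discrepancy from the ground truth
    (e.g. an L2 distance). Fit a GP to the discrepancy surface, draw one
    posterior sample over the candidates, and return its minimizer to run.
    """
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, err_obs)
    sample = gp.sample_y(candidates, n_samples=1, random_state=seed).ravel()
    return candidates[np.argmin(sample)]
```

Because the seed is part of the input encoding, the argmin can land on a specific (parameters, seed) pair, i.e. an actual trajectory, rather than on a setting whose mean behavior fits.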
A deterministic small-world network created by edge iterations
Small-world networks are ubiquitous in real-life systems. Most previous
models of small-world networks are stochastic. The randomness makes it more difficult to gain a visual understanding of how different nodes of a network interact with each other, and it is not appropriate for communication networks that
have fixed interconnections. Here we present a model that generates a
small-world network in a simple deterministic way. Our model has a discrete
exponential degree distribution. We derive the main characteristics of the model analytically.
Comment: 9 pages, 1 figure. To appear in Physica
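One plausible reading of "created by edge iterations" (an assumption, since the abstract does not spell out the rule): at every step, each edge created in the previous step gains a new node joined to both of its endpoints, which deterministically yields a discrete exponential degree distribution. A sketch:

```python
import networkx as nx

def edge_iteration_network(steps):
    """Deterministic small-world graph: at each iteration, every edge
    created in the previous iteration spawns a new node joined to both
    of its endpoints."""
    g = nx.Graph([(0, 1)])
    new_edges = [(0, 1)]
    next_id = 2
    for _ in range(steps):
        created = []
        for u, v in new_edges:
            g.add_edge(u, next_id)
            g.add_edge(v, next_id)
            created += [(u, next_id), (v, next_id)]
            next_id += 1
        new_edges = created
    return g

g = edge_iteration_network(8)
print(g.number_of_nodes(), nx.average_clustering(g), nx.diameter(g))
```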
Farey Graphs as Models for Complex Networks
Farey sequences of irreducible fractions between 0 and 1 can be related to
graph constructions known as Farey graphs. These graphs were first introduced
by Matula and Kornerup in 1979 and further studied by Colbourn in 1982. They have many interesting properties: they are minimally 3-colorable, uniquely
Hamiltonian, maximally outerplanar and perfect. In this paper we introduce a
simple generation method for a Farey graph family, and we analytically study its relevant topological properties: order, size, degree distribution and
correlation, clustering, transitivity, diameter and average distance. We show
that the graphs are a good model for networks associated with some complex
systems.
Comment: Definitive version published in Theoretical Computer Science
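A sketch of one standard way to realize a Farey graph: take the irreducible fractions of the Farey sequence F_n as vertices and connect unimodular (Farey-neighbour) pairs. The paper's own iterative generation method may differ in detail.

```python
from math import gcd
import networkx as nx

def farey_graph(n):
    """Graph on the Farey sequence F_n: vertices are irreducible
    fractions p/q in [0, 1] with q <= n; fractions a/b and c/d are
    adjacent iff |a*d - b*c| = 1 (they are Farey neighbours)."""
    verts = [(p, q) for q in range(1, n + 1)
             for p in range(q + 1) if gcd(p, q) == 1]
    g = nx.Graph()
    g.add_nodes_from(verts)
    for i, (a, b) in enumerate(verts):
        for c, d in verts[i + 1:]:
            if abs(a * d - b * c) == 1:
                g.add_edge((a, b), (c, d))
    return g

g = farey_graph(6)
print(g.number_of_nodes(), g.number_of_edges(), nx.diameter(g))
```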