
    Evaluation of aerothermal modeling computer programs

    Various computer programs based upon the SIMPLE or SIMPLER algorithm were studied and compared for numerical accuracy, efficiency, and grid dependency. Four two-dimensional codes and one three-dimensional code, originally developed by a number of research groups, were considered. In general, the accuracy and computational efficiency of these TEACH-type programs were improved by modifying the differencing schemes and their solvers. A brief description of each program is given. Error-reduction, spline-flux, and second upwind differencing programs are covered.
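
    The "second upwind" differencing mentioned above can be illustrated with a short sketch. This is not taken from the report; the periodic grid, positive convecting velocity, and test profile are assumptions made for the illustration. For a positive velocity, the scheme biases the spatial derivative toward the two upstream nodes, which makes it second-order accurate, unlike the first-order upwind scheme.

```python
import numpy as np

# Second-order upwind-biased derivative for a positive convecting velocity:
#     u_x at node i  ≈  (3*u[i] - 4*u[i-1] + u[i-2]) / (2*dx)
# The two upstream neighbors i-1 and i-2 are used, giving O(dx^2) accuracy.
# Periodic boundaries (np.roll) keep the sketch self-contained.

def second_upwind_derivative(u, dx):
    """Second upwind approximation of du/dx on a periodic grid."""
    return (3.0 * u - 4.0 * np.roll(u, 1) + np.roll(u, 2)) / (2.0 * dx)

# Accuracy check against a smooth profile: d/dx sin(x) = cos(x).
n = 400
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(second_upwind_derivative(np.sin(x), dx) - np.cos(x)))
```

    The maximum error here scales as dx**2 (the leading truncation term is -(dx**2 / 3) * u'''), which is the accuracy gain these schemes offered over first-order upwind differencing.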

    Network Inference via the Time-Varying Graphical Lasso

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.
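
    As a loose illustration of the underlying idea, and emphatically not the paper's penalized maximum-likelihood objective or its ADMM solver, the sketch below estimates one sparse precision (inverse covariance) matrix per time window by inverting a regularized empirical covariance and soft-thresholding small off-diagonal entries. The window length, ridge regularizer, and threshold are all illustrative choices, not values from the paper.

```python
import numpy as np

# Crude stand-in for time-varying sparse inverse covariance estimation:
# per-window empirical precision with soft-thresholded off-diagonals.
# (TVGL instead solves a joint penalized MLE with a temporal-consistency
# penalty via ADMM; none of that machinery appears here.)

def windowed_sparse_precision(X, window, thresh=0.1):
    """X: (T, n) multivariate time series.
    Returns a list of (n, n) thresholded precision matrices,
    one per non-overlapping window of length `window`."""
    precisions = []
    for start in range(0, X.shape[0] - window + 1, window):
        # ridge term keeps the empirical covariance invertible
        S = np.cov(X[start:start + window].T) + 1e-3 * np.eye(X.shape[1])
        theta = np.linalg.inv(S)
        off = theta - np.diag(np.diag(theta))
        # soft-threshold off-diagonal entries to induce sparsity
        off = np.sign(off) * np.maximum(np.abs(off) - thresh, 0.0)
        precisions.append(np.diag(np.diag(theta)) + off)
    return precisions

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
thetas = windowed_sparse_precision(X, window=50)
```

    Each matrix in `thetas` can be read as one snapshot of the dynamic network: a nonzero entry (i, j) indicates a conditional dependency between entities i and j during that window.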

    Scientific basis for safely shutting in the Macondo Well after the April 20, 2010 Deepwater Horizon blowout

    As part of the government response to the Deepwater Horizon blowout, a Well Integrity Team evaluated the geologic hazards of shutting in the Macondo Well at the seafloor and determined the conditions under which it could safely be undertaken. Of particular concern was the possibility that, under the anticipated high shut-in pressures, oil could leak out of the well casing below the seafloor. Such a leak could lead to new geologic pathways for hydrocarbon release to the Gulf of Mexico. Evaluating this hazard required analyses of 2D and 3D seismic surveys, seafloor bathymetry, sediment properties, geophysical well logs, and drilling data to assess the geological, hydrological, and geomechanical conditions around the Macondo Well. After the well was successfully capped and shut in on July 15, 2010, a variety of monitoring activities were used to assess subsurface well integrity. These activities included acquisition of wellhead pressure data, marine multichannel seismic profiles, seafloor and water-column sonar surveys, and wellhead visual/acoustic monitoring. These data showed that the Macondo Well was not leaking after shut in, and therefore, it could remain safely shut until reservoir pressures were suppressed (killed) with heavy drilling mud and the well was sealed with cement.

    Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data

    Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (e.g., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. Comment: This revised version fixes two small typos in the published version.
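
    The subsequence-clustering setup can be sketched loosely as follows. This is not TICC itself: the Toeplitz-constrained MRFs, sparsity penalty, and dynamic-programming segmentation are all omitted. Each length-w subsequence is flattened into a vector and clustered with plain k-means; TICC replaces this Euclidean assignment step with per-cluster Gaussian log-likelihoods under the learned inverse covariances. The window length, cluster count, and synthetic two-regime data are illustrative assumptions.

```python
import numpy as np

# Toy subsequence clustering: flatten sliding windows, run k-means.
# (A loose stand-in for TICC's alternating assign/re-estimate loop.)

def subsequence_kmeans(X, w, k, iters=20, seed=0):
    """X: (T, n) time series. Returns a cluster label for each of the
    T - w + 1 overlapping length-w subsequences."""
    T, n = X.shape
    windows = np.stack([X[t:t + w].ravel() for t in range(T - w + 1)])
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center in Euclidean distance
        d = np.linalg.norm(windows[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # update step: recompute each non-empty cluster's mean
        for j in range(k):
            if np.any(labels == j):
                centers[j] = windows[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# two well-separated regimes, analogous to "sitting" then "running"
X = np.concatenate([rng.normal(0.0, 0.1, (100, 3)),
                    rng.normal(5.0, 0.1, (100, 3))])
labels = subsequence_kmeans(X, w=5, k=2)
```

    On this toy input the labels form a timeline that switches cluster once, at the regime boundary, which is the kind of interpretable state sequence the abstract describes.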

    The activation energy for GaAs/AlGaAs interdiffusion

    Copyright 1997 American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics. This article appeared in Journal of Applied Physics 82, 4842 (1997) and may be found at

    Polarimetry and photometry of the peculiar main-belt object 7968 = 133P/Elst-Pizarro

    133P/Elst-Pizarro is an object that has been described as either an active asteroid or a cometary object in the main asteroid belt. Here we present a photometric and polarimetric study of this object in an attempt to infer additional information about its origin. With the FORS1 instrument of the ESO VLT, during the 2007 apparition of 133P/Elst-Pizarro we performed quasi-simultaneous photometry and polarimetry of its nucleus at nine epochs in the phase angle range 0-20 deg. For each observing epoch, we also combined all available frames to obtain a deep image of the object, to seek signatures of weak cometary activity. Polarimetric data were analysed by means of a novel physical interference model. The object brightness was found to be highly variable over timescales <1 h, a result fully consistent with previous studies. Using the albedo-polarization relationships for asteroids and our photometric results, we found for our target an albedo of about 0.06-0.07 and a mean radius of about 1.6 km. Throughout the observing epochs, our deep imaging of the comet detected a tail and an anti-tail. Their temporal variations are consistent with an activity profile starting around mid May 2007 with a minimum duration of four months. Our images show marginal evidence of a coma around the nucleus. The overall light scattering behaviour (photometry and polarimetry) resembles most closely that of F-type asteroids. Comment: Accepted by Astronomy and Astrophysics.

    The quantum dynamic capacity formula of a quantum channel

    The dynamic capacity theorem characterizes the reliable communication rates of a quantum channel when combined with the noiseless resources of classical communication, quantum communication, and entanglement. In prior work, we proved the converse part of this theorem by making contact with many previous results in the quantum Shannon theory literature. In this work, we prove the theorem with an "ab initio" approach, using only the most basic tools in the quantum information theorist's toolkit: the Alicki-Fannes inequality, the chain rule for quantum mutual information, elementary properties of quantum entropy, and the quantum data processing inequality. The result is a simplified proof of the theorem that should be more accessible to those unfamiliar with the quantum Shannon theory literature. We also demonstrate that the "quantum dynamic capacity formula" characterizes the Pareto optimal trade-off surface for the full dynamic capacity region. Additivity of this formula simplifies the computation of the trade-off surface, and we prove that its additivity holds for the quantum Hadamard channels and the quantum erasure channel. We then determine exact expressions for and plot the dynamic capacity region of the quantum dephasing channel, an example from the Hadamard class, and the quantum erasure channel. Comment: 24 pages, 3 figures; v2 has improved structure and minor corrections; v3 has a correction regarding the optimization.

    Evaporation and combustion of LOX under supercritical and subcritical conditions

    The objective is to study the evaporation and combustion of LOX under supercritical and subcritical conditions both experimentally and theoretically. In the evaporation studies, evaporation rate and surface temperature were measured while LOX vaporized in helium environments at pressures ranging from 5 to 68 atm. A Varian 3700 gas chromatograph was employed to measure the oxygen concentration above the LOX surface. For the combustion tests, high-magnification video photography was used to record direct images of the flame shape of a LOX/H2/He laminar diffusion flame. The gas composition in the post-flame region is also being measured with gas sampling and chromatographic analysis. These data are being used to validate the theoretical model. A comprehensive theoretical model accounting for the solubility of ambient gases as well as variable thermophysical properties was formulated and solved numerically to study the gasification and burning of LOX at elevated pressures. The calculated flame shape agreed reasonably well with the edge of the observed luminous flame surface. The effect of gravity on the flame structure of laminar diffusion flames was found to be significant. In addition, the predicted results using the flame-sheet model were compared with those based upon full equilibrium calculations (which considered the formation of intermediate species) at supercritical pressures. Except at the flame front, where temperature exceeded 2,800 K, the flame-sheet and equilibrium solutions were in very close agreement in terms of temperature distributions. The temperature deviation in the neighborhood of the flame front is caused by the effect of high-temperature dissociation.