Comparison Of Approximation-assisted Component Modeling Methods For Steady State Vapor Compression System Simulation
An accurate, fast, and robust heat exchanger model is critical for reliable steady-state simulation of vapor compression systems. In such simulations, the heat exchanger models are often the most time-consuming components and can be plagued by severe non-linearities, especially if they are black-box or third-party provided. This paper investigates and compares different approaches for approximating heat exchanger performance, with the distributed-parameter approach as the baseline. The methods are: Kriging with a Gaussian kernel, a multi-zone approach, and polynomial regression. Generally, distributed-parameter models have the highest accuracy but can be time-consuming. Kriging metamodels have relatively low computational cost but little underlying physics. Multi-zone models have the lowest computational cost due to their lumped treatment of heat transfer and pressure drop; however, they also tend to be the least accurate. To better understand the potential and limitations of these heat exchanger modeling methods, the pressure drop and capacity of the same heat exchangers predicted by the three approximation methods are compared against the baseline approach under the same operating conditions. The comparison between the Kriging metamodel and the distributed-parameter model shows that 95.2% of 10,000 test points have a capacity deviation of less than 20%, and 93.9% have a pressure drop deviation of less than 10%. Large capacity deviations occur at operating conditions with low inlet pressures, while large pressure drop deviations occur at those with high inlet pressures. The multi-zone model shows relatively larger deviations in both pressure drop and capacity when compared with the distributed-parameter model, so regression-based techniques are applied to further improve its accuracy. The heat exchanger modeling approaches are incorporated into a vapor compression cycle model. Lastly, some ideas on how such an approach can be used to approximate a set of component models, not just heat exchangers, are discussed.
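The Kriging surrogate idea lends itself to a compact illustration. Below is a minimal sketch using scikit-learn's Gaussian-process regressor with a Gaussian (RBF) kernel; the placeholder heat exchanger model, its two inputs (inlet pressure and mass flow rate), and the sampling ranges are hypothetical stand-ins for the paper's distributed-parameter model, not the authors' actual code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def slow_hx_model(x):
    """Placeholder for an expensive distributed-parameter HX model.
    Returns capacity as a function of inlet pressure [kPa] and mass flow [kg/s]."""
    p_in, mdot = x[:, 0], x[:, 1]
    return 3.5 * mdot * np.log1p(p_in)  # made-up physics, for illustration only

rng = np.random.default_rng(0)
lo, hi = [200.0, 0.01], [1200.0, 0.10]
X_train = rng.uniform(lo, hi, size=(200, 2))
y_train = slow_hx_model(X_train)

# Kriging metamodel with a Gaussian (RBF) kernel: cheap to evaluate once trained.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[100.0, 0.02]),
    normalize_y=True,
)
gp.fit(X_train, y_train)

# Compare surrogate predictions against the "baseline" model on 10,000 points.
X_test = rng.uniform(lo, hi, size=(10_000, 2))
dev = np.abs(gp.predict(X_test) - slow_hx_model(X_test)) / slow_hx_model(X_test)
print(f"{np.mean(dev < 0.20):.1%} of test points within 20% capacity deviation")
```

Once trained, the surrogate replaces the expensive model inside the cycle solver; its accuracy is then judged exactly as above, by deviation statistics against the baseline over the operating envelope.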
Fast Damage Recovery in Robotics with the T-Resilience Algorithm
Damage recovery is critical for autonomous robots that need to operate for a
long time without assistance. Most current methods are complex and costly
because they require anticipating each potential damage in order to have a
contingency plan ready. As an alternative, we introduce the T-Resilience
algorithm, a new algorithm that allows robots to quickly and autonomously
discover compensatory behaviors in unanticipated situations. This algorithm
equips the robot with a self-model and discovers new behaviors by learning to
avoid those that perform differently in the self-model and in reality. Our
algorithm thus does not identify the damaged parts but implicitly searches
for efficient behaviors that do not use them. We evaluate the T-Resilience
algorithm on a hexapod robot that needs to adapt to leg removal, broken legs
and motor failures; we compare it to stochastic local search, policy gradient
and the self-modeling algorithm proposed by Bongard et al. The behavior of the
robot is assessed on-board using an RGB-D sensor and a SLAM algorithm. With
only 25 tests on the robot and an overall running time of 20 minutes,
T-Resilience consistently leads to substantially better results than the other
approaches.
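To make the core loop concrete, here is a minimal sketch of the idea under heavy simplification: the behavior space, self-model, and on-robot evaluation below are hypothetical toy functions, and a single scalarized search with a Gaussian-process discrepancy estimate stands in for the paper's multi-objective evolutionary search and transferability function. It illustrates the principle, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def self_model_perf(b):
    """Predicted performance (e.g., forward displacement) in the self-model."""
    return -np.sum((b - 0.7) ** 2)

def real_robot_perf(b):
    """Real performance; diverges from the self-model where damage matters."""
    penalty = 2.0 if b[0] > 0.5 else 0.0  # e.g., a broken leg: gaits loading it fail
    return self_model_perf(b) - penalty

tested, discrepancies = [], []
transfer_gp = GaussianProcessRegressor()
best_b, best_real = None, -np.inf

for _ in range(25):  # only 25 tests on the physical robot, as in the paper
    # Generate candidate behaviors and score them in the self-model
    # (a random sample stands in for the paper's evolutionary search).
    cands = rng.uniform(0.0, 1.0, size=(500, 2))
    sim_scores = np.array([self_model_perf(b) for b in cands])
    # Predict the sim-to-reality discrepancy learned from past on-robot tests.
    pred_disc = transfer_gp.predict(cands) if tested else np.zeros(len(cands))
    # Prefer behaviors that are good in simulation AND expected to transfer,
    # i.e., avoid behaviors that perform differently in self-model and reality.
    b = cands[np.argmax(sim_scores - pred_disc)]

    real = real_robot_perf(b)            # one test on the physical robot
    tested.append(b)
    discrepancies.append(abs(self_model_perf(b) - real))
    transfer_gp.fit(np.array(tested), np.array(discrepancies))
    if real > best_real:
        best_b, best_real = b, real

print("best observed on-robot performance:", best_real)
```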
From lumped to distributed via semi-distributed: Calibration strategies for semi-distributed hydrologic models
Modeling the effect of spatial variability of precipitation and basin characteristics on streamflow requires the use of distributed or semi-distributed hydrologic models. This paper addresses a DMIP 2 study that focuses on the advantages of using a semi-distributed modeling structure. We first present a revised semi-distributed structure of the NWS SACramento Soil Moisture Accounting (SAC-SMA) model that separates the routing of fast and slow response runoff components, and thus explicitly accounts for the differences between the two components. We then test four calibration strategies that take advantage of the strengths of existing optimization algorithms (SCE-UA) and schemes (MACS). These strategies include: (1) lumped parameters and basin-averaged precipitation, (2) semi-lumped parameters and distributed precipitation forcing, (3) semi-distributed parameters and distributed precipitation forcing, and (4) lumped parameters and basin-averaged precipitation, modified using a priori parameters of the SAC-SMA model. Finally, we explore the value of using discharge observations at interior points in model calibration by assessing gains/losses in hydrograph simulations at the basin outlet. Our investigation focuses on two key DMIP 2 science questions: (a) the ability of the semi-distributed model structure to improve streamflow simulations at the basin outlet, and (b) its ability to provide reasonably good simulations at interior points. The semi-distributed model is calibrated for the Illinois River Basin at Siloam Springs, Arkansas, using streamflow observations at the basin outlet only. The results indicate that the lumped-to-distributed calibration strategies (1 and 4) both improve simulation at the outlet and provide meaningful streamflow predictions at interior points. In addition, the results of the complementary study, which uses interior points during model calibration, suggest that model performance at the outlet can be further improved by using a semi-distributed structure calibrated at both interior points and the outlet, even when only a few years of historical record are available. © 2009 Elsevier B.V.
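As a concrete illustration of the semi-lumped idea (one shared parameter set calibrated against an outlet hydrograph built from distributed precipitation forcing, in the spirit of strategy 2), the sketch below uses a toy linear-reservoir model in place of SAC-SMA and SciPy's differential evolution in place of SCE-UA; the forcing data and parameter bounds are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
n_sub, n_days = 3, 365
# Distributed forcing: a different daily precipitation series per sub-basin.
precip = rng.gamma(shape=0.5, scale=8.0, size=(n_sub, n_days))  # mm/day

def simulate(params, p):
    """Toy linear-reservoir runoff for one sub-basin (stand-in for SAC-SMA)."""
    k, frac = params                       # storage constant, runoff fraction
    q, s = np.zeros_like(p), 0.0
    for t in range(len(p)):
        s += frac * p[t]
        q[t] = s / k
        s -= q[t]
    return q

# Synthetic "observed" outlet flow generated from known parameters.
q_obs = sum(simulate((12.0, 0.4), precip[i]) for i in range(n_sub))

def rmse_at_outlet(params):
    """Semi-lumped objective: one shared parameter set, distributed forcing."""
    q_sim = sum(simulate(params, precip[i]) for i in range(n_sub))
    return np.sqrt(np.mean((q_sim - q_obs) ** 2))

result = differential_evolution(rmse_at_outlet,
                                bounds=[(1.0, 50.0), (0.05, 0.95)], seed=0)
print("calibrated (k, frac):", result.x, "outlet RMSE:", result.fun)
```

Calibrating at interior points as well would simply add interior-gauge terms to the objective, which is the essence of the complementary study described above.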
Local-Aggregate Modeling for Big-Data via Distributed Optimization: Applications to Neuroimaging
Technological advances have led to a proliferation of structured big data
that have matrix-valued covariates. We are specifically motivated to build
predictive models for multi-subject neuroimaging data based on each subject's
brain imaging scans. This is an ultra-high-dimensional problem that consists of
a matrix of covariates (brain locations by time points) for each subject; few
methods currently exist to fit supervised models directly to this tensor data.
We propose a novel modeling and algorithmic strategy to apply generalized
linear models (GLMs) to this massive tensor data in which one set of variables
is associated with locations. Our method begins by fitting GLMs to each
location separately, and then builds an ensemble by blending information across
locations through regularization with what we term an aggregating penalty. Our
so-called Local-Aggregate Model can be fit in a completely distributed manner
over the locations using an Alternating Direction Method of Multipliers (ADMM)
strategy, and thus greatly reduces the computational burden. Furthermore, we
propose to select the appropriate model through a novel sequence of faster
algorithmic solutions that is similar to regularization paths. We will
demonstrate both the computational and predictive modeling advantages of our
methods via simulations and an EEG classification problem.
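The "fit locally, blend globally" scheme can be sketched with a standard consensus-ADMM loop: each location solves its own regularized least-squares problem in parallel, and a global step blends coefficients across locations. This is a simplified stand-in for the paper's aggregating penalty (which smooths across locations rather than forcing full consensus), using hypothetical synthetic data; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, L = 100, 10, 20                      # subjects, features per location, locations
X = rng.standard_normal((L, n, p))         # one covariate block per location
beta_true = rng.standard_normal(p)
y = X.mean(axis=0) @ beta_true + 0.1 * rng.standard_normal(n)

rho = 1.0                                  # ADMM penalty parameter
B = np.zeros((L, p))                       # local (per-location) coefficients
U = np.zeros((L, p))                       # scaled dual variables
z = np.zeros(p)                            # aggregated (consensus) coefficients

for _ in range(100):
    # Local step: each location's ridge-like solve; fully parallelizable.
    for l in range(L):
        A = X[l].T @ X[l] + rho * np.eye(p)
        B[l] = np.linalg.solve(A, X[l].T @ y + rho * (z - U[l]))
    # Global step: blend information across locations.
    z = (B + U).mean(axis=0)
    # Dual update: push local fits toward the blended solution.
    U += B - z

print("consensus coefficient error:", np.linalg.norm(z - beta_true))
```

Because the local step touches only one location's data at a time, the loop over locations can be farmed out to separate workers, which is where the computational savings come from.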
Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together
Neural networks equipped with self-attention have parallelizable computation,
light-weight structure, and the ability to capture both long-range and local
dependencies. Further, their expressive power and performance can be boosted by
using a vector to measure pairwise dependency, but this requires expanding
alignment matrix to a tensor, which results in memory and computation
bottlenecks. In this paper, we propose a novel attention mechanism called
"Multi-mask Tensorized Self-Attention" (MTSA), which is as fast and as
memory-efficient as a CNN, but significantly outperforms previous
CNN-/RNN-/attention-based models. MTSA 1) captures both pairwise (token2token)
and global (source2token) dependencies by a novel compatibility function
composed of dot-product and additive attentions, 2) uses a tensor to represent
the feature-wise alignment scores for better expressive power but only requires
parallelizable matrix multiplications, and 3) combines multi-head with
multi-dimensional attentions, and applies a distinct positional mask to each
head (subspace), so the memory and computation can be distributed to multiple
heads, each with sequential information encoded independently. The experiments
show that a CNN/RNN-free model based on MTSA achieves state-of-the-art or
competitive performance on nine NLP benchmarks with compelling memory- and
time-efficiency.
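The memory claim (feature-wise alignment scores without materializing an n x n x d tensor) can be illustrated with a short NumPy sketch. The weights and the exact form of the compatibility function below are hypothetical simplifications of MTSA, chosen only to show how the exponential of a pairwise-plus-feature-wise score factors into plain matrix products.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 8, 16                                  # sequence length, model dimension

x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d ** -0.5 for _ in range(3))
Ws = rng.standard_normal((d, d)) * d ** -0.5  # source2token (additive) weights

q, k, v = x @ Wq, x @ Wk, x @ Wv

# token2token (pairwise) scores: one scalar per (query, key) pair, shape (n, n).
pairwise = (q @ k.T) / np.sqrt(d)
# source2token (global) scores: one score per key per feature, shape (n, d).
glob = np.tanh(k @ Ws)

# The combined score for (query i, key j, feature f) is pairwise[i, j] + glob[j, f].
# Since exp(a + b) = exp(a) * exp(b), the softmax-normalized weighted sum factors,
# so the (n, n, d) score tensor is never materialized: two matmuls suffice.
E = np.exp(pairwise - pairwise.max(axis=1, keepdims=True))  # (n, n), stabilized
F = np.exp(glob)                                            # (n, d), tanh-bounded
out = (E @ (F * v)) / (E @ F)                               # (n, d) attention output
print(out.shape)                                            # -> (8, 16)
```

Multi-head MTSA would repeat this per head with a distinct positional mask added to the pairwise term; the factorization above is what keeps each head at matrix-multiplication cost.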