Institute for Computational Mechanics in Propulsion (ICOMP)
The Institute for Computational Mechanics in Propulsion (ICOMP) is a combined activity of Case Western Reserve University, Ohio Aerospace Institute (OAI) and NASA Lewis. The purpose of ICOMP is to develop techniques to improve problem solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1991 are described
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
We present two graph-based algorithms for multiclass segmentation of
high-dimensional data. The algorithms use a diffuse interface model based on
the Ginzburg-Landau functional, related to total variation compressed sensing
and image processing. A multiclass extension is introduced using the Gibbs
simplex, with the functional's double-well potential modified to handle the
multiclass case. The first algorithm minimizes the functional using a convex
splitting numerical scheme. The second uses a graph adaptation of the
classical Merriman-Bence-Osher (MBO) numerical scheme, which alternates
between diffusion and thresholding. We demonstrate the performance of both
algorithms experimentally on synthetic data, grayscale and color images, and
several benchmark data sets such as MNIST, COIL and WebKB. We also make use of
fast numerical solvers for finding the eigenvectors and eigenvalues of the
graph Laplacian, and take advantage of the sparsity of the matrix. Experiments
indicate that the results are competitive with or better than the current
state-of-the-art multiclass segmentation algorithms. Comment: 14 pages.
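The diffusion-thresholding alternation described above can be sketched on a small graph. The following is a minimal, illustrative Python sketch of a graph MBO iteration, not the paper's implementation: it uses a dense eigendecomposition of the unnormalized graph Laplacian (whereas the paper relies on fast eigensolvers and matrix sparsity), a spectrally truncated heat semigroup for the diffusion step, and a hard threshold onto the corners of the Gibbs simplex. The function name `graph_mbo` and all parameter defaults are hypothetical.

```python
import numpy as np

def graph_mbo(W, labels, known, n_classes, dt=0.1, n_eig=20, n_iter=30):
    """MBO-style multiclass segmentation sketch on a weighted graph W:
    alternate diffusion (heat semigroup on the low Laplacian modes)
    with thresholding onto a corner of the Gibbs simplex."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)           # dense solve; fine for small graphs
    vals, vecs = vals[:n_eig], vecs[:, :n_eig]

    n = W.shape[0]
    u = np.full((n, n_classes), 1.0 / n_classes)  # start at the simplex center
    u[known] = np.eye(n_classes)[labels[known]]   # clamp supervised points

    decay = np.exp(-vals * dt)               # heat-semigroup factors
    for _ in range(n_iter):
        u = vecs @ (decay[:, None] * (vecs.T @ u))    # diffusion step
        u = np.eye(n_classes)[u.argmax(axis=1)]       # threshold to simplex corner
        u[known] = np.eye(n_classes)[labels[known]]   # re-impose known labels
    return u.argmax(axis=1)
```

On a toy graph made of two cliques joined by one weak edge, a single labeled vertex per clique is enough for the scheme to recover the two clusters.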
Distributed Adaptive Learning of Graph Signals
The aim of this paper is to propose distributed strategies for adaptive
learning of signals defined over graphs. Assuming the graph signal to be
bandlimited, the method enables distributed reconstruction, with guaranteed
performance in terms of mean-square error, and tracking from a limited number
of sampled observations taken from a subset of vertices. A detailed mean square
analysis is carried out and illustrates the role played by the sampling
strategy on the performance of the proposed method. Finally, some useful
strategies for distributed selection of the sampling set are provided. Several
numerical results validate our theoretical findings, and illustrate the
performance of the proposed method for distributed adaptive learning of signals
defined over graphs. Comment: To appear in IEEE Transactions on Signal Processing.
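As an illustration of the reconstruction idea, here is a centralized (non-distributed) LMS-style sketch in Python under the bandlimited assumption: each update projects the innovation observed on the sampled vertices onto the subspace spanned by the first few Laplacian eigenvectors. The function name `graph_lms` and the step size are hypothetical, and the distributed aspects of the paper's strategies are omitted.

```python
import numpy as np

def graph_lms(L, y_stream, sample_idx, bandwidth, mu=0.5):
    """LMS-style adaptive reconstruction of a bandlimited graph signal
    from streaming observations on a subset of vertices."""
    _, U = np.linalg.eigh(L)
    Uf = U[:, :bandwidth]                  # low-frequency Fourier basis
    B = Uf @ Uf.T                          # bandlimiting projector
    D = np.zeros(L.shape[0])
    D[sample_idx] = 1.0                    # vertex-sampling mask
    x = np.zeros(L.shape[0])
    for y in y_stream:                     # stream of (possibly noisy) samples
        x = x + mu * (B @ (D * (y - x)))   # project innovation, then step
    return x
```

With a sampling set rich enough for the chosen bandwidth, the iteration converges to the true signal even though only a subset of vertices is ever observed.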
The pseudo-self-similar traffic model: application and validation
Since the early 1990s, a variety of studies has shown that network traffic, both for local- and wide-area networks, has self-similar properties. This led to new approaches in network traffic modelling, because most traditional traffic approaches result in the underestimation of performance measures of interest. Instead of developing completely new traffic models, a number of researchers have proposed to adapt traditional traffic modelling approaches to incorporate aspects of self-similarity, in the hope of reusing the techniques and tools that have been developed in the past and with which experience has been gained. One such approach is the so-called pseudo self-similar traffic model. This model is appealing, as it is easy to understand and easily embedded in Markovian performance evaluation studies. In applying this model in a number of cases, we encountered various problems which we initially thought were particular to those specific cases. However, we have recently been able to show that these problems are fundamental to the pseudo self-similar traffic model. In this paper we review the pseudo self-similar traffic model and discuss its fundamental shortcomings. As far as we know, this is the first paper that discusses these shortcomings formally. We also report on ongoing work to overcome some of these problems.
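The self-similarity that motivates such models is usually quantified by the Hurst parameter H. As background, here is a minimal aggregated-variance estimator of H in Python: for self-similar traffic, the variance of the m-aggregated series decays like m^(2H-2), so H follows from a log-log slope fit. This is a standard diagnostic, not part of the pseudo self-similar traffic model itself, and the function name is hypothetical.

```python
import numpy as np

def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst parameter H:
    fit the slope of log Var(X^(m)) against log m, where X^(m)
    is the series aggregated (block-averaged) at level m."""
    v = []
    for m in scales:
        n = len(x) // m
        agg = x[:n * m].reshape(n, m).mean(axis=1)  # block means at level m
        v.append(agg.var())
    slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
    return 1.0 + slope / 2.0                 # slope = 2H - 2
```

For uncorrelated traffic the estimate is close to H = 0.5; long-range-dependent traffic yields H between 0.5 and 1, which is what traditional Markovian models fail to capture.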
Random field sampling for a simplified model of melt-blowing considering turbulent velocity fluctuations
In melt-blowing, very thin liquid fiber jets are spun by high-velocity air
streams. In the literature there is a clear, unresolved discrepancy between the
measured and computed jet attenuation. In this paper we verify numerically
that the turbulent velocity fluctuations, which cause a random aerodynamic drag
on the fiber jets and have been neglected so far, are the crucial effect needed
to close this gap. For this purpose, we model the velocity fluctuations as
vector Gaussian random fields on top of a k-epsilon turbulence description and
develop an efficient sampling procedure. Taking advantage of the special
covariance structure, the effort of the sampling is linear in the
discretization, which makes the realization possible.
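Linear-cost sampling of this kind can be illustrated in a strongly simplified setting: a scalar, one-dimensional stationary Gaussian field with exponential covariance, whose Markov structure reduces sampling to an AR(1) recursion with O(n) cost, versus the O(n^3) of a dense Cholesky factorization. This is a sketch under those assumptions, not the paper's vector-field procedure; the function name and parameters are hypothetical.

```python
import numpy as np

def sample_ou_field(n, dx, corr_len, sigma=1.0, rng=None):
    """Draw n equidistant samples of a stationary Gaussian field with
    covariance sigma^2 * exp(-|s| / corr_len). The Markov property of
    this covariance turns exact sampling into an AR(1) recursion."""
    if rng is None:
        rng = np.random.default_rng()
    rho = np.exp(-dx / corr_len)             # one-step correlation
    u = np.empty(n)
    u[0] = sigma * rng.standard_normal()     # stationary initial draw
    for i in range(1, n):
        # conditional distribution given the previous point only
        u[i] = rho * u[i - 1] + sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return u
```

Each sample costs one Gaussian draw and a multiply-add, so the total effort is linear in the number of discretization points.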
Space-time adaptive resolution for reactive flows
Multi-scale systems evolve over a wide range of temporal and spatial scales. The extent of the time scales makes both theoretical and numerical analysis difficult, mostly because the time scales of interest are typically much slower than the fastest scales occurring in the system. Systems with such characteristics are usually classified as stiff. An adaptive mesh refinement method based on the wavelet transform and the G-Scheme framework are used to achieve spatially and temporally adaptive model reduction, respectively, of physical problems described by PDEs. The combination of the two methods is proposed to solve PDEs describing reaction-diffusion systems with the minimal number of degrees of freedom, for prescribed accuracies in space and time. Different reaction-diffusion systems are studied with the aim of testing the performance and the capability of the combined scheme to generate accurate solutions relative to reference ones. Several strategies are implemented to improve the performance of the scheme, with minimal loss of accuracy.
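The wavelet-based spatial adaptation can be illustrated with one level of a Haar transform: detail coefficients above a tolerance mark cells whose local structure is under-resolved and should be refined, while smooth regions produce negligible details and can be coarsened. A minimal Python sketch under that simplification follows; the function name and tolerance are hypothetical, and the actual method applies multi-level transforms to evolving PDE solution fields.

```python
import numpy as np

def refine_flags(u, tol=1e-3):
    """One level of a Haar wavelet transform on a 1-D field u:
    the detail coefficient of each cell pair measures local
    non-smoothness; large details flag the pair for refinement."""
    even, odd = u[0::2], u[1::2]
    detail = (even - odd) / 2.0        # Haar detail coefficients
    return np.abs(detail) > tol        # True -> refine this cell pair
```

A smooth profile yields no flags, while a sharp front is flagged only at the cell pair that straddles it, which is exactly the locality that keeps the number of degrees of freedom minimal.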