Density results for Sobolev, Besov and Triebel--Lizorkin spaces on rough sets
We investigate two density questions for Sobolev, Besov and Triebel--Lizorkin
spaces on rough sets. Our main results, stated in the simplest Sobolev space
setting, are that: (i) for an open set $\Omega\subset\mathbb{R}^n$, $\mathscr{D}(\Omega)$
is dense in $\{u\in H^s(\mathbb{R}^n):\operatorname{supp} u\subset\overline{\Omega}\}$ whenever $\partial\Omega$ has zero Lebesgue
measure and $\Omega$ is "thick" (in the sense of Triebel); and (ii) for a
$d$-set $\Gamma\subset\mathbb{R}^n$ ($0<d<n$), $\{u\in H^{s_1}(\mathbb{R}^n):\operatorname{supp} u\subset\Gamma\}$ is dense in
$\{u\in H^{s_2}(\mathbb{R}^n):\operatorname{supp} u\subset\Gamma\}$ whenever $-\frac{n-d}{2}-m-1<s_2\le s_1<-\frac{n-d}{2}-m$
for some $m\in\mathbb{N}_0$. For (ii), we provide
concrete examples, for any $m\in\mathbb{N}_0$, where density fails when $s_1$
and $s_2$ are on opposite sides of $-\frac{n-d}{2}-m$. The results (i) and (ii)
are related in a number of ways, including via their connection to the question
of whether $\{u\in H^s(\mathbb{R}^n):\operatorname{supp} u\subset\Gamma\}=\{0\}$ for a
given closed set $\Gamma\subset\mathbb{R}^n$ and $s\in\mathbb{R}$. They also
both arise naturally in the study of boundary integral equation formulations of
acoustic wave scattering by fractal screens. We additionally provide analogous
results in the more general setting of Besov and Triebel--Lizorkin spaces.

Comment: 38 pages, 6 figures
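For orientation, result (ii) instantiated at $m=0$ reads as follows (a restatement in the notation above, ours rather than the authors'):

```latex
% Result (ii) at m = 0: on a d-set \Gamma \subset \mathbb{R}^n with 0 < d < n,
% the density window for the pair (s_1, s_2) is
\[
  -\frac{n-d}{2} - 1 \;<\; s_2 \;\le\; s_1 \;<\; -\frac{n-d}{2},
\]
% and density can fail once s_1 and s_2 straddle the endpoint -(n-d)/2.
```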
Data-driven model reduction and transfer operator approximation
In this review paper, we will present different data-driven dimension
reduction techniques for dynamical systems that are based on transfer operator
theory as well as methods to approximate transfer operators and their
eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out
similarities and differences between methods developed independently by the
dynamical systems, fluid dynamics, and molecular dynamics communities such as
time-lagged independent component analysis (TICA), dynamic mode decomposition
(DMD), and their respective generalizations. As a result, extensions and best
practices developed for one particular method can be carried over to other
related methods.
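Since the review centres on approximating transfer operators and their spectra, a concrete anchor may help: below is a minimal exact-DMD sketch (our illustration, not code from the paper; the function name, the rank-truncation parameter, and the toy system are ours).

```python
import numpy as np

def dmd(X, Y, rank=None):
    """Minimal exact DMD: fit a linear map A with Y ~ A @ X and return its
    eigenvalues and modes. X and Y are snapshot matrices whose columns are
    the state at consecutive time steps."""
    # Truncated SVD of the input snapshots.
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Project A = Y X^+ onto the leading left singular vectors.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes (Tu et al. 2014 convention).
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy usage: snapshots of a 2D linear system with a rotational component.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
Z = np.empty((2, 51)); Z[:, 0] = rng.normal(size=2)
for k in range(50):
    Z[:, k + 1] = A_true @ Z[:, k]
eigvals, modes = dmd(Z[:, :-1], Z[:, 1:])
print(np.sort_complex(eigvals))  # should match the eigenvalues of A_true
```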
Self-organization and clustering algorithms
Kohonen's feature-map approach to clustering is often likened to the k-means or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but that at the same time there may be important, as yet unknown, relationships between the two methodologies. Several avenues of research are proposed.
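To make the comparison concrete, here is a minimal sketch of the two update rules side by side (our illustration, not from the paper; `bmu_neighborhood` is an assumed user-supplied kernel, e.g. a Gaussian over grid distances on the map).

```python
import numpy as np

def online_som_step(weights, x, bmu_neighborhood, lr):
    """One online SOM (Kohonen) update: pull every unit toward the sample x,
    weighted by its topological closeness to the best-matching unit."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    h = bmu_neighborhood(bmu)            # shape (n_units,), values in [0, 1]
    return weights + lr * h[:, None] * (x - weights)

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update: u_ik is proportional to
    ||x_k - c_i||^(-2/(m-1)), normalised over the c centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)             # guard against division by zero
    u = d ** (-2.0 / (m - 1.0))
    return u / u.sum(axis=1, keepdims=True)
```

The structural contrast the abstract discusses is visible here: the SOM step couples units through the map topology (the neighborhood kernel), whereas FCM couples memberships only through distances in feature space.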
Quantification of Perception Clusters Using R-Fuzzy Sets and Grey Analysis
This paper investigates the use of the R-fuzzy significance measure hybrid approach introduced by the authors in a previous work, used in conjunction with grey analysis to allow for further inferencing, providing a higher degree of accuracy and understanding. As a single observation can have a multitude of different perspectives, choosing a single fuzzy value as a representative becomes problematic. The fundamental concept of an R-fuzzy set is that it allows both the collective perception of a populace and individualised perspectives to be encapsulated within its membership set. The introduction of the significance measure allowed for the quantification of any membership value contained within any generated R-fuzzy set. The pairing of the significance measure with the R-fuzzy concept replicates, in part, the higher order of complex uncertainty that can be captured using a type-2 fuzzy approach, with the computational ease and objectiveness of a typical type-1 fuzzy set. This paper employs grey analysis, in particular the absolute degree of grey incidence, to inspect the sequences generated by the significance measure when quantifying the degree of significance for each contained fuzzy membership value. The absolute degree of grey incidence provides a means to measure the closeness between sequences. As the worked example shows, if the data contains perceptions from clusters of cohorts, these clusters can be compared and contrasted to allow for a more detailed understanding of the abstract concepts being modelled.
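As a rough illustration of the sequence-comparison step, the sketch below computes the absolute degree of grey incidence under the standard Liu--Lin formulation (an assumption on our part; the paper may use a variant, and the function name and toy data are ours).

```python
import numpy as np

def absolute_grey_incidence(x, y):
    """Absolute degree of grey incidence between two equal-length sequences,
    assuming the standard Liu-Lin formulation. Returns a value in (0, 1],
    with values near 1 indicating sequences of very similar shape."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    def s(seq):
        z = seq - seq[0]                      # zero-starting-point image
        return z[1:-1].sum() + 0.5 * z[-1]    # signed area under the image
    sx, sy, sxy = abs(s(x)), abs(s(y)), abs(s(x - y))
    return (1 + sx + sy) / (1 + sx + sy + sxy)

# Toy usage: two cohort sequences with similar shapes score close to 1.
print(absolute_grey_incidence([2, 3, 4, 5], [2.1, 3.1, 4.2, 5.1]))
```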
Calibration by correlation using metric embedding from non-metric similarities
This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-viewpoint camera just
by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time
correlation of the luminance signal for a subset of the pixels. We show that, if the camera undergoes a random uniform motion, then
the pairwise correlation of any pixel pair is a function of the distance between the pixel directions on the visual sphere. This leads to
formalizing calibration as a problem of metric embedding from non-metric measurements: we want to find the disposition of pixels on
the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional
scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?)
and a solid generic solution (how to do so?). We show that the observability depends both on the local geometric properties (curvature)
and on the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case,
on the sphere we can recover the scale of the point distribution, thereby obtaining a metrically accurate solution from non-metric
measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric
information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional),
and we obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm
performs as theoretically predicted for all corner cases of the observability analysis.
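For readers unfamiliar with the baseline the paper generalizes, here is a minimal classical (Torgerson) MDS sketch for the Euclidean case (our illustration only; the paper's method instead embeds points on the sphere from similarities that are an unknown function of distance).

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: recover a point configuration, up to a
    rigid motion, from a matrix D of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` directions
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy usage: recover a planar configuration up to rotation/translation.
P = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
D = np.linalg.norm(P[:, None] - P[None, :], axis=2)
print(classical_mds(D))
```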
Discrete Geometric Structures in Homogenization and Inverse Homogenization with application to EIT
We introduce a new geometric approach for the homogenization and inverse
homogenization of the divergence form elliptic operator with rough conductivity
coefficients in dimension two. We show that conductivity
coefficients are in one-to-one correspondence with divergence-free matrices and
convex functions over the domain $\Omega$. Although homogenization is a
non-linear and non-injective operator when applied directly to conductivity
coefficients, homogenization becomes a linear interpolation operator over
triangulations of $\Omega$ when re-expressed using convex functions, and is a
volume averaging operator when re-expressed with divergence-free matrices.
Using optimal weighted Delaunay triangulations for linearly interpolating
convex functions, we obtain an optimally robust homogenization algorithm for
arbitrary rough coefficients. Next, we consider inverse homogenization and show
how to decompose it into a linear ill-posed problem and a well-posed non-linear
problem. We apply this new geometric approach to Electrical Impedance
Tomography (EIT). It is known that the EIT problem admits at most one isotropic
solution. If an isotropic solution exists, we show how to compute it from any
conductivity having the same boundary Dirichlet-to-Neumann map. It is known
that the EIT problem admits a unique (stable with respect to $G$-convergence)
solution in the space of divergence-free matrices. As such, we suggest that the
space of convex functions is the natural space in which to parameterize
solutions of the EIT problem.
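As a rough illustration of the interpolation building block the abstract describes, the sketch below performs piecewise-linear interpolation of a convex function over an (unweighted) Delaunay triangulation with SciPy; the authors' algorithm uses optimal weighted Delaunay triangulations, which SciPy does not provide, so this is only the unweighted analogue (the sample points and the potential are ours).

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Piecewise-linear (P1) interpolation of a convex function s(x) over a
# Delaunay triangulation of scattered sample points in the plane.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))          # sample points in the domain
s = lambda p: (p ** 2).sum(axis=1)               # a convex potential, s(x) = |x|^2
tri = Delaunay(pts)                              # unweighted Delaunay triangulation
interp = LinearNDInterpolator(tri, s(pts))       # piecewise-linear interpolant

q = np.array([[0.1, -0.3], [0.5, 0.5]])
print(interp(q))        # P1 interpolation of s at the query points
print(s(q))             # exact values, for comparison
```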