
    Density results for Sobolev, Besov and Triebel--Lizorkin spaces on rough sets

    We investigate two density questions for Sobolev, Besov and Triebel--Lizorkin spaces on rough sets. Our main results, stated in the simplest Sobolev space setting, are that: (i) for an open set $\Omega\subset\mathbb R^n$, $\mathcal{D}(\Omega)$ is dense in $\{u\in H^s(\mathbb R^n):{\rm supp}\,u\subset\overline{\Omega}\}$ whenever $\partial\Omega$ has zero Lebesgue measure and $\Omega$ is "thick" (in the sense of Triebel); and (ii) for a $d$-set $\Gamma\subset\mathbb R^n$ ($0<d<n$), $\{u\in H^{s_1}(\mathbb R^n):{\rm supp}\,u\subset\Gamma\}$ is dense in $\{u\in H^{s_2}(\mathbb R^n):{\rm supp}\,u\subset\Gamma\}$ whenever $-\frac{n-d}{2}-m-1<s_2\leq s_1<-\frac{n-d}{2}-m$ for some $m\in\mathbb N_0$. For (ii), we provide concrete examples, for any $m\in\mathbb N_0$, where density fails when $s_1$ and $s_2$ are on opposite sides of $-\frac{n-d}{2}-m$. The results (i) and (ii) are related in a number of ways, including via their connection to the question of whether $\{u\in H^s(\mathbb R^n):{\rm supp}\,u\subset\Gamma\}=\{0\}$ for a given closed set $\Gamma\subset\mathbb R^n$ and $s\in\mathbb R$. They also both arise naturally in the study of boundary integral equation formulations of acoustic wave scattering by fractal screens. We additionally provide analogous results in the more general setting of Besov and Triebel--Lizorkin spaces. Comment: 38 pages, 6 figures
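    As a purely illustrative instantiation of the density window in (ii) (the parameter values here are our own choice, not taken from the paper), take a 1-set in the plane and $m=0$:

        n = 2,\quad d = 1,\quad m = 0:\qquad
        -\tfrac{n-d}{2}-m-1 \;=\; -\tfrac{3}{2} \;<\; s_2 \;\le\; s_1 \;<\; -\tfrac{1}{2} \;=\; -\tfrac{n-d}{2}-m,

    so density holds for any pair of exponents in $(-\tfrac{3}{2},-\tfrac{1}{2})$, while the paper's counterexamples concern pairs lying on opposite sides of the threshold $-\tfrac{1}{2}$.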

    Data Reduction with Rough Sets


    Data-driven model reduction and transfer operator approximation

    In this review paper, we present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory, as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities, such as time-lagged independent component analysis (TICA), dynamic mode decomposition (DMD), and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
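    As a minimal illustration of one method in this family (our own sketch, not the authors' implementation), the following computes an exact DMD approximation from snapshot pairs; the array names, truncation rank and synthetic test system are assumptions made for the example.

        import numpy as np

        def dmd(X, Y, rank):
            """Exact dynamic mode decomposition (DMD) sketch.
            X, Y: (n_features, n_snapshots) arrays, Y[:, k] is the state one
            step after X[:, k]; rank is the SVD truncation rank.
            Returns approximate eigenvalues and modes of A with Y ~ A X."""
            U, s, Vh = np.linalg.svd(X, full_matrices=False)
            U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
            A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
            eigvals, W = np.linalg.eig(A_tilde)
            modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # lift modes back
            return eigvals, modes

        # Synthetic check: snapshot pairs of a linear map x_{k+1} = A x_k plus noise
        rng = np.random.default_rng(0)
        A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
        X = rng.standard_normal((2, 200))
        Y = A_true @ X + 0.01 * rng.standard_normal((2, 200))
        print(dmd(X, Y, rank=2)[0])   # eigenvalues close to 0.9 and 0.8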

    Self-organization and clustering algorithms

    Kohonen's feature maps approach to clustering is often likened to the k-means or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-means (HCM/FCM), or ISODATA, algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
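    To make the comparison concrete, here is a minimal sketch (our own illustration, not code from the paper) contrasting one online Kohonen update with one batch hard c-means step; the learning rate and neighbourhood width are assumed values.

        import numpy as np

        def som_update(weights, x, lr=0.1, sigma=1.0):
            """One online update of a 1-D Kohonen map: pull the best-matching
            unit and its grid neighbours towards the input x."""
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            grid = np.arange(len(weights))
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))   # neighbourhood function
            return weights + lr * h[:, None] * (x - weights)

        def hcm_step(centroids, X):
            """One batch hard c-means (k-means) step: nearest-centroid assignment,
            then recompute each centroid as its cluster mean."""
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            return np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                             else centroids[k] for k in range(len(centroids))])

    The SOM step updates a whole neighbourhood of prototypes per sample, whereas the hard c-means step updates each centroid only from the points currently assigned to it; this is one of the differences the comparison above turns on.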

    Quantification of Perception Clusters Using R-Fuzzy Sets and Grey Analysis

    This paper investigates the R-fuzzy significance measure hybrid approach introduced by the authors in previous work, used in conjunction with grey analysis to allow for further inferencing and to provide a higher degree of accuracy and understanding. As a single observation can have a multitude of different perspectives, choosing a single fuzzy value as a representative becomes problematic. The fundamental concept of an R-fuzzy set is that it allows the collective perception of a populace, as well as individualised perspectives, to be encapsulated within its membership set. The introduction of the significance measure allowed for the quantification of any membership value contained within any generated R-fuzzy set. The pairing of the significance measure with the R-fuzzy concept replicates, in part, the higher order of complex uncertainty that can be captured with a type-2 fuzzy approach, while retaining the computational ease and objectiveness of a typical type-1 fuzzy set. This paper employs grey analysis, in particular the absolute degree of grey incidence, to inspect the sequences generated by the significance measure when quantifying the degree of significance for each contained fuzzy membership value. The absolute degree of grey incidence provides a means to measure the metric distance between sequences. As the worked example shows, if the data contains perceptions from clusters of cohorts, these clusters can be compared and contrasted to allow for a more detailed understanding of the abstract concepts being modelled.
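    As a point of reference, here is a minimal sketch of the textbook absolute degree of grey incidence between two equal-length sequences (standard zero-starting-point formulation; this is our own illustration, not the authors' code, and the sample sequences are invented):

        import numpy as np

        def absolute_grey_incidence(x0, xi):
            """Absolute degree of grey incidence between two sequences of equal
            length, computed from their zero-starting-point images."""
            x0 = np.asarray(x0, dtype=float)
            xi = np.asarray(xi, dtype=float)
            x0, xi = x0 - x0[0], xi - xi[0]                # zero-starting-point images
            s0 = x0[1:-1].sum() + 0.5 * x0[-1]
            si = xi[1:-1].sum() + 0.5 * xi[-1]
            return (1 + abs(s0) + abs(si)) / (1 + abs(s0) + abs(si) + abs(si - s0))

        # Sequences with similar shapes score close to 1
        print(absolute_grey_incidence([10, 12, 15, 19], [10, 11, 14, 18]))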

    Calibration by correlation using metric embedding from non-metric similarities

    This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-view point camera just by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time correlation of the luminance signal for a subset of the pixels. We show that, if the camera undergoes random uniform motion, then the pairwise correlation of any pair of pixels is a function of the distance between the pixel directions on the visual sphere. This leads to formalizing calibration as a problem of metric embedding from non-metric measurements: we want to find the disposition of pixels on the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?) and a solid generic solution (how to do so?). We show that the observability depends both on the local geometric properties (curvature) and on the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case, on the sphere we can recover the scale of the point distribution, therefore obtaining a metrically accurate solution from non-metric measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional), obtaining results comparable to calibration with classical methods. Additional synthetic benchmarks show that the algorithm performs as theoretically predicted for all corner cases of the observability analysis.
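    For orientation, here is a minimal classical (metric) MDS sketch, the Euclidean baseline that the paper generalizes to non-metric measurements on the sphere; this is our own illustration and assumes the pairwise distances are known exactly.

        import numpy as np

        def classical_mds(D, dim=2):
            """Classical multidimensional scaling: recover a dim-dimensional
            configuration from a matrix D of pairwise Euclidean distances."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n            # centring matrix
            B = -0.5 * J @ (D ** 2) @ J                    # double-centred Gram matrix
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:dim]                # keep the largest eigenvalues
            return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

        # Usage: recover a planar configuration up to rotation and translation
        pts = np.random.default_rng(1).standard_normal((50, 2))
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        emb = classical_mds(D)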

    Discrete Geometric Structures in Homogenization and Inverse Homogenization with application to EIT

    We introduce a new geometric approach for the homogenization and inverse homogenization of the divergence form elliptic operator with rough conductivity coefficients $\sigma(x)$ in dimension two. We show that conductivity coefficients are in one-to-one correspondence with divergence-free matrices and convex functions $s(x)$ over the domain $\Omega$. Although homogenization is a non-linear and non-injective operator when applied directly to conductivity coefficients, homogenization becomes a linear interpolation operator over triangulations of $\Omega$ when re-expressed using convex functions, and is a volume averaging operator when re-expressed with divergence-free matrices. Using optimal weighted Delaunay triangulations for linearly interpolating convex functions, we obtain an optimally robust homogenization algorithm for arbitrary rough coefficients. Next, we consider inverse homogenization and show how to decompose it into a linear ill-posed problem and a well-posed non-linear problem. We apply this new geometric approach to Electrical Impedance Tomography (EIT). It is known that the EIT problem admits at most one isotropic solution. If an isotropic solution exists, we show how to compute it from any conductivity having the same boundary Dirichlet-to-Neumann map. It is known that the EIT problem admits a unique (stable with respect to $G$-convergence) solution in the space of divergence-free matrices. As such, we suggest that the space of convex functions is the natural space in which to parameterize solutions of the EIT problem.
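    A minimal sketch of the elementary ingredient referred to above, piecewise-linear interpolation of a convex function over a Delaunay triangulation (here an ordinary, unweighted triangulation of synthetic sample points via SciPy, not the paper's optimal weighted construction):

        import numpy as np
        from scipy.spatial import Delaunay
        from scipy.interpolate import LinearNDInterpolator

        # Sample a convex function s(x) = |x|^2 / 2 at scattered points in the square
        rng = np.random.default_rng(0)
        pts = rng.uniform(-1, 1, size=(200, 2))
        vals = 0.5 * np.sum(pts ** 2, axis=1)

        tri = Delaunay(pts)                        # (unweighted) Delaunay triangulation
        interp = LinearNDInterpolator(tri, vals)   # piecewise-linear interpolant over it

        # Evaluate the interpolant on a coarse grid of query points
        xq, yq = np.meshgrid(np.linspace(-0.9, 0.9, 5), np.linspace(-0.9, 0.9, 5))
        print(interp(xq, yq))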
