
    On Geometric Quantum Confinement in Grushin-type Manifolds

    We study the problem of so-called geometric quantum confinement in a class of two-dimensional incomplete Riemannian manifolds with metrics of Grushin type. We employ a constant-fibre direct integral scheme, in combination with Weyl's analysis in each fibre, thus fully characterising the regimes of presence and absence of essential self-adjointness of the associated Laplace-Beltrami operator. Comment: 16 pages, 2 figures
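    For orientation, here is a schematic of the standard Grushin setup in the usual one-parameter form; the exact class of metrics treated in the paper may be broader. The metric degenerates along the line x = 0, and Fourier transform in the y-variable turns the Laplace-Beltrami operator into a constant-fibre direct integral:

\[ g_\alpha \;=\; \mathrm{d}x^2 + |x|^{-2\alpha}\,\mathrm{d}y^2 \quad \text{on} \quad M \;=\; (\mathbb{R}_x \setminus \{0\}) \times \mathbb{R}_y, \]
\[ \Delta_{g_\alpha} \;\cong\; \int_{\mathbb{R}}^{\oplus} A_\alpha(\xi)\,\mathrm{d}\xi \quad \text{after Fourier transform in } y, \]

    where each fibre operator A_\alpha(\xi) is a one-dimensional Schrödinger operator on the half-line, and Weyl's limit-point/limit-circle alternative at x = 0 decides its essential self-adjointness fibre by fibre.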

    Set optimization - a rather short introduction

    Recent developments in set optimization are surveyed and extended, including various set relations as well as fundamental constructions of a convex analysis for set- and vector-valued functions, and duality for set optimization problems. Extensive sections with bibliographical comments summarize the state of the art. Applications to vector optimization and financial risk measures are discussed, along with algorithmic approaches to set optimization problems.
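    As a concrete instance of the set relations surveyed here, one standard construction from this literature (Kuroiwa's lower set less relation; the survey's exact definitions may differ) compares sets A and B via a convex ordering cone C:

\[ A \preceq^{l}_{C} B \quad :\Longleftrightarrow\quad B \subseteq A + C, \]

    which collapses to the usual cone order b - a \in C when A and B are singletons.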

    The black hole stability problem for linear scalar perturbations

    We review our recent work on linear stability for scalar perturbations of Kerr spacetimes, that is to say, boundedness and decay properties for solutions of the scalar wave equation \Box_g \psi = 0 on Kerr exterior backgrounds. We begin with the very slowly rotating case |a| \ll M, where first boundedness and then decay have been shown in rapid developments over the last two years, following earlier progress in the Schwarzschild case a = 0. We then turn to the general subextremal range |a| < M, where we give here for the first time the essential elements of a proof of definitive decay bounds for solutions \psi. These developments give hope that the problem of the non-linear stability of the Kerr family of black holes might soon be addressed. This paper accompanies a talk by one of the authors (I.R.) at the 12th Marcel Grossmann Meeting, Paris, June 2009. Comment: 48 pages, 5 figures, to appear in the Proceedings of the 12th Marcel Grossmann Meeting
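    For reference, the equation under study is the covariant wave equation with respect to the Kerr exterior metric g (standard notation, independent of this paper's conventions):

\[ \Box_g \psi \;=\; \frac{1}{\sqrt{-\det g}}\,\partial_\mu\!\left(\sqrt{-\det g}\; g^{\mu\nu}\,\partial_\nu \psi\right) \;=\; 0, \]

    where M is the mass and a the specific angular momentum of the black hole; the subextremal condition |a| < M is what guarantees a non-degenerate event horizon.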

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information, but machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in different Web formats: Some providers choose XML, others RDF, again others JSON or OWL, for their data, even in similar domains. This fracturing stifles innovation as application builders have to cope not only with one Web stack (e.g., XML technology) but with several ones, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: In a single query language XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply for querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented reusing traditional database technology, if desirable. (4) Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear time and space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards a more convenient, yet highly efficient data access in a “Web of Data”.
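    The "interval labelings" mentioned above refer, for the tree-shaped case, to the classic pre/post-order numbering scheme. The following Python sketch (function names and the example tree are illustrative, not Xcerpt's implementation) shows how such labels reduce ancestor-descendant tests, the core of structural XML queries, to constant-time interval containment:

```python
# Minimal sketch: pre/post-order interval labeling of a tree.
# Node u is an ancestor of v iff  pre[u] < pre[v] and post[v] < post[u].

def label_tree(tree, root):
    """Assign (pre, post) numbers by a single DFS over an adjacency dict."""
    pre, post, counter = {}, {}, [0]

    def dfs(node):
        pre[node] = counter[0]; counter[0] += 1
        for child in tree.get(node, []):
            dfs(child)
        post[node] = counter[0]; counter[0] += 1

    dfs(root)
    return pre, post

def is_ancestor(pre, post, u, v):
    """Constant-time structural test using the interval labels."""
    return pre[u] < pre[v] and post[v] < post[u]

# Example: a tiny XML-like tree  <a><b><d/></b><c/></a>
tree = {"a": ["b", "c"], "b": ["d"]}
pre, post = label_tree(tree, "a")
assert is_ancestor(pre, post, "a", "d")      # a is an ancestor of d
assert not is_ancestor(pre, post, "b", "c")  # b is not an ancestor of c
```

    Labeling costs one linear pass; afterwards every structural test is O(1), which is where the linear time and space bound for tree-shaped data comes from.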

    Banach algebras of pseudodifferential operators and their almost diagonalization

    We define new symbol classes for pseudodifferential operators and investigate their pseudodifferential calculus. The symbol classes are parametrized by commutative convolution algebras. To every solid convolution algebra over a lattice we associate a symbol class. Then every operator with such a symbol is almost diagonal with respect to special wave packets (coherent states or Gabor frames), and the rate of almost diagonalization is described precisely by the underlying convolution algebra. Furthermore, the corresponding class of pseudodifferential operators is a Banach algebra of bounded operators on L^2. If a version of Wiener's lemma holds for the underlying convolution algebra, then the algebra of pseudodifferential operators is closed under inversion. The theory contains as a special case the fundamental results about Sjöstrand's class and yields a new proof of a theorem of Beals about the Hörmander class of order 0. Comment: 28 pages
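    Schematically, and in standard time-frequency notation rather than necessarily the paper's own formulation, almost diagonalization is an off-diagonal decay estimate against the time-frequency shifts \pi(z)g(t) = e^{2\pi i \xi t} g(t - x), z = (x, \xi), of a fixed window g:

\[ \bigl| \langle T_\sigma\, \pi(z) g,\; \pi(w) g \rangle \bigr| \;\le\; H(w - z), \qquad z, w \in \mathbb{R}^{2d}, \]

    where T_\sigma is the operator with symbol \sigma and the controlling envelope H lies in the underlying convolution algebra, so that summability properties of H translate directly into boundedness and inversion properties of the operator class.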

    Linear superposition as a core theorem of quantum empiricism

    Clarifying the nature of the quantum state |\Psi\rangle is at the root of the problems with insight into (counterintuitive) quantum postulates. We provide a direct, and math-axiom-free, empirical derivation of this object as an element of a vector space. Establishing the linearity of this structure, quantum superposition, is based on a set-theoretic creation of ensemble formations and invokes the following three principia: (I) quantum statics, (II) the doctrine of a number in the physical theory, and (III) the mathematization of matching two observations with each other (quantum invariance). All of the constructs rest upon a formalization of the minimal experimental entity: the observed micro-event, a detector click. This is sufficient for producing the \mathbb{C}-numbers, the axioms of a linear vector space (superposition principle), statistical mixtures of states, eigenstates and their spectra, and non-commutativity of observables. No use is required of the concept of time. As a result, the foundations of the theory are liberated to a significant extent from the issues associated with physical interpretations, philosophical exegeses, and mathematical reconstruction of the entire quantum edifice. Comment: No figures; 70 pages (substantially improved over earlier 64- and 68-page versions)
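    The endpoint of the derivation is the familiar superposition structure; schematically (standard notation, not the paper's intermediate constructs):

\[ |\Psi\rangle \;=\; c_1 |\Psi_1\rangle + c_2 |\Psi_2\rangle, \qquad c_1, c_2 \in \mathbb{C}, \]

    i.e., admissible states are closed under \mathbb{C}-linear combination, which is precisely the vector-space structure the abstract claims to recover from detector clicks alone.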

    PSPACE Bounds for Rank-1 Modal Logics

    For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation, in PSPACE. This leads to a unified derivation of tight PSPACE bounds for a number of logics including K, KD, coalition logic, graded modal logic, majority logic, and probabilistic modal logic. Our generic algorithm moreover finds tableau proofs that witness pleasant proof-theoretic properties including a weak subformula property. This generality is made possible by a coalgebraic semantics, which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.
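    To make the shallow-model idea concrete, here is a minimal sketch for plain K only, without the coalgebraic machinery or the paper's generic algorithm: a tableau whose recursion depth is bounded by the modal nesting depth, so the whole search runs in polynomial space.

```python
# Minimal sketch of a PSPACE tableau for modal logic K (illustrative only).
# Formulas: ("var", p), ("not", f), ("and", f, g), ("or", f, g),
# ("box", f), ("dia", f).  Recursion depth is bounded by the modal
# nesting depth -- the shallow model property in miniature.

def sat(formulas):
    """Decide whether the set `formulas` is satisfiable in some Kripke model."""
    formulas = set(formulas)
    # 1. Propositional saturation with backtracking.
    for f in formulas:
        op = f[0]
        if op == "and":
            return sat(formulas - {f} | {f[1], f[2]})
        if op == "or":
            return sat(formulas - {f} | {f[1]}) or sat(formulas - {f} | {f[2]})
        if op == "not":
            g = f[1]
            if g[0] == "not":     # double negation
                return sat(formulas - {f} | {g[1]})
            if g[0] == "and":     # de Morgan
                return sat(formulas - {f} | {("or", ("not", g[1]), ("not", g[2]))})
            if g[0] == "or":
                return sat(formulas - {f} | {("and", ("not", g[1]), ("not", g[2]))})
            if g[0] == "box":     # ~[]f  ==  <>~f
                return sat(formulas - {f} | {("dia", ("not", g[1]))})
            if g[0] == "dia":     # ~<>f  ==  []~f
                return sat(formulas - {f} | {("box", ("not", g[1]))})
    # 2. Clash test on literals.
    for f in formulas:
        if f[0] == "var" and ("not", f) in formulas:
            return False
    # 3. One successor world per diamond, carrying all box arguments along.
    boxed = {f[1] for f in formulas if f[0] == "box"}
    return all(sat({f[1]} | boxed) for f in formulas if f[0] == "dia")

# <>p together with []~p is unsatisfiable; <>p alone is satisfiable.
assert not sat({("dia", ("var", "p")), ("box", ("not", ("var", "p")))})
assert sat({("dia", ("var", "p"))})
```

    Only one branch of the search tree is in memory at any time, which is exactly how the shallow model property yields a PSPACE upper bound.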

    The Efficient Discovery of Interesting Closed Pattern Collections

    Enumerating closed sets that are frequent in a given database is a fundamental data mining technique that is used, e.g., in the context of market basket analysis, fraud detection, or Web personalization. There are two complementing reasons for the importance of closed sets, one semantic and one algorithmic: closed sets provide a condensed basis for non-redundant collections of interesting local patterns, and they can be enumerated efficiently. For many databases, however, even the closed set collection can be far too large for further use, and correspondingly its computation time can be infeasibly long. In such cases it is inevitable to focus on smaller collections of closed sets, and it is essential that these collections retain both properties: controlled semantics, reflecting some notion of interestingness, as well as efficient enumerability. This thesis discusses three different approaches to achieve this: constraint-based closed set extraction, pruning by quantifying the degree or strength of closedness, and controlled random generation of closed sets instead of exhaustive enumeration.

    For the original closed set family, efficient enumerability results from the fact that there is an inducing, efficiently computable closure operator and that its fixpoints can be enumerated with an amortized polynomial number of closure computations. Perhaps surprisingly, it turns out that this connection does not generally hold for other constraint combinations, as the restricted domains induced by additional constraints can cause one of two things to happen: the fixpoints of the closure operator can no longer be enumerated efficiently, or an inducing closure operator does not even exist. This thesis gives, for the first time, a formal axiomatic characterization of the constraint classes that allow fixpoints of arbitrary closure operators to be enumerated efficiently, as well as of the constraint classes that guarantee the existence of a closure operator inducing the closed sets.

    As a complementary approach, the thesis generalizes the notion of closedness by quantifying its strength, i.e., the difference in supporting database records between a closed set and all its supersets. This gives rise to a measure of interestingness that is able to select long, and thus particularly informative, closed sets that are robust against noise and dynamic changes. Moreover, this measure is algorithmically sound because all closed sets with a minimum strength again form a closure system that can be enumerated efficiently and that ties directly into the results on constraint-based closed sets. In fact, both approaches can easily be combined.

    In some applications, however, the resulting set of constrained closed sets is still intractably large, or it is too difficult to find meaningful hard constraints at all (including values for their parameters). Therefore, the last part of this thesis presents an alternative algorithmic paradigm for the extraction of closed sets: instead of exhaustively listing a potentially exponential number of sets, randomly generate exactly the desired number of them. By using the Markov chain Monte Carlo method, this generation can be performed according to any desired probability distribution that favors interesting patterns. This novel randomized approach complements traditional enumeration techniques (including those mentioned above): on the one hand, it is only applicable in scenarios that do not require deterministic guarantees for the output, such as exploratory data analysis or global model construction.
On the other hand, random closed set generation provides complete control over the number as well as the distribution of the produced sets.
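    The closure operator at the heart of the thesis is concrete enough to sketch. Assuming the standard "intersection of all supporting records" definition from frequent closed set mining (names here are illustrative, not the thesis's code), its fixpoints are exactly the closed sets:

```python
# Minimal sketch: the standard support-closure operator on itemsets.
# A set is closed iff it is a fixpoint of `closure`; enumerating closed
# sets amounts to enumerating these fixpoints (done naively here for demo).

from itertools import combinations

def closure(itemset, database):
    """Intersect all records that contain `itemset` (its support set)."""
    support = [set(record) for record in database if itemset <= set(record)]
    if not support:
        return frozenset(itemset)   # unsupported sets close to themselves
    return frozenset(set.intersection(*support))

def closed_sets(database, min_support=1):
    """Naive enumeration of all frequent closed sets (exponential; demo only)."""
    items = sorted({i for record in database for i in record})
    result = set()
    for k in range(len(items) + 1):
        for candidate in combinations(items, k):
            c = frozenset(candidate)
            supp = sum(1 for record in database if c <= set(record))
            if supp >= min_support and closure(c, database) == c:
                result.add(c)
    return result

# Example market-basket database: each record is one transaction.
db = [{"bread", "milk"}, {"bread", "milk", "eggs"}, {"milk", "eggs"}]
for s in sorted(closed_sets(db), key=len):
    print(sorted(s))   # {milk}, {bread,milk}, {milk,eggs}, {bread,milk,eggs}
```

    Efficient enumerators replace the exhaustive candidate loop with a divide-and-conquer traversal that needs only an amortized polynomial number of closure calls per output set, which is precisely the property whose limits under additional constraints the thesis characterizes.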