
    Enumerating the Digitally Convex Sets of Powers of Cycles and Cartesian Products of Paths and Complete Graphs

    Full text link
    Given a finite set $V$, a convexity $\mathscr{C}$ is a collection of subsets of $V$ that contains both the empty set and the set $V$ and is closed under intersections. The elements of $\mathscr{C}$ are called convex sets. The digital convexity, originally proposed as a tool for processing digital images, is defined as follows: a subset $S \subseteq V(G)$ is digitally convex if, for every $v \in V(G)$, $N[v] \subseteq N[S]$ implies $v \in S$. The number of cyclic binary strings with blocks of length at least $k$ is expressed as a linear recurrence relation for $k \geq 2$. A bijection is established between these cyclic binary strings and the digitally convex sets of the $(k-1)^{\text{th}}$ power of a cycle. A closed formula for the number of digitally convex sets of the Cartesian product of two complete graphs is derived. A bijection is established between the digitally convex sets of the Cartesian product of two paths, $P_n \square P_m$, and certain types of $n \times m$ binary arrays. Comment: 16 pages, 3 figures, 1 table
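    The definition above can be checked directly by brute force. The following minimal Python sketch (illustrative only; the paper counts digitally convex sets via recurrences and bijections rather than by enumeration) lists the digitally convex sets of a small graph given as an adjacency dict, using a hypothetical 4-cycle C4 as input:

        from itertools import combinations

        def closed_nbhd(G, v):
            """Closed neighborhood N[v] in an adjacency-dict graph."""
            return G[v] | {v}

        def is_digitally_convex(G, S):
            """S is digitally convex iff N[v] <= N[S] implies v in S."""
            NS = set().union(*(closed_nbhd(G, u) for u in S))
            return all(v in S for v in G if closed_nbhd(G, v) <= NS)

        def digitally_convex_sets(G):
            """Exhaustive check of all subsets; exponential, small graphs only."""
            V = sorted(G)
            for r in range(len(V) + 1):
                for S in combinations(V, r):
                    if is_digitally_convex(G, set(S)):
                        yield set(S)

        # Hypothetical example: the 4-cycle C4 with vertices 0..3
        C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
        print(list(digitally_convex_sets(C4)))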

    The Efficient Discovery of Interesting Closed Pattern Collections

    Get PDF
    Enumerating closed sets that are frequent in a given database is a fundamental data mining technique that is used, e.g., in the context of market basket analysis, fraud detection, or Web personalization. There are two complementary reasons for the importance of closed sets, one semantic and one algorithmic: closed sets provide a condensed basis for non-redundant collections of interesting local patterns, and they can be enumerated efficiently. For many databases, however, even the collection of closed sets can be far too large for further use, and correspondingly its computation time can be infeasibly long. In such cases, it becomes necessary to focus on smaller collections of closed sets, and it is essential that these collections retain both properties: controlled semantics reflecting some notion of interestingness, and efficient enumerability. This thesis discusses three different approaches to achieve this: constraint-based closed set extraction, pruning by quantifying the degree or strength of closedness, and controlled random generation of closed sets instead of exhaustive enumeration.

    For the original closed set family, efficient enumerability results from the fact that it is induced by an efficiently computable closure operator whose fixpoints can be enumerated with an amortized polynomial number of closure computations. Perhaps surprisingly, it turns out that this connection does not generally hold for other constraint combinations: the restricted domains induced by additional constraints can mean that the fixpoints of the closure operator cannot be enumerated efficiently, or that an inducing closure operator does not even exist. This thesis gives, for the first time, a formal axiomatic characterization of the constraint classes that allow efficient enumeration of the fixpoints of arbitrary closure operators, as well as of the constraint classes that guarantee the existence of a closure operator inducing the closed sets.

    As a complementary approach, the thesis generalizes the notion of closedness by quantifying its strength, i.e., the difference in supporting database records between a closed set and all its supersets. This gives rise to a measure of interestingness that is able to select long, and thus particularly informative, closed sets that are robust against noise and dynamic changes. Moreover, this measure is algorithmically sound, because all closed sets with a minimum strength again form a closure system that can be enumerated efficiently and that ties directly into the results on constraint-based closed sets. In fact, both approaches can easily be combined.

    In some applications, however, the resulting set of constrained closed sets is still intractably large, or it is too difficult to find meaningful hard constraints at all (including values for their parameters). Therefore, the last part of this thesis presents an alternative algorithmic paradigm for the extraction of closed sets: instead of exhaustively listing a potentially exponential number of sets, randomly generate exactly the desired number of them. Using the Markov chain Monte Carlo method, this generation can be performed according to any desired probability distribution that favors interesting patterns. This novel randomized approach complements traditional enumeration techniques (including those mentioned above): on the one hand, it is only applicable in scenarios that do not require deterministic guarantees for the output, such as exploratory data analysis or global model construction; on the other hand, random closed set generation provides complete control over the number as well as the distribution of the produced sets.
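    For concreteness, here is a minimal Python sketch of the support-based closure operator the abstract refers to, on a toy transaction database (the data and names are illustrative, not from the thesis): the closure of an itemset is the intersection of all transactions supporting it, and the closed sets are exactly its fixpoints.

        from itertools import combinations

        # Toy transaction database (hypothetical, for illustration only)
        DB = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
        ITEMS = frozenset().union(*DB)

        def support(X):
            """All transactions containing every item of X."""
            return [t for t in DB if X <= t]

        def closure(X):
            """Support-based closure: intersect all supporting transactions.
            Closed sets are exactly the fixpoints closure(X) == X."""
            sup = support(X)
            return frozenset.intersection(*sup) if sup else ITEMS

        # Naive fixpoint collection; real miners enumerate closed sets with an
        # amortized polynomial number of closure computations, as noted above.
        closed = {closure(frozenset(c))
                  for r in range(len(ITEMS) + 1)
                  for c in combinations(ITEMS, r)}
        for X in sorted(closed, key=lambda s: (len(s), sorted(s))):
            print(sorted(X), "support:", len(support(X)))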

    Q(sqrt(-3))-Integral Points on a Mordell Curve

    Get PDF
    We use an extension of quadratic Chabauty to number fields, recently developed by the author with Balakrishnan, Besser and MĂŒller, combined with a sieving technique, to determine the integral points over $\mathbb{Q}(\sqrt{-3})$ on the Mordell curve $y^2 = x^3 - 4$.
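    As a sanity check over the ordinary integers, small $\mathbb{Z}$-points on this curve, such as $(2, \pm 2)$ and $(5, \pm 11)$, can be found by direct search; the sketch below is illustrative only, since determining the full set of $\mathbb{Q}(\sqrt{-3})$-integral points requires the quadratic Chabauty and sieving machinery mentioned in the abstract.

        from math import isqrt

        # Naive search for ordinary integer points on y^2 = x^3 - 4 in a box.
        # A finite search cannot prove completeness; bounding the full set of
        # Q(sqrt(-3))-integral points needs quadratic Chabauty plus sieving.
        for x in range(-100, 101):
            rhs = x**3 - 4
            if rhs < 0:
                continue
            y = isqrt(rhs)
            if y * y == rhs:
                print((x, y))
                if y:
                    print((x, -y))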

    Atomic Norm decomposition for sparse model reconstruction applied to positioning and wireless communications

    Get PDF
    This thesis explores the recovery of sparse signals, arising in wireless communication and radar systems, via atomic norm decomposition. In particular, we focus on gridless compressed sensing methodologies, which avoid the error that on-grid methods inevitably incur by discretizing a continuous space. We define the sparse signal as a linear combination of so-called atoms drawn from a continuous parametric atom set of infinite cardinality. Each atom is fully characterized by a multi-dimensional parameter that carries information directly relevant to the application scenario. Moreover, the number of constituent atoms is much smaller than the dimension of the problem, which yields sparsity. We pose a gridless optimization problem that enforces sparsity via atomic norm minimization and extracts the characterizing parameters of each atom from an observed measurement, enabling model recovery. We also study a machine learning approach to estimate the number of atoms composing the model, since in certain scenarios this number is unknown. The applications studied in the thesis lie in the field of wireless communications, particularly MIMO mmWave channels, which by their physical nature can be modeled as sparse. We apply the proposed methods to positioning in automotive pulse radar operating in the mmWave range, where we extract relevant information such as angle of arrival (AoA), distance, and velocity from the received echoes of objects or targets. Next, we study the design of a hybrid precoder for mmWave channels, which reduces hardware cost by minimizing the number of required RF chains. Last, we explore full channel estimation by finding the angular parameters that model the channel. For all applications we provide a numerical analysis comparing the proposed method with state-of-the-art techniques, showing that it outperforms the alternatives.
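    One concrete instance of the atomic norm machinery described above is atomic-norm denoising of a line-spectral signal via its semidefinite characterization. The sketch below is a minimal illustration assuming the cvxpy package; the observation y, the length n, the weight tau, and the omitted frequency-recovery step are placeholders, and the exact normalization of the SDP varies across the literature.

        import numpy as np
        import cvxpy as cp

        n = 16
        rng = np.random.default_rng(0)
        # Hypothetical observation: two complex sinusoids (the "atoms") plus noise
        y = sum(np.exp(2j * np.pi * f * np.arange(n)) for f in (0.12, 0.31))
        y = y + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

        tau = 1.0  # regularization weight (untuned placeholder)

        # Lift the problem: M = [[T, z], [z^H, s]] must be PSD with T Toeplitz.
        M = cp.Variable((n + 1, n + 1), hermitian=True)
        T, z, s = M[:n, :n], M[:n, n], M[n, n]
        constraints = [M >> 0]
        constraints += [T[i, j] == T[i + 1, j + 1]
                        for i in range(n - 1) for j in range(n - 1)]

        # One common SDP characterization: ||z||_A = inf (tr(T)/n + s) / 2
        obj = cp.Minimize(0.5 * cp.sum_squares(y - z)
                          + (tau / 2) * (cp.real(cp.trace(T)) / n + cp.real(s)))
        cp.Problem(obj, constraints).solve()
        print("denoising residual:", np.linalg.norm(y - M.value[:n, n]))
        # The frequencies (e.g., AoAs) would then be recovered from T via a
        # Vandermonde (Prony-type) decomposition; that step is omitted here.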

    Third ERTS Symposium: Abstracts

    Get PDF
    Abstracts are provided for the 112 papers presented at the Earth Resources Program Symposium, held in Washington, D.C., 10-14 December 1973.

    Algorithms for Geometric Optimization and Enrichment in Industrialized Building Construction

    Get PDF
    The burgeoning use of industrialized building construction, coupled with advances in digital technologies, is unlocking new opportunities to improve the status quo of construction projects being over budget, delayed, and of undesirable quality. Yet several objective barriers must still be overcome to realize the full potential of these innovations. Analysis of the literature and of examples from industry reveals the following notable barriers: (1) geometric optimization methods need to be developed for the stricter dimensional requirements of industrialized construction; (2) methods are needed to preserve model semantics during the process of generating an updated as-built model; (3) semantic enrichment methods are required for the end-of-life stage of industrialized buildings; and (4) pragmatic approaches are needed to ensure algorithms achieve the required computational efficiency. The common thread across these examples is the need for algorithms that optimize and enrich geometric models. To date, a comprehensive approach paired with pragmatic solutions remains elusive. This research fills this gap by presenting a new approach for algorithm development, along with pragmatic implementations for the industrialized building construction sector.

    Computational algorithms are effective for driving the design, analysis, and optimization of geometric models. As such, this thesis develops new computational algorithms for the design, fabrication and assembly, onsite construction, and end-of-life stages of industrialized buildings. A common theme throughout this work is the development and comparison of varied algorithmic approaches (i.e., exact vs. approximate solutions) to see which is optimal for a given process. This is implemented in the following ways. First, a probabilistic method is used to simulate the accumulation of dimensional tolerances in order to optimize geometric models during design. Second, a series of exact and approximate algorithms is used to optimize the topology of 2D panelized assemblies to minimize material use during fabrication and assembly. Third, a new approach to automatically updating geometric models is developed whereby the initial model semantics are preserved while generating an as-built model. Finally, a series of algorithms is developed to semantically enrich geometric models so that industrialized buildings can be disassembled and reused.

    The developments made in this research form a rational and pragmatic approach to the existing challenges in industrialized building construction. They are shown not only to improve the status quo in the industry (i.e., reducing cost, shortening project duration, and improving quality) but also to facilitate continuous innovation in construction. By way of assessing the potential impact of this work, the proposed algorithms can reduce rework risk during fabrication and assembly (65% rework reduction in the case study for the new tolerance simulation algorithm), reduce waste during manufacturing (11% waste reduction in the case study for the new panel unfolding and nesting algorithms), improve the accuracy and automation of as-built model generation (model error reduction from 50.4 mm to 5.7 mm in the case study for the new parametric BIM updating algorithms), reduce the lifecycle cost of adapting industrialized buildings (15% reduction in capital costs in the computational building configurator), and reduce the lifecycle impacts of reusing structural systems from industrialized buildings (between 54% and 95% reduction in average lifecycle impacts for the approach illustrated in Appendix B).

    From a computational standpoint, the novelty of the algorithms developed in this research can be described as follows. Complex geometric processes can be codified solely on the innate properties of geometry: by parameterizing geometry and using methods such as combinatorial optimization, topology can be optimized and semantics can be automatically enriched for building assemblies. Functional discretization (whereby continuous variable domains are converted into discrete ones) is shown to be highly effective for complex geometric optimization. Finally, the algorithms encapsulate and balance the benefits of both parametric and non-parametric schemas, achieving both high representational accuracy and semantically rich information (which had not previously been achieved or demonstrated).

    In summary, this thesis makes several key improvements to industrialized building construction. A key finding is that rather than pre-emptively determining the best-suited algorithm for a given process or problem, it is often more pragmatic to derive both an exact and an approximate solution and then decide which is optimal for that process. Generally, most tasks related to optimizing or enriching geometric models are best solved using approximate methods. To this end, this research presents a series of key techniques for improving the runtime performance of algorithms. The new approach for developing computational algorithms and the pragmatic demonstrations of geometric optimization and enrichment are expected to move the industry forward and resolve many of the barriers it currently faces.
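    The tolerance-simulation idea is straightforward to illustrate in isolation. Below is a minimal Monte Carlo sketch with hypothetical panel dimensions and tolerances (not the thesis's actual model or case-study numbers), estimating how often a stack of panel widths drifts outside an assembly limit:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000  # Monte Carlo trials

        # Hypothetical panelized wall: 5 panels, nominal 1200 mm each, with
        # manufacturing deviation ~ N(0, 1.5 mm) per panel, and 4 joints at
        # 10 mm nominal with deviation ~ N(0, 0.5 mm).
        panels = rng.normal(1200.0, 1.5, size=(N, 5)).sum(axis=1)
        joints = rng.normal(10.0, 0.5, size=(N, 4)).sum(axis=1)
        total = panels + joints

        nominal = 5 * 1200.0 + 4 * 10.0
        limit = 6.0  # allowable +/- deviation of the assembly (mm)
        p_rework = np.mean(np.abs(total - nominal) > limit)
        print(f"estimated rework probability: {p_rework:.3%}")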

    On the minimum cardinality problem in intensity modulated radiotherapy

    Full text link
    The thesis examines an optimisation problem that arises in treatment planning for intensity modulated radiotherapy. An approach is presented that solves the optimisation problem in question and is extended to execute in a massively parallel environment. The presented approach is among the fastest available.
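    For intuition, the minimum cardinality problem asks for a decomposition of an integer fluence (intensity) matrix into as few multileaf-collimator segments as possible, where each segment exposes one contiguous interval of cells per row. The sketch below is a naive greedy heuristic on a toy matrix, for illustration only: it is not the thesis's algorithm, and exact cardinality minimisation is known to be hard in general.

        import numpy as np

        def extract_segment(A):
            """Per row, pick the maximal run of strictly positive entries
            starting at the row's first positive cell (empty if row is zero)."""
            intervals = []
            for r in A:
                pos = np.flatnonzero(r > 0)
                if pos.size == 0:
                    intervals.append((0, 0))  # empty row: leaves closed
                    continue
                start = end = pos[0]
                while end + 1 < len(r) and r[end + 1] > 0:
                    end += 1
                intervals.append((start, end + 1))  # half-open interval
            return intervals

        def greedy_decompose(A):
            """Greedily peel off row-interval segments until A is zero.
            Returns a list of (weight, intervals); a heuristic only."""
            A = A.copy()
            segments = []
            while A.any():
                iv = extract_segment(A)
                # largest weight subtractable without any cell going negative
                w = min(A[i, a:b].min() for i, (a, b) in enumerate(iv) if b > a)
                for i, (a, b) in enumerate(iv):
                    A[i, a:b] -= w
                segments.append((w, iv))
            return segments

        # Toy fluence matrix (hypothetical values)
        A = np.array([[1, 2, 2, 0],
                      [0, 3, 1, 1],
                      [2, 2, 0, 0]])
        for w, iv in greedy_decompose(A):
            print("weight", w, "intervals", iv)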

    Laser Scanner Technology

    Get PDF
    Laser scanning technology plays an important role in science and engineering. The aim of scanning is usually to create a digital version of an object's surface. Multiple scans are sometimes performed with multiple cameras to capture all sides of the scene under study. Optical tests are commonly used to demonstrate the power of laser scanning technology in modern industry and in research laboratories. This book describes recent contributions of laser scanning technology reported in different areas around the world. The main topics of laser scanning covered in this volume include full body scanning, traffic management, 3D survey processes, bridge monitoring, scan tracking, human sensing, three-dimensional modelling, glacier monitoring, and the digitization of heritage monuments.