
    Simulated Annealing Algorithm for the Linear Ordering Problem: The Case of Tanzania Input Output Tables

    Linear Ordering is the problem of ordering the rows and columns of a matrix such that the sum of the upper-triangle values is as large as possible. The problem has many applications, including aggregation of individual preferences, weighted ancestry relationships, and triangulation of input-output tables in economics. As a result, many researchers have worked on the problem, which is known to be NP-hard, and heuristic algorithms have been developed and implemented on benchmark data or specific real-world applications. Simulated Annealing has seldom been used for this problem, and only one previous attempt has been made on the Tanzanian input-output table data. This article presents a Simulated Annealing approach to the problem and compares results with previous work on the same data using the Great Deluge algorithm. Three cooling schedules are compared, namely linear, geometric, and Lundy & Mees. The results show that Simulated Annealing and Great Deluge perform similarly in both execution time and final solution quality. It is concluded that Simulated Annealing is a good algorithm for the Linear Ordering Problem given a careful selection of the required parameters.
    Keywords: Combinatorial Optimization; Linear Ordering Problem; Simulated Annealing; Triangulation; Input-Output table
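
    As a minimal sketch of the approach (not the paper's implementation; all parameter values below are illustrative placeholders), simulated annealing for the Linear Ordering Problem maintains a permutation of the rows/columns, proposes insertion moves, and accepts worse orderings with a temperature-dependent probability. The geometric cooling schedule is shown:

```python
import math
import random

def upper_triangle_sum(C, perm):
    """Objective: sum of entries above the diagonal after permuting rows/columns."""
    n = len(perm)
    return sum(C[perm[i]][perm[j]] for i in range(n) for j in range(i + 1, n))

def simulated_annealing_lop(C, t0=100.0, alpha=0.95, steps_per_temp=200, t_min=1e-3):
    n = len(C)
    perm = list(range(n))
    random.shuffle(perm)
    best, best_val = perm[:], upper_triangle_sum(C, perm)
    cur_val, t = best_val, t0
    while t > t_min:
        for _ in range(steps_per_temp):
            i, j = random.sample(range(n), 2)
            cand = perm[:]
            cand.insert(j, cand.pop(i))        # insertion move: relocate one element
            cand_val = upper_triangle_sum(C, cand)
            delta = cand_val - cur_val         # maximizing: always accept improvements
            if delta >= 0 or random.random() < math.exp(delta / t):
                perm, cur_val = cand, cand_val
                if cur_val > best_val:
                    best, best_val = perm[:], cur_val
        t *= alpha                             # geometric cooling schedule
    return best, best_val
```

    The other two schedules compared in the paper drop in by changing the final cooling line: linear cooling subtracts a constant (t -= c), while Lundy & Mees uses t = t / (1 + beta * t).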

    Single Tree Detection from Airborne Laser Scanning Data: A Stochastic Approach

    Characterizing and monitoring forests are of great scientific and managerial interest, for purposes such as understanding the global carbon cycle, biodiversity conservation, and management of natural resources. As an alternative or complement to traditional remote sensing techniques, airborne laser scanning (ALS) holds a very advantageous position in forest studies for its unique ability to directly measure the distribution of vegetation material in the vertical direction, as well as the terrain beneath the forest canopy. Serving as the basis for tree-wise retrieval of forest biophysical parameters and species information, single tree detection is a very motivating research topic in forest inventory. The objective of this study is to develop a method, from the perspective of computer vision, to detect single trees automatically from ALS data. For this purpose, the study explored different aspects of the problem. It starts with an improved pipeline for canopy height model (CHM) generation, which alleviates the distortion of tree crown shapes that conventional procedures introduce into CHMs due to shadow effects in ALS data, and produces a pit-free CHM. The single tree detection method is a hybrid framework which integrates low-level image processing techniques, i.e. local maxima filtering (LM) and marker-controlled watershed segmentation (MCWS), into a high-level probabilistic model. In the proposed approach, tree crowns in a forest plot are modelled as a configuration of circular objects. The configuration containing the best possible set of detected tree objects is estimated by a global optimization solver in a probabilistic framework. The model features an accelerated optimization process compared with classical stochastic models, e.g. marked point processes. Parameter estimation is another issue: the study investigated both a reference-based supervised method and an Expectation-Maximization (EM) based unsupervised method to estimate the parameters of the model. The model was tested in a temperate mature coniferous forest in Ontario, Canada, as well as on simulated coniferous forest plots with various degrees of crown overlap. The experimental results showed the effectiveness of the proposed method, which was capable of reducing the commission errors produced by local maxima filtering based methods, thus increasing the overall detection accuracy by approximately 10% on all of the datasets.
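
    As a rough sketch of the low-level stage only (the thesis's probabilistic model and global optimizer are not reproduced here; the window size and height threshold are placeholder values), local maxima filtering followed by marker-controlled watershed segmentation on a CHM raster could look like this:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def detect_trees(chm, window=5, min_height=2.0):
    # Local maxima filtering (LM): candidate treetops are pixels that equal
    # the windowed maximum of the CHM and exceed a height threshold.
    peaks = (chm == ndimage.maximum_filter(chm, size=window)) & (chm > min_height)
    markers, n_trees = ndimage.label(peaks)
    # Marker-controlled watershed (MCWS): flood the inverted CHM from the
    # treetop markers to delineate one crown segment per detected maximum.
    crowns = watershed(-chm, markers, mask=chm > min_height)
    return markers, crowns, n_trees
```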

    Acta Cybernetica: Volume 21, Number 1.


    Multiscale Methods in Image Modelling and Image Processing

    The field of modelling and processing of 'images' has fairly recently become important, even crucial, to areas of science, medicine, and engineering. The inevitable explosion of imaging modalities and approaches stemming from this fact has become a rich source of mathematical applications. 'Imaging' is quite broad, and suffers somewhat from this broadness. The general question of 'what is an image?' or perhaps 'what is a natural image?' turns out to be difficult to address. To make real headway one may need to strongly constrain the class of images being considered, as will be done in part of this thesis. On the other hand, there are general principles that can guide research in many areas. One such principle, considered here, is the assertion that (classes of) images have multiscale relationships, whether at a pixel level, between features, or in other variants. There are both practical reasons (in terms of computational complexity) and more philosophical ones (mimicking the human visual system, for example) to look at such methods. Looking at scaling relationships may also have the advantage of opening a problem up to many mathematical tools. This thesis details two investigations into multiscale relationships, in quite different areas. One involves Iterated Function Systems (IFS), and the other a stochastic approach to the reconstruction of binary images (binary phase descriptions of porous media). The use of IFS in this context, which has often been called 'fractal image coding', has primarily been viewed as an image compression technique. We re-visit this approach, proposing it as a more general tool. Some study of the implications of that idea is presented, along with applications suggested by the results. In the area of reconstruction of binary porous media, a novel, multiscale, hierarchical annealing approach is proposed and investigated.
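
    As a toy illustration of the fractal image coding idea (a deliberately simplified sketch, not the scheme investigated in the thesis; it assumes image dimensions divisible by 2 and by the block size, and does a brute-force search), each small 'range' block is matched to a larger, downsampled 'domain' block under an affine grey-level map:

```python
import numpy as np

def encode_fractal(img, r=4):
    # Toy fractal (IFS) block coder: for each r x r "range" block, search for
    # the 2x-downsampled "domain" block and grey map  range ~ s*domain + o
    # that reproduce it best in the least-squares sense.
    h, w = img.shape
    dom = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x2 averaging
    code = []
    for i in range(0, h, r):
        for j in range(0, w, r):
            rng = img[i:i + r, j:j + r].ravel()
            best = None
            for di in range(0, dom.shape[0] - r + 1, r):
                for dj in range(0, dom.shape[1] - r + 1, r):
                    d = dom[di:di + r, dj:dj + r].ravel()
                    A = np.column_stack([d, np.ones_like(d)])
                    sol, *_ = np.linalg.lstsq(A, rng, rcond=None)  # fit s, o
                    err = float(np.sum((A @ sol - rng) ** 2))
                    if best is None or err < best[0]:
                        best = (err, di, dj, sol[0], sol[1])
            code.append((i, j) + best[1:])  # (range pos, domain pos, s, o)
    return code
```

    Decoding would iterate these block maps from an arbitrary starting image; convergence relies on the contractivity of the grey-level maps, which this toy does not enforce.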

    Activity Analysis: Finding Explanations for Sets of Events

    Automatic activity recognition is the computational process of analysing visual input and reasoning about the detections to understand the performed events. In all but the simplest scenarios, an activity involves multiple interleaved events, some related and others independent; the activity in a car park or at a playground would typically include many events. This research assumes that the possible events, and any constraints between them, can be defined for the given scene. Analysing the activity should thus recognise a complete and consistent set of events; this is referred to as a global explanation of the activity. By seeking a global explanation that satisfies the activity’s constraints, infeasible interpretations can be avoided and ambiguous observations may be resolved. An activity’s events and any natural constraints are defined using a grammar formalism. Attribute Multiset Grammars (AMG) are chosen because they allow defining hierarchies as well as attribute rules and constraints. When used for recognition, detectors are employed to gather a set of detections. Parsing the set of detections with the AMG provides a global explanation. To find the best parse tree given a set of detections, a Bayesian network models the probability distribution over the space of possible parse trees. Heuristic and exhaustive search techniques are proposed to find the maximum a posteriori global explanation. The framework is tested on two activities: the activity in a bicycle rack, and that around a building entrance. The first case study involves people locking bicycles onto a bicycle rack and picking them up later. The best global explanation for all detections gathered during the day resolves local ambiguities from occlusion or clutter, and intensive testing on five full days showed that global analysis achieves higher recognition rates. The second case study tracks people, and any objects they are carrying, as they enter and exit a building entrance. A complete sequence of a person entering and exiting multiple times is recovered by the global explanation.
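
    As an illustrative sketch of the exhaustive search for the maximum a posteriori global explanation (the names consistent and posterior below are hypothetical stand-ins for the AMG constraint check and the Bayesian-network score; the real system parses detections with the grammar rather than enumerating subsets):

```python
from itertools import combinations

def best_global_explanation(detections, events, consistent, posterior):
    # Enumerate every subset of candidate events, keep only those satisfying
    # the activity's constraints, and return the highest-posterior one.
    best, best_p = None, float("-inf")
    for k in range(len(events) + 1):
        for subset in combinations(events, k):
            if consistent(subset, detections):
                p = posterior(subset, detections)
                if p > best_p:
                    best, best_p = subset, p
    return best, best_p
```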

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high-quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs. size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings contribute to a) the field of Machine Learning, as the proposed method is applicable to training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound on the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √2n/n threshold gates being sufficient for a small error rate, where n := log |SL| and SL is the training set.
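
    As a hedged sketch of the general idea behind the LSA machine (not the thesis's implementation; the perturbation scale, cooling rate, and iteration count are placeholders), a linear threshold classifier can be trained by proposing random weight perturbations and accepting them with the Metropolis rule of simulated annealing:

```python
import math
import random

def lsa_train(X, y, t0=1.0, alpha=0.99, iters=5000):
    # The classifier stays a linear threshold unit (perceptron), but weight
    # updates are random moves accepted by a simulated annealing criterion.
    d = len(X[0])
    w = [0.0] * (d + 1)                      # last entry is the bias

    def errors(w):
        bad = 0
        for xi, yi in zip(X, y):             # labels yi in {-1, +1}
            s = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            if yi * s <= 0:
                bad += 1
        return bad

    cur, t = errors(w), t0
    for _ in range(iters):
        cand = w[:]
        cand[random.randrange(d + 1)] += random.gauss(0.0, 0.1)  # local move
        e = errors(cand)
        if e <= cur or random.random() < math.exp((cur - e) / t):
            w, cur = cand, e
        t *= alpha                           # geometric cooling
    return w
```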

    High-Level Facade Image Interpretation using Marked Point Processes

    In this thesis, we address facade image interpretation as one essential ingredient for the generation of highly detailed, semantically meaningful, three-dimensional city models. Given a single rectified facade image, we detect relevant facade objects such as windows, entrances, and balconies, yielding a description of the image in terms of the accurate position and size of these objects. Urban digital three-dimensional reconstruction and documentation is an active area of research with several potential applications, e.g., in digital mapping for navigation, urban planning, emergency management, disaster control, or the entertainment industry. A detailed building model which is not just a geometric object enriched with texture allows for semantic queries, such as the number of floors or the location of balconies and entrances. Facade image interpretation is one essential step towards such models. In this thesis, we propose the interpretation of facade images by combining evidence for the occurrence of individual object classes, which we derive from data, with prior knowledge which guides the image interpretation in its entirety. We present a three-step procedure which generates features suited to describing the relevant objects, learns a representation suited for object detection, and enables the image interpretation using the results of object detection while incorporating prior knowledge about typical configurations of facade objects, which we learn from training data. According to these three sub-tasks, our major achievements are: We propose a novel method for facade image interpretation based on a marked point process. To this end, we develop a model for the description of typical configurations of facade objects and propose an image interpretation system which combines evidence derived from data with prior knowledge about typical configurations of facade objects. In order to generate evidence from data, we propose a feature type which we call shapelets. They are scale invariant and highly distinctive for facade objects. Segments of lines, arcs, and ellipses serve as basic features for the generation of shapelets. To this end, we propose a novel line simplification approach which approximates given pixel chains by a sequence of straight line, circular, and elliptical segments. Among others, it is based on an adaptation of the Douglas-Peucker algorithm which uses circular arcs instead of straight line segments as basic geometric elements. We evaluate each step separately. We show the effects of polyline segmentation and simplification on several images, with comparably good or even better results than a state-of-the-art algorithm. Using shapelets, we achieve a reasonable classification performance on a challenging dataset that includes intra-class variations, clutter, and scale changes, which demonstrates their large distinctiveness for facade objects. Finally, we show promising results for the facade interpretation system on several datasets and provide a qualitative evaluation which demonstrates the capability of complete and accurate detection of facade objects.
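
    For reference, here is a compact sketch of the classic Douglas-Peucker polyline simplification that the proposed approach adapts; the thesis's variant uses circular and elliptical arcs instead of straight segments as primitives, which this sketch does not attempt:

```python
def douglas_peucker(points, eps):
    # Recursively keep the point farthest from the chord between the first
    # and last points; if it is within eps, the chord replaces the run.
    def dist(p, a, b):
        # Perpendicular distance from p to the line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0.0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dx * (ay - py) - dy * (ax - px)) / norm

    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right   # drop the duplicated split point
```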

    On deep learning in physics

    Machine learning, and most notably deep neural networks, have seen unprecedented success in recent years due to their ability to learn complex nonlinear mappings by ingesting large amounts of data through the process of training. This learning-by-example approach has slowly made its way into the physical sciences. In this dissertation I present a collection of contributions at the intersection of physics and deep learning. These contributions constitute some of the earlier introductions of deep learning to the physical sciences and comprise a range of machine learning techniques, such as feedforward neural networks, generative models, and reinforcement learning. A focus is placed on the lessons and techniques learned along the way that would influence future research projects.

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (graphics processing units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing.
