
    Fractal methods in image analysis and coding

    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, and to each other. We attempt to explain why differences arise between various estimators of the same quantity. We also examine texture generation methods which use fractal dimension to generate textures of varying complexity. We examine how fractal dimension can contribute to image segmentation methods. We also present an in-depth analysis of a novel segmentation scheme based on fractal coding. Finally, we present an overview of fractal and wavelet image coding, and the links between the two. We examine a possible scheme involving both fractal and wavelet methods.
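The fractal dimension estimators discussed in this abstract are typically variants of box counting. As a rough illustration (not code from the thesis; the function name and choice of box sizes are my own), a minimal box-counting estimator for a binary image can be sketched as:

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    Counts the boxes of each size that contain at least one set pixel,
    then fits log(count) against log(1/size); the slope of the fit is
    the dimension estimate.
    """
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in box_sizes:
        # Trim so the image tiles exactly into s-by-s boxes.
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        tiles = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A filled square recovers the topological dimension.
square = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(square), 2))  # → 2.0
```

For a filled region the estimate is 2; for a fractal edge map the slope falls between 1 and 2, which is what makes it usable as a texture feature. Differences between estimators of this same quantity (the thesis's topic) arise from choices such as the box sizes used and how the log-log fit is weighted.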

    Observations of Lyα\alpha Emitters at High Redshift

    In this series of lectures, I review our observational understanding of high-z Lyα emitters (LAEs) and relevant scientific topics. Since the discovery of LAEs in the late 1990s, more than ten (one) thousand(s) of LAEs have been identified photometrically (spectroscopically) at z ~ 0 to z ~ 10. These large samples of LAEs are useful to address two major astrophysical issues, galaxy formation and cosmic reionization. Statistical studies have revealed the general picture of LAEs' physical properties: young stellar populations, remarkable luminosity function evolutions, compact morphologies, highly ionized inter-stellar media (ISM) with low metal/dust contents, and low masses of dark-matter halos. Typical LAEs represent low-mass high-z galaxies, high-z analogs of dwarf galaxies, some of which are thought to be candidates of population III galaxies. These observational studies have also pinpointed rare bright Lyα sources extended over ~10-100 kpc, dubbed Lyα blobs, whose physical origins are under debate. LAEs are used as probes of cosmic reionization history through the Lyα damping wing absorption given by the neutral hydrogen of the inter-galactic medium (IGM), which complement the cosmic microwave background radiation and 21cm observations. The low-mass and highly-ionized population of LAEs can be major sources of cosmic reionization. The budget of ionizing photons for cosmic reionization has been constrained, although there remain large observational uncertainties in the parameters. Beyond galaxy formation and cosmic reionization, several new usages of LAEs for science frontiers have been suggested, such as the distribution of H I gas in the circum-galactic medium and filaments of large-scale structures.
On-going programs and future telescope projects, such as JWST, ELTs, and SKA, will push the horizons of these science frontiers.
Comment: Lecture notes for 'Lyman-alpha as an Astrophysical and Cosmological Tool', Saas-Fee Advanced Course 46. Verhamme, A., North, P., Cantalupo, S., & Atek, H. (eds.) --- 147 pages, 103 figures. Abstract abridged. Link to the lecture program including the video recording and ppt files: https://obswww.unige.ch/Courses/saas-fee-2016/program.cg

    Extreme Value Theory of geophysical flows


    Fuzzy Systems

    This book presents some recent specialized works of theoretical study in the domain of fuzzy systems. Over eight sections and fifteen chapters, the volume addresses fuzzy systems concepts and promotes their practical application in the following thematic areas: fuzzy mathematics, decision making, clustering, adaptive neural fuzzy inference systems, control systems, process monitoring, green infrastructure, and medicine. The studies published in the book develop new theoretical concepts that improve the properties and performances of fuzzy systems. This book is a useful resource for specialists, engineers, professors, and students.

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
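The affine block transforms described above can be made concrete with a small sketch. This fragment is illustrative, not code from the dissertation: for a single range block it searches a pool of (already spatially contracted) domain blocks for the least-squares grey-level map r ≈ s·d + o, which is the per-block decision that the vector-quantisation analogy reinterprets as a codebook search. The names and the exhaustive search are my own simplifications:

```python
import numpy as np

def encode_block(range_block, domain_pool):
    """Fractal coding of one range block: pick the domain block and the
    affine grey-level map r ~ s*d + o minimising the squared error."""
    r = np.asarray(range_block, dtype=float).ravel()
    best = None
    for idx, d_block in enumerate(domain_pool):
        d = np.asarray(d_block, dtype=float).ravel()
        # Least-squares contrast s and brightness o for r ~ s*d + o.
        A = np.column_stack([d, np.ones_like(d)])
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = float(np.sum((s * d + o - r) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (squared error, domain index, scale, offset)
```

In these terms, an image is "self-affine" precisely when, for most range blocks, some domain block drawn from the same image achieves a small residual here; the encoder stores only the (index, scale, offset) triple per block, which is where the claimed compression comes from.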

    Proceedings of the Scientific-Practical Conference "Research and Development - 2016"

    talent management; sensor arrays; automatic speech recognition; dry separation technology; oil production; oil waste; laser technology

    Distributed Real-time Systems - Deterministic Protocols for Wireless Networks and Model-Driven Development with SDL

    In a networked system, the communication system is indispensable but often the weakest link w.r.t. performance and reliability. This, particularly, holds for wireless communication systems, where the error- and interference-prone medium and the character of network topologies pose special challenges. However, there are many scenarios of wireless networks, in which a certain quality-of-service has to be provided despite these conditions. In this regard, distributed real-time systems, whose realization by wireless multi-hop networks becomes increasingly popular, are a particular challenge. For such systems, it is of crucial importance that communication protocols are deterministic and come with the required amount of efficiency and predictability, while additionally considering scarce hardware resources that are a major limiting factor of wireless sensor nodes. This, in turn, does not only place demands on the behavior of a protocol but also on its implementation, which has to comply with timing and resource constraints. The first part of this thesis presents a deterministic protocol for wireless multi-hop networks with time-critical behavior. The protocol is referred to as Arbitrating and Cooperative Transfer Protocol (ACTP), and is an instance of a binary countdown protocol. It enables the reliable transfer of bit sequences of adjustable length and deterministically resolves contention among nodes based on a flexible priority assignment, with constant delays, and within configurable arbitration radii. The protocol's key requirement is the collision-resistant encoding of bits, which is achieved by the incorporation of black bursts. Besides revisiting black bursts and proposing measures to optimize their detection, robustness, and implementation on wireless sensor nodes, the first part of this thesis presents the mode of operation and time behavior of ACTP.
In addition, possible applications of ACTP are illustrated, presenting solutions to well-known problems of distributed systems like leader election and data dissemination. Furthermore, results of experimental evaluations with commodity wireless transceivers are outlined to provide evidence of the protocol's implementability and benefits. In the second part of this thesis, the focus is shifted from concrete deterministic protocols to their model-driven development with the Specification and Description Language (SDL). Though SDL is well-established in the domain of telecommunication and distributed systems, the predictability of its implementations is often insufficient, as previous projects have shown. To increase this predictability and to improve SDL's applicability to time-critical systems, real-time tasks, a proven concept in the design of real-time systems, are transferred to SDL and extended to cover node-spanning system tasks. In this regard, a priority-based execution and suspension model is introduced in SDL, which enables task-specific priority assignments in the SDL specification that are orthogonal to the static structure of SDL systems and control transition execution orders on design as well as on implementation level. Both the formal incorporation of real-time tasks into SDL and their implementation in a novel scheduling strategy are discussed in this context. By means of evaluations on wireless sensor nodes, evidence is provided that these extensions reduce worst-case execution times substantially, and improve the predictability of SDL implementations and the language's applicability to real-time systems.
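The binary countdown mechanism at the heart of ACTP can be illustrated with a toy simulation (my sketch, not the thesis's code; real ACTP encodes the dominant bit as a black burst on the radio medium rather than as a boolean, which is what makes the arbitration collision-resistant):

```python
def binary_countdown(priorities, width=8):
    """Simulate one arbitration round of a binary countdown protocol:
    nodes send their priority most-significant bit first; a '1' bit is
    dominant (it overrides silence on the medium), and a node that
    sends the recessive '0' while the medium carries a dominant bit
    withdraws. The highest priority wins deterministically."""
    active = list(range(len(priorities)))
    for bit in range(width - 1, -1, -1):
        # The medium carries a dominant bit if any active node sends '1'.
        medium = any((priorities[i] >> bit) & 1 for i in active)
        if medium:
            # Nodes sending the recessive '0' lose and withdraw.
            active = [i for i in active if (priorities[i] >> bit) & 1]
    return active  # indices of the winners (ties share one priority)

print(binary_countdown([5, 12, 9]))  # → [1] (the node with priority 12)
```

The arbitration takes exactly `width` bit slots regardless of the number of contenders, which is the source of the constant delays the abstract claims; priority assignment then directly encodes applications such as leader election.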

    Type-2 Fuzzy Alpha-cuts

    Systems that utilise type-2 fuzzy sets to handle uncertainty have not been implemented in real world applications, unlike the astonishing number of applications involving standard fuzzy sets. The main reason behind this is the complex mathematical nature of type-2 fuzzy sets, which is the source of two major problems. On one hand, it is difficult to mathematically manipulate type-2 fuzzy sets, and on the other, the computational cost of processing and performing operations using these sets is very high. Most of the current research carried out on type-2 fuzzy logic concentrates on finding mathematical means to overcome these obstacles. One way of accomplishing the first task is to develop a meaningful mathematical representation of type-2 fuzzy sets that allows functions and operations to be extended from well known mathematical forms to type-2 fuzzy sets. To this end, this thesis presents a novel alpha-cut representation theorem to be this meaningful mathematical representation. It is the decomposition of a type-2 fuzzy set into a number of classical sets. The alpha-cut representation theorem is the main contribution of this thesis. This dissertation also presents a methodology to allow functions and operations to be extended directly from classical sets to type-2 fuzzy sets. A novel alpha-cut extension principle is presented in this thesis and used to define uncertainty measures and arithmetic operations for type-2 fuzzy sets. Throughout this investigation, a plethora of concepts and definitions have been developed for the first time in order to make the manipulation of type-2 fuzzy sets a simple and straightforward task. Worked examples are used to demonstrate the usefulness of these theorems and methods. Finally, the crisp alpha-cuts of this fundamental decomposition theorem are by definition independent of each other. This dissertation shows that operations on type-2 fuzzy sets using the alpha-cut extension principle can be processed in parallel.
This feature is found to be extremely powerful, especially when performing computation on massively parallel graphics processing units (GPUs). This thesis explores this capability and shows through different experiments the achievement of significant reductions in processing time.
The National Training Directorate, Republic of Sudan
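The parallelism claimed here comes from the independence of the alpha-levels. The following type-1 analogue (my sketch; the thesis generalises this decomposition to type-2 sets, and targets GPUs rather than threads) shows the alpha-cut form of the extension principle for fuzzy addition, with the per-level work distributed across workers:

```python
from concurrent.futures import ThreadPoolExecutor

def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the closed
    interval on which membership is at least alpha."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def fuzzy_add(tri_x, tri_y, alphas):
    """Addition of two fuzzy numbers via the alpha-cut form of the
    extension principle: each cut of X+Y is the interval sum of the
    corresponding cuts of X and Y. The cuts are mutually independent,
    so the levels can be computed in parallel."""
    def one_level(alpha):
        xl, xu = tri_alpha_cut(*tri_x, alpha)
        yl, yu = tri_alpha_cut(*tri_y, alpha)
        return (xl + yl, xu + yu)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(one_level, alphas))

# "about 2" + "about 3": the cuts shrink toward (5, 5) as alpha → 1.
print(fuzzy_add((1, 2, 3), (2, 3, 4), alphas=[0.0, 0.5, 1.0]))
# → [(3.0, 7.0), (4.0, 6.0), (5.0, 5.0)]
```

Because no level reads another level's result, the same map-over-alphas structure transfers directly to a GPU kernel with one thread (or thread block) per alpha-level, which is the execution model the abstract's experiments exploit.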