Fractal methods in image analysis and coding
In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area.
The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding.
In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, and to each other. We attempt to explain why differences arise between various estimators of the same quantity. We also examine texture generation methods which use fractal dimension to generate textures of varying complexity.
We examine how fractal dimension can contribute to image segmentation methods. We also present an in-depth analysis of a novel segmentation scheme based on fractal coding.
Finally, we present an overview of fractal and wavelet image coding, and the links between the two. We examine a possible scheme involving both fractal and wavelet methods.
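The box-counting approach to the fractal dimension estimators discussed above fits the number of occupied boxes against box size on a log-log scale. The sketch below is a minimal, hypothetical implementation (the function name and the choice of box sizes are our own), not the thesis's code:

```python
import numpy as np

def box_counting_dimension(binary_image, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    Counts the boxes of side s containing at least one foreground pixel,
    then fits log(count) against log(1/s); the slope is the estimate.
    """
    counts = []
    n = binary_image.shape[0]
    for s in sizes:
        # Crop so the image tiles exactly into s x s boxes, then mark
        # each box as occupied if any pixel inside it is non-zero.
        reduced = binary_image[:n - n % s, :n - n % s]
        blocks = reduced.reshape(n // s, s, -1, s).max(axis=(1, 3))
        counts.append(blocks.sum())
    # Slope of the log-log fit is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should have dimension close to 2.
print(round(box_counting_dimension(np.ones((64, 64), dtype=int)), 2))  # → 2.0
```

Different estimators of the same quantity (box counting, variogram, spectral) weight scales differently, which is one source of the discrepancies the thesis examines.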
Observations of Lyα Emitters at High Redshift
In this series of lectures, I review our observational understanding of high-z Lyα emitters (LAEs) and relevant scientific topics. Since the discovery of LAEs in the late 1990s, more than ten thousand LAEs have been identified photometrically, and more than one thousand spectroscopically, over a wide redshift range. These large samples of LAEs are useful for addressing two major astrophysical issues: galaxy formation and cosmic reionization. Statistical studies have revealed the general picture of LAEs' physical properties: young stellar populations, remarkable luminosity function evolution, compact morphologies, highly ionized inter-stellar media (ISM) with low metal/dust content, and low dark-matter halo masses. Typical LAEs represent low-mass high-z galaxies, high-z analogs of dwarf galaxies, some of which are thought to be candidates for population III galaxies. These observational studies have also pinpointed rare bright Lyα sources extended over kpc scales, dubbed Lyα blobs, whose physical origins are under debate. LAEs are used as probes of cosmic reionization history through the Lyα damping-wing absorption produced by the neutral hydrogen of the inter-galactic medium (IGM), which complements cosmic microwave background and 21cm observations. The low-mass and highly ionized population of LAEs can be a major source of cosmic reionization. The budget of ionizing photons for cosmic reionization has been constrained, although large observational uncertainties remain in the parameters. Beyond galaxy formation and cosmic reionization, several new uses of LAEs at the science frontiers have been suggested, such as mapping the distribution of H I gas in the circum-galactic medium and the filaments of large-scale structure. On-going programs and future telescope projects, such as JWST, ELTs, and SKA, will push these science frontiers further.
Comment: Lecture notes for 'Lyman-alpha as an Astrophysical and Cosmological Tool', Saas-Fee Advanced Course 46. Verhamme, A., North, P., Cantalupo, S., & Atek, H. (eds.) --- 147 pages, 103 figures. Abstract abridged. Link to the lecture program including the video recordings and ppt files: https://obswww.unige.ch/Courses/saas-fee-2016/program.cg
Fuzzy Systems
This book presents some recent specialized works of theoretical study in the domain of fuzzy systems. Over eight sections and fifteen chapters, the volume addresses fuzzy systems concepts and promotes them in practical applications in the following thematic areas: fuzzy mathematics, decision making, clustering, adaptive neural fuzzy inference systems, control systems, process monitoring, green infrastructure, and medicine. The studies published in the book develop new theoretical concepts that improve the properties and performances of fuzzy systems. This book is a useful resource for specialists, engineers, professors, and students.
Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective
Bibliography: p. 208-225.
Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
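The block-based affine representation described above can be sketched in a few lines. The following is an illustrative toy coder under simplifying assumptions (square greyscale images, non-overlapping domain blocks twice the range-block size, and a clamped grey-level scale factor to force contractivity, which is standard practice); it is not the scheme evaluated in the dissertation:

```python
import numpy as np

def contract(block):
    """Halve a domain block's resolution by averaging 2x2 neighbourhoods
    (the spatial contraction of the block transform)."""
    n = block.shape[0]
    return block.reshape(n // 2, 2, n // 2, 2).mean(axis=(1, 3))

def encode(image, rsize=4):
    """For each rsize x rsize range block, choose the contracted domain
    block and grey-level map s*d + o minimising the squared collage error."""
    n = image.shape[0]
    domains = [contract(image[i:i + 2 * rsize, j:j + 2 * rsize])
               for i in range(0, n - 2 * rsize + 1, 2 * rsize)
               for j in range(0, n - 2 * rsize + 1, 2 * rsize)]
    code = []
    for i in range(0, n, rsize):
        for j in range(0, n, rsize):
            r = image[i:i + rsize, j:j + rsize].ravel().astype(float)
            best = None
            for k, d in enumerate(domains):
                dv = d.ravel().astype(float)
                A = np.column_stack([dv, np.ones_like(dv)])
                (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                # Clamp the scale so every map is contractive in grey
                # level, then refit the offset for the clamped scale.
                s = float(np.clip(s, -0.9, 0.9))
                o = float(np.mean(r - s * dv))
                err = float(np.sum((s * dv + o - r) ** 2))
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            code.append(best[1:])  # (domain index, scale, offset)
    return code

def decode(code, n, rsize=4, iterations=8):
    """Iterate the collage maps from a blank image; contractivity drives
    the iterates toward an attractor approximating the original."""
    img = np.zeros((n, n))
    for _ in range(iterations):
        domains = [contract(img[i:i + 2 * rsize, j:j + 2 * rsize])
                   for i in range(0, n - 2 * rsize + 1, 2 * rsize)
                   for j in range(0, n - 2 * rsize + 1, 2 * rsize)]
        out = np.empty_like(img)
        idx = 0
        for i in range(0, n, rsize):
            for j in range(0, n, rsize):
                k, s, o = code[idx]
                out[i:i + rsize, j:j + rsize] = s * domains[k] + o
                idx += 1
        img = out
    return img
```

Note that the decoder starts from an arbitrary image: only the transform parameters are stored, which is exactly why the codebook question raised above — why domain blocks of the image itself should be good code vectors — is the crux of the "self-affinity" assumption.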
Proceedings of the Scientific-Practical Conference "Research and Development - 2016"
talent management; sensor arrays; automatic speech recognition; dry separation technology; oil production; oil waste; laser technology
Distributed Real-time Systems - Deterministic Protocols for Wireless Networks and Model-Driven Development with SDL
In a networked system, the communication system is indispensable but often the weakest link with respect to performance and reliability. This holds particularly for wireless communication systems, where the error- and interference-prone medium and the nature of the network topologies pose special challenges. However, there are many wireless network scenarios in which a certain quality of service has to be provided despite these conditions. In this regard, distributed real-time systems, which are increasingly realized as wireless multi-hop networks, are a particular challenge. For such systems, it is of crucial importance that communication protocols are deterministic and provide the required degree of efficiency and predictability, while additionally considering the scarce hardware resources that are a major limiting factor of wireless sensor nodes. This, in turn, places demands not only on the behavior of a protocol but also on its implementation, which has to comply with timing and resource constraints.
The first part of this thesis presents a deterministic protocol for wireless multi-hop networks with time-critical behavior. The protocol is referred to as the Arbitrating and Cooperative Transfer Protocol (ACTP), and is an instance of a binary countdown protocol. It enables the reliable transfer of bit sequences of adjustable length and deterministically resolves contention among nodes based on a flexible priority assignment, with constant delays, and within configurable arbitration radii. The protocol's key requirement is the collision-resistant encoding of bits, which is achieved by the incorporation of black bursts. Besides revisiting black bursts and proposing measures to optimize their detection, robustness, and implementation on wireless sensor nodes, the first part of this thesis presents the mode of operation and time behavior of ACTP. In addition, possible applications of ACTP are illustrated, presenting solutions to well-known problems of distributed systems such as leader election and data dissemination. Furthermore, results of experimental evaluations with off-the-shelf wireless transceivers are outlined to provide evidence of the protocol's implementability and benefits.
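The arbitration idea underlying a binary countdown protocol can be sketched abstractly. The simulation below is a hypothetical illustration only: it models a dominant bit as a burst that every contender can observe on the shared medium, and it ignores ACTP's actual black-burst encoding, timing behavior, and arbitration radii:

```python
def binary_countdown(priorities, width=8):
    """Simulate one arbitration round of a binary countdown protocol.

    Each node broadcasts its priority MSB-first; a '1' bit is encoded as
    a dominant burst detectable by all contenders. A node that sends a
    recessive '0' but observes a dominant '1' withdraws, so the highest
    priority wins deterministically and without collisions.
    """
    contenders = list(priorities)
    for bit in range(width - 1, -1, -1):
        # The medium carries a burst iff any remaining contender sends '1'.
        dominant = any(p >> bit & 1 for p in contenders)
        if dominant:
            # Nodes whose current bit is '0' detect the burst and withdraw.
            contenders = [p for p in contenders if p >> bit & 1]
    return contenders  # survivors all hold the winning priority

print(binary_countdown([5, 12, 9]))  # → [12]
```

Because arbitration takes exactly `width` bit slots regardless of the contenders, the delay is constant — the property that makes this family of protocols attractive for time-critical traffic.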
In the second part of this thesis, the focus shifts from concrete deterministic protocols to their model-driven development with the Specification and Description Language (SDL). Though SDL is well established in the domain of telecommunications and distributed systems, the predictability of its implementations is often insufficient, as previous projects have shown. To increase this predictability and to improve SDL's applicability to time-critical systems, real-time tasks, a proven concept in the design of real-time systems, are transferred to SDL and extended to cover node-spanning system tasks. In this regard, a priority-based execution and suspension model is introduced in SDL, which enables task-specific priority assignments in the SDL specification that are orthogonal to the static structure of SDL systems and control transition execution order at both design and implementation level. Both the formal incorporation of real-time tasks into SDL and their implementation in a novel scheduling strategy are discussed in this context. By means of evaluations on wireless sensor nodes, evidence is provided that these extensions substantially reduce worst-case execution times, and improve the predictability of SDL implementations and the language's applicability to real-time systems.
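The effect of a priority-based execution model can be illustrated with a toy scheduler. The code below is a hypothetical sketch (the names and the convention that a lower value is more urgent are our own) and does not reflect the SDL runtime described in the thesis:

```python
import heapq

def run_tasks(tasks):
    """Priority-driven execution sketch: ready transitions are ordered by
    the priority of the real-time task they belong to, so higher-priority
    work is always dispatched before lower-priority work."""
    ready = []  # min-heap of (priority, sequence, name); lower = more urgent
    for seq, (priority, name) in enumerate(tasks):
        heapq.heappush(ready, (priority, seq, name))
    order = []
    while ready:
        _, _, name = heapq.heappop(ready)
        order.append(name)
    return order

print(run_tasks([(2, "log"), (0, "alarm"), (1, "sample")]))
# → ['alarm', 'sample', 'log']
```

The sequence number breaks priority ties in arrival order, keeping dispatch deterministic — the property the thesis's scheduling strategy needs for predictable worst-case execution times.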
A Topology-Based Approach for Nonlinear Time Series with Applications in Computer Performance Analysis
We present a topology-based methodology for the analysis of experimental data generated by a discrete-time, nonlinear dynamical system. This methodology has significant applications in the field of computer performance analysis. Our approach consists of two parts. In the first part, we propose a novel signal separation algorithm that exploits the continuity of the dynamical system being studied. We use established tools from computational topology to test the connectedness of various regions of state space. In particular, a connected region of space that has a disconnected image under the experimental dynamics suggests the presence of multiple signals in the data. Using this as a guideline, we are able to model experimental data as an Iterated Function System (IFS). We demonstrate the success of our algorithm on several synthetic examples--including a Henon-like IFS. Additionally, we successfully model experimental computer performance data as an IFS. In the second part of the analysis, we represent an experimental dynamical system with an algebraic structure that allows for the computation of algebraic topological invariants. Previous work has shown that a cubical grid and the associated cubical complex are effective tools that can be used to identify isolating neighborhoods and compute the corresponding Conley Index--thereby rigorously verifying the existence of periodic orbits and/or chaotic dynamics. Our contribution is to adapt this technique by altering the underlying data structure--improving flexibility and efficiency. We represent the state space of the dynamical system with a simplicial complex and its induced simplicial multivalued map. This contains information about both geometry and dynamics, whereas the cubical complex is restricted by the geometry of the experimental data. 
This representation has several advantages; most notably, the complexity of the algorithm that generates the associated simplicial multivalued map is linear in the number of data points--as opposed to exponential in dimension for the cubical multivalued map. The synthesis of the two parts of our methodology results in a nonlinear time-series analysis framework that is particularly well suited for computer performance analysis. Complex computer programs naturally switch between 'regimes' and are appropriately modeled as IFSs by part one of our program. Part two of our methodology provides the correct tools for analyzing each regime independently.
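The random-iteration view of an IFS used in part one can be sketched as follows. For a self-contained example we use three contractive affine maps (the Sierpinski triangle system) as a stand-in for the Henon-like IFS of the text; the function and parameter names are our own:

```python
import random

def chaos_game(maps, steps=20000, seed=1):
    """Random-iteration algorithm for an IFS: repeatedly apply a randomly
    chosen component map; the orbit accumulates on the system's attractor."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    orbit = []
    for _ in range(steps):
        # Each map is an affine transform (a, b, c, d, e, f):
        # (x, y) -> (a*x + b*y + e, c*x + d*y + f).
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        orbit.append((x, y))
    return orbit

# Three contractive maps whose attractor is the Sierpinski triangle.
sierpinski = [
    (0.5, 0, 0, 0.5, 0.0, 0.0),
    (0.5, 0, 0, 0.5, 0.5, 0.0),
    (0.5, 0, 0, 0.5, 0.25, 0.5),
]

orbit = chaos_game(sierpinski)
```

A signal-separation task in this setting amounts to deciding, from the orbit alone, which of the component maps produced each point — the problem the topological connectedness test above addresses.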
Type-2 Fuzzy Alpha-cuts
Systems that utilise type-2 fuzzy sets to handle uncertainty have not been implemented in real-world applications, unlike the astonishing number of applications involving standard fuzzy sets. The main reason for this is the complex mathematical nature of type-2 fuzzy sets, which is the source of two major problems. On one hand, it is difficult to manipulate type-2 fuzzy sets mathematically; on the other, the computational cost of processing and performing operations on these sets is very high. Most current research on type-2 fuzzy logic concentrates on finding mathematical means to overcome these obstacles. One way of accomplishing the first task is to develop a meaningful mathematical representation of type-2 fuzzy sets that allows functions and operations to be extended from well-known mathematical forms to type-2 fuzzy sets. To this end, this thesis presents a novel alpha-cut representation theorem as this meaningful mathematical representation: the decomposition of a type-2 fuzzy set into a number of classical sets. The alpha-cut representation theorem is the main contribution of this thesis.
This dissertation also presents a methodology that allows functions and operations to be extended directly from classical sets to type-2 fuzzy sets. A novel alpha-cut extension principle is presented in this thesis and used to define uncertainty measures and arithmetic operations for type-2 fuzzy sets. Throughout this investigation, a plethora of concepts and definitions have been developed for the first time in order to make the manipulation of type-2 fuzzy sets a simple and straightforward task. Worked examples are used to demonstrate the usefulness of these theorems and methods.
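The flavour of alpha-cut decomposition can be illustrated on ordinary (type-1) discrete fuzzy sets, where the construction is classical; the thesis generalises it to type-2 sets. The sketch below, with names of our own choosing, computes crisp cuts and an alpha-cut-wise addition:

```python
def alpha_cut(fuzzy_set, alpha):
    """Alpha-cut of a discrete fuzzy set {element: membership}: the crisp
    set of elements whose membership is at least alpha."""
    return {x for x, mu in fuzzy_set.items() if mu >= alpha}

def add_fuzzy(a, b, alphas=(0.25, 0.5, 0.75, 1.0)):
    """Alpha-cut-wise addition of two discrete fuzzy numbers, in the
    spirit of the extension principle: operate on each crisp cut, then
    recombine by keeping the highest alpha at which each sum appears."""
    result = {}
    for alpha in alphas:
        for x in alpha_cut(a, alpha):
            for y in alpha_cut(b, alpha):
                result[x + y] = max(result.get(x + y, 0.0), alpha)
    return result
```

Because each cut is an independent crisp set, the per-alpha computations have no data dependencies on one another — the property the final paragraph exploits for parallel processing on GPUs.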
Finally, the crisp alpha-cuts of this fundamental decomposition theorem are by definition independent of each other. This dissertation shows that operations on type-2 fuzzy sets using the alpha-cut extension principle can be processed in parallel. This feature is found to be extremely powerful, especially when performing computation on massively parallel graphics processing units. This thesis explores this capability and shows, through different experiments, the achievement of a significant reduction in processing time.
The National Training Directorate, Republic of Sudan