105 research outputs found

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th until Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Graph Sphere: From Nodes to Supernodes in Graphical Models

    High-dimensional data analysis typically focuses on low-dimensional structure, often to aid interpretation and computational efficiency. Graphical models provide a powerful methodology for learning the conditional independence structure in multivariate data by representing variables as nodes and dependencies as edges. Inference is often focused on individual edges in the latent graph. Nonetheless, there is increasing interest in determining more complex structures, such as communities of nodes, for multiple reasons, including more effective information retrieval and better interpretability. In this work, we propose a multilayer graphical model where we first cluster nodes and then, at the second layer, investigate the relationships among groups of nodes. Specifically, nodes are partitioned into "supernodes" with a data-coherent, size-biased tessellation prior which combines ideas from Bayesian nonparametrics and Voronoi tessellations. This construction also accounts for dependence among nodes within a supernode. At the second layer, the dependence structure among supernodes is modelled through a Gaussian graphical model, where the focus of inference is on "superedges". We provide theoretical justification for our modelling choices. We design tailored Markov chain Monte Carlo schemes, which also enable parallel computations. We demonstrate the effectiveness of our approach for large-scale structure learning in simulations and a transcriptomics application. Comment: 71 pages, 18 figures
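
    To make the two-layer idea concrete, the sketch below clusters variables into "supernodes" and then fits a sparse Gaussian graphical model to supernode summaries. This is only an illustration of the node-to-supernode layering: it does not reproduce the paper's size-biased tessellation prior or its MCMC schemes, and the toy data, cluster count and clustering step are our own choices.

```python
# Hypothetical two-layer illustration: cluster nodes into supernodes, then
# learn sparse "superedges" among supernode summaries (not the paper's method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))        # toy data: 200 samples, 30 variables

# Layer 1: partition nodes into supernodes by clustering their correlation profiles.
corr = np.corrcoef(X, rowvar=False)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(corr)

# Summarise each supernode by the mean of its member variables.
S = np.column_stack([X[:, labels == k].mean(axis=1) for k in range(5)])

# Layer 2: sparse precision matrix among supernodes; nonzero entries play the
# role of "superedges".
gl = GraphicalLassoCV().fit(S)
superedges = np.abs(gl.precision_) > 1e-8
print(superedges.astype(int))
```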

    On approximating the stochastic behaviour of Markovian process algebra models

    Markov chains offer a rigorous mathematical framework for describing systems that exhibit stochastic behaviour, and they are supported by a plethora of methodologies for analysing their properties. Stochastic process algebras are high-level formalisms, where systems are represented as collections of interacting components. This compositional approach to modelling allows us to describe complex Markov chains using a compact high-level specification. There is an increasing need to investigate the properties of complex systems, not only in the field of computer science, but also in computational biology. Exploring the stochastic properties of large Markov chains is a demanding task in terms of computational resources. Approximating these stochastic properties can be an effective way to deal with the complexity of large models. In this thesis, we investigate methodologies to approximate the stochastic behaviour of Markovian process algebra models. The discussion revolves around two main topics: approximate state-space aggregation and stochastic simulation. Although these topics are different in nature, they are both motivated by the need to handle complex systems efficiently. Approximate Markov chain aggregation constitutes the formulation of a smaller Markov chain that approximates the behaviour of the original model. The principal hypothesis is that states that can be characterised as equivalent can be adequately represented as a single state. We discuss different notions of approximate state equivalence, and how each of these can be used as a criterion to partition the state space accordingly. Nevertheless, approximate aggregation methods typically require an explicit representation of the transition matrix, a fact that renders them impractical for large models. We propose a compositional approach to aggregation, as a means to efficiently approximate complex Markov models that are defined in a process algebra specification, PEPA in particular. Regarding our contributions to Markov chain simulation, we propose an accelerated method that can be characterised as almost exact, in the sense that it can be made arbitrarily precise. We discuss how it is possible to sample from the trajectory space rather than the transition space. This approach requires fewer random samples than a typical simulation algorithm. Most importantly, our approach does not rely on particular assumptions about the model properties, in contrast to otherwise more efficient approaches.
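
    As a minimal illustration of approximate state-space aggregation (not the thesis's PEPA-specific compositional method), the sketch below lumps the states of a small continuous-time Markov chain into blocks and forms an aggregated generator by weighting outgoing rates with the stationary distribution; the toy generator and partition are our own.

```python
# Hypothetical example: weighted aggregation of a CTMC generator over a
# given state partition (illustrative only; not the thesis's algorithm).
import numpy as np

def stationary(Q):
    """Stationary distribution of a CTMC generator Q (rows sum to zero)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def aggregate(Q, blocks):
    """Aggregate Q over a partition (list of index arrays), weighting each
    state's rates by its share of the stationary probability of its block."""
    pi = stationary(Q)
    m = len(blocks)
    Qa = np.zeros((m, m))
    for I, bi in enumerate(blocks):
        w = pi[bi] / pi[bi].sum()
        for J, bj in enumerate(blocks):
            Qa[I, J] = w @ Q[np.ix_(bi, bj)].sum(axis=1)
    return Qa

# Toy 4-state generator; merge states {0, 1} and {2, 3} into two macro-states.
Q = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 1.0, -2.0,  0.5,  0.5],
              [ 0.0,  1.0, -2.0,  1.0],
              [ 0.5,  0.5,  1.0, -2.0]])
print(aggregate(Q, [np.array([0, 1]), np.array([2, 3])]))
```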

    Textural Difference Enhancement based on Image Component Analysis

    In this thesis, we propose a novel image enhancement method to magnify the textural differences in images with respect to human visual characteristics. The method is intended as a preprocessing step to improve the performance of texture-based image segmentation algorithms. We propose to calculate the six Tamura texture features (coarseness, contrast, directionality, line-likeness, regularity and roughness) using novel measurements. Each feature follows its original interpretation of the corresponding texture characteristic, but is measured using local low-level features, e.g., the direction of local edges, the dynamic range of local pixel intensities, and the kurtosis and skewness of the local image histogram. A discriminant texture feature selection method based on principal component analysis (PCA) is then proposed to find the most representative characteristics for describing textural differences in the image. We decompose the image into pairwise components representing each texture characteristic strongly and weakly, respectively. A set of wavelet-based soft thresholding methods is proposed as the dictionaries of morphological component analysis (MCA) to sparsely highlight the characteristics strongly and weakly in the image. The wavelet-based thresholding methods are proposed in pairs, so that each of the resulting pairwise components exhibits a given characteristic either strongly or weakly. We propose various wavelet-based manipulation methods to enhance the components separately. For each component representing a certain texture characteristic, a non-linear function is proposed to manipulate the wavelet coefficients of the component, so that the component is enhanced with the corresponding characteristic accentuated independently while having little effect on other characteristics. Furthermore, the above three methods are combined into a unified image enhancement framework. Firstly, the texture characteristics differentiating the textures in the image are found. Secondly, the image is decomposed into components exhibiting these texture characteristics respectively. Thirdly, each component is manipulated to accentuate the texture characteristics it exhibits. After re-combining these manipulated components, the image is enhanced with the textural differences magnified with respect to the selected texture characteristics. The proposed textural difference enhancement method is applied prior to both grayscale and colour image segmentation algorithms. The convincing results in improving the performance of different segmentation algorithms demonstrate the potential of the proposed method.
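
    The sketch below illustrates a single ingredient of the framework described above: separating an image into strongly and weakly expressed wavelet detail components by soft thresholding and applying a non-linear gain to the strong part. It assumes PyWavelets is available; the threshold rule and gain function are our own stand-ins, not the thesis's tuned dictionaries or Tamura-feature-driven selection.

```python
# Hypothetical sketch: wavelet soft thresholding splits detail coefficients
# into a "strong" and a "weak" part; the strong part gets a non-linear boost.
import numpy as np
import pywt

def split_and_enhance(img, wavelet="db2", level=2, thr=0.1, gain=1.5):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    strong = [coeffs[0]]                       # keep the approximation band
    weak = [np.zeros_like(coeffs[0])]
    for details in coeffs[1:]:
        s, w = [], []
        for d in details:
            soft = pywt.threshold(d, thr * np.abs(d).max(), mode="soft")
            s.append(np.sign(soft) * np.abs(soft) ** (1.0 / gain))  # boost strong detail
            w.append(d - soft)                                      # residual (weak) detail
        strong.append(tuple(s))
        weak.append(tuple(w))
    return pywt.waverec2(strong, wavelet), pywt.waverec2(weak, wavelet)

img = np.random.rand(64, 64)                   # stand-in for a texture image
enhanced, residual = split_and_enhance(img)
```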

    Long-term nutrient load management and lake restoration: Case of Säkylän Pyhäjärvi (SW Finland)

    Eutrophication caused by anthropogenic nutrient pollution has become one of the most severe threats to water bodies. Nutrients enter water bodies through atmospheric precipitation, industrial and domestic wastewaters, and surface runoff from agricultural and forest areas. As point-source pollution has been significantly reduced in developed countries in recent decades, agricultural non-point sources have increasingly been identified as the largest source of nutrient loading in water bodies. In this study, Lake Säkylän Pyhäjärvi and its catchment are studied as an example of a long-term, voluntary, co-operative model of lake and catchment management. Lake Pyhäjärvi is located in the centre of an intensive agricultural area in southwestern Finland. More than 20 professional fishermen operate in the lake area, and the lake is used as a drinking water source and for various recreational activities. Lake Pyhäjärvi is a good example of a large, shallow lake that suffers from eutrophication and is subject to measures to improve this undesired state under changing conditions. Climate change is one of the most important challenges faced by Lake Pyhäjärvi and other water bodies. The results show that climatic variation affects the amounts of runoff and nutrient loading and their timing during the year. The findings from the study area concerning warm winters and their influence on nutrient loading are in accordance with the IPCC scenarios of future climate change. In addition to nutrient reduction measures, the restoration of food chains (biomanipulation) is a key method in water quality management. The food-web structure in Lake Pyhäjärvi has, however, become disturbed due to mild winters, short ice cover and low fish catches. Ice cover that enables winter seining is extremely important to the water quality and ecosystem of Lake Pyhäjärvi, as the vendace stock is one of the key factors affecting the food web and the state of the lake. New methods for the reduction of nutrient loading and the treatment of runoff waters from agriculture, such as sand filters, were tested in field conditions. The results confirm that the filter technique is an applicable method for nutrient reduction, but further development is needed. The ability of sand filters to absorb nutrients can be improved with nutrient-binding compounds, such as lime. Long-term hydrological, chemical and biological research and monitoring data on Lake Pyhäjärvi and its catchment provide a basis for water protection measures and improve our understanding of the complicated physical, chemical and biological interactions between the terrestrial and aquatic realms. In addition to measurements carried out in field conditions, Lake Pyhäjärvi and its catchment were studied using various modelling methods. In the calibration and validation of models, long-term and wide-ranging time series data proved to be valuable. Collaboration between researchers, modellers and local water managers further improves the reliability and usefulness of models. Lake Pyhäjärvi and its catchment can also be regarded as a good research laboratory from the point of view of the Baltic Sea. The main problem in both is eutrophication caused by excess nutrients, and nutrient loading has to be reduced, especially from agriculture. Mitigation measures are also similar in both cases.

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
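
    As a concrete, simplified illustration of the averaged-rule idea, the sketch below builds Laurie's averaged rule from an n-point Gauss-Legendre rule and the corresponding (n+1)-point anti-Gauss rule, and uses the difference between the two values as an error estimate for the Gauss rule. This is a classical special case, not the optimal generalized averaged formulas studied above, and the helper names are our own.

```python
# Illustrative sketch: Gauss-Legendre rule, Laurie's anti-Gauss companion, and
# the averaged rule used as an error estimate (not the generalized formulas).
import numpy as np

def rule_from_jacobi(alpha, offdiag, mu0=2.0):
    """Golub-Welsch: nodes/weights from a symmetric tridiagonal Jacobi matrix."""
    J = np.diag(alpha) + np.diag(offdiag, 1) + np.diag(offdiag, -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, mu0 * vecs[0, :] ** 2

def legendre_rules(n):
    """n-point Gauss-Legendre rule and the (n+1)-point anti-Gauss rule."""
    k = np.arange(1, n + 1)
    b = k / np.sqrt(4.0 * k**2 - 1.0)          # off-diagonals of the Jacobi matrix
    gauss = rule_from_jacobi(np.zeros(n), b[:-1])
    b_anti = b.copy()
    b_anti[-1] *= np.sqrt(2.0)                 # last beta doubled -> anti-Gauss rule
    anti = rule_from_jacobi(np.zeros(n + 1), b_anti)
    return gauss, anti

def averaged_estimate(f, n):
    (xg, wg), (xa, wa) = legendre_rules(n)
    G = wg @ f(xg)                             # Gauss value
    A = 0.5 * (G + wa @ f(xa))                 # averaged (2n+1)-node rule
    return A, abs(A - G)                       # improved value, error estimate for G

value, err_est = averaged_estimate(np.exp, 5)  # integral of exp on [-1, 1] ~ 2.3504
print(value, err_est)
```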

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.

    A method for system of systems definition and modeling using patterns of collective behavior

    The Department of Defense ship and aircraft acquisition process, with its capability-based assessments and fleet synthesis studies, relies heavily on the assumption that a functional decomposition of higher-level system of systems (SoS) capabilities into lower-level system and subsystem behaviors is both possible and practical. However, SoS typically exhibit “non-decomposable” behaviors (also known as emergent behaviors) for which no widely accepted representation exists. The presence of unforeseen emergent behaviors, particularly undesirable ones, can make systems vulnerable to attacks, hacks, or other exploitation, or can cause delays in acquisition program schedules and cost overruns in order to mitigate them. The International Council on Systems Engineering has identified the development of methods for predicting and managing emergent behaviors as one of the top research priorities for the Systems Engineering profession. Therefore, this thesis develops a method for rendering quantifiable SoS emergent properties and behaviors traceable to patterns of interaction of their constitutive systems, so that exploitable patterns identified during the early stages of design can be accounted for. This method is designed to fill two gaps in the literature: first, the lack of an approach for mining data to derive a model (i.e., an equation) of the non-decomposable behavior; second, the lack of an approach for qualitatively and quantitatively associating emergent behaviors with the components that cause them. A definition for emergent behavior is synthesized from the literature, as well as necessary conditions for its identification. An ontology of emergence that enables studying the emergent behaviors exhibited by self-organized systems via numerical simulations is adapted for this thesis in order to develop the mathematical approach needed to satisfy the research objective. Within the confines of two carefully qualified assumptions (that the model is valid, and that the model is efficient), it is argued that simulated emergence is bona fide emergence, and that simulations can be used for experimentation without sacrificing rigor. This thesis then puts forward three hypotheses. The first hypothesis is that self-organized structures imply the presence of a form of data compression, and this compression can be used to explicitly calculate an upper bound on the number of emergent behaviors that a system can possess. The second hypothesis is that the set of numerical criteria for detecting emergent behavior derived in this research constitutes sufficient conditions for identifying weak and functional emergent behaviors. The third hypothesis states that affecting the emergent properties of these systems will have a bigger impact on the system’s performance than affecting any single component of that system. Using the method developed in this thesis, exploitable properties are identified and component behaviors are modified to attempt the exploit. Changes in performance are evaluated using problem-specific measures of merit. The experiments find that Hypothesis 2 is false (the numerical criteria are not sufficient conditions), by identifying instances where the numerical criteria produce a false positive. As a result, a set of sufficient conditions for emergent behavior identification remains to be found. Hypothesis 1 was also falsified, based on a worst-case scenario in which the largest possible number of obtainable emergent behaviors was compared against the upper bound computed from the smallest possible data compression of a self-organized system. Hypothesis 3, on the other hand, was supported, as it was found that new behavior rules based on component-level properties provided less improvement in performance against an adversary than rules based on system-level properties. Overall, the method is shown to be an effective, systematic approach to non-decomposable behavior exploitation, and an improvement over the modern, largely ad hoc approach.
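
    The toy example below is our own illustration of the intuition behind Hypothesis 1, namely that self-organized structure goes hand in hand with data compression: the trajectory of an ordered cellular automaton compresses far better than pure noise. It does not implement the thesis's numerical criteria, upper bound, or measures of merit.

```python
# Hypothetical illustration: compressibility as a crude indicator of
# self-organized structure (not the thesis's detection criteria).
import zlib
import numpy as np

def ca_trajectory(rule=110, width=256, steps=256, seed=0):
    """Space-time trajectory of a 1-D elementary cellular automaton."""
    rng = np.random.default_rng(seed)
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = rng.integers(0, 2, width, dtype=np.uint8)
    rows = [state]
    for _ in range(steps - 1):
        idx = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
        state = table[idx]                      # apply the local update rule
        rows.append(state)
    return np.array(rows)

def compression_ratio(bits):
    raw = np.packbits(bits).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

structured = ca_trajectory(rule=110)            # self-organizing dynamics
noise = np.random.default_rng(1).integers(0, 2, structured.shape, dtype=np.uint8)
print(compression_ratio(structured), compression_ratio(noise))  # structured ratio is much lower
```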

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas and concepts from part of the huge field of evolutionary algorithms (EA).
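
    For readers new to the area, the following minimal (mu + lambda) evolution strategy is a generic example of the kind of algorithm the book covers; the benchmark function and parameter values are our own choices.

```python
# Minimal (mu + lambda) evolution strategy on a toy benchmark (illustration only).
import numpy as np

def evolve(fitness, dim=10, mu=10, lam=40, sigma=0.3, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    parents = rng.standard_normal((mu, dim))
    for _ in range(generations):
        # Each offspring is a Gaussian mutation of a randomly chosen parent.
        idx = rng.integers(0, mu, lam)
        offspring = parents[idx] + sigma * rng.standard_normal((lam, dim))
        pool = np.vstack([parents, offspring])
        parents = pool[np.argsort([fitness(x) for x in pool])[:mu]]  # keep the mu best
    return parents[0]

sphere = lambda x: float(np.sum(x * x))   # simple benchmark to minimize
best = evolve(sphere)
print(sphere(best))                        # small value; the fixed step size limits final precision
```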