76 research outputs found

    MACRO-PRUDENTIAL POLICY AND SYSTEMIC RISK: A STRUCTURAL APPROACH

    The aim of this paper is to provide a structural review by analysing aspects of the relationship between prudential policy and systemic risk. It addresses the current research challenges associated with a lack of macro-prudential policy formalisation, guidance regarding its implementation, and effectiveness measurement. Given the rising levels of interconnectedness between financial markets, the paper addresses the potential contagion or spill-over effects that foster change in systemic risk, especially in the case of market size differences. Finally, the paper discusses challenges associated with macro- versus micro-prudential policy implementation, addressing difficulties in the measurement of systemic risk. Key words: systemic risk, macro-prudential policy, financial stability, contagion. DOI: http://dx.doi.org/10.15181/tbb.v83i2.206

    On Distributed Storage Codes

    This thesis studies distributed storage systems. Interest in such systems has grown considerably due to the increasing amount of information that needs to be stored in data centers and various kinds of cloud systems. There are many kinds of solutions for storing information on distributed devices, depending on the needs of the system designer. This thesis studies the design of such storage systems as well as their fundamental limits. The subjects of interest include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved. In the case of locally repairable codes, the minimum distance is studied. Constructions are given for exact-repairing codes between the minimum bandwidth regeneration (MBR) and minimum storage regeneration (MSR) points; these codes exceed the time-sharing line of the extremal points in many cases. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always at most the capacity of a homogeneous system with symmetric repair, average node size, and average repair bandwidth. A randomized construction of a locally repairable code with good minimum distance is given: it is shown that a random linear code of a certain natural type has good minimum distance with high probability. Other properties of locally repairable codes are also studied.
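    For context on the MBR/MSR trade-off mentioned above, the following is the standard cut-set bound on the capacity of a regenerating code under functional repair (a well-known result from the regenerating-codes literature, stated here for orientation rather than quoted from the thesis). With per-node storage \(\alpha\), repair from \(d\) helper nodes each contributing \(\beta\), and reconstruction from any \(k\) nodes, the maximum file size is

    \[
    \mathcal{C} \;=\; \sum_{i=0}^{k-1} \min\{\alpha,\;(d-i)\beta\}.
    \]

    The MSR point minimizes the per-node storage (\(\alpha = \mathcal{C}/k\)) subject to this bound, while the MBR point minimizes the total repair bandwidth \(d\beta\); the constructions above operate between these two extremes.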

    Big Data in Finance: Highlights from the Big Data in Finance Conference Hosted at the University of Michigan October 27-28, 2016

    How can financial data be made more accessible and more secure, as well as more useful to regulators, market participants, and the public? As new data sets are created, opportunities emerge. Vast quantities of financial data may help identify emerging risks, enable market participants and regulators to see and better understand financial networks and interconnections, enhance financial stability, bolster consumer protection, and increase access to the underserved. Data can also increase transparency in the financial system for market participants, regulators, and the public. These data sets, however, can raise significant questions about security and privacy; ensuring data quality; protecting against discrimination or privacy intrusions; managing, synthesizing, presenting, and analyzing data in usable form; and sharing data among regulators, researchers, and the public. Moreover, any conflicts among regulators and financial firms over such data could create opportunities for regulatory arbitrage and gaps in understanding risk in the financial system. The Big Data in Finance Conference, co-sponsored by the federal Office of Financial Research and the University of Michigan Center on Finance, Law, and Policy, and held at the University of Michigan Law School on October 27-28, 2016, covered a number of important and timely topics in the worlds of Big Data and finance. This paper highlights several key issues and conference takeaways as originally presented by the contributors and panelists who took part.

    Macroprudential oversight, risk communication and visualization

    This paper discusses the role of risk communication in macroprudential oversight and of visualization in risk communication. Beyond the surge in data availability and precision, the transition from firm-centric to system-wide supervision imposes vast data needs. Moreover, in addition to internal communication, as in any organization, broad and effective external communication of timely information related to systemic risks is a key mandate of macroprudential supervisors, further stressing the importance of simple representations of complex data. This paper focuses on the background and theory of information visualization and visual analytics, as well as techniques within these fields, as potential means for risk communication. We define the task of visualization in risk communication, discuss the structure of macroprudential data, and review visualization techniques applied to systemic risk. We conclude that two essential, yet rare, features for supporting the analysis of big data and the communication of risks are analytical visualizations and interactive interfaces. For visualizing the so-called macroprudential data cube, we provide the VisRisk platform with three modules: plots, maps, and networks. While VisRisk is illustrated here with five web-based interactive visualizations of systemic risk indicators and models, the platform enables, and is open to, the visualization of any data from the macroprudential data cube.
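    As a rough illustration of the kind of network view such a platform might offer (a minimal sketch, not the VisRisk implementation; the banks, risk scores, and exposures below are made up), one can render an interbank exposure network with node size scaled by a systemic risk indicator:

```python
# Illustrative sketch (not the VisRisk API): an interbank exposure
# network drawn with node size scaled by a systemic risk indicator.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical data: banks, risk scores, and bilateral exposures.
risk = {"Bank A": 0.9, "Bank B": 0.4, "Bank C": 0.7, "Bank D": 0.2}
exposures = [("Bank A", "Bank B", 5.0), ("Bank A", "Bank C", 3.0),
             ("Bank B", "Bank D", 1.5), ("Bank C", "Bank D", 2.5)]

G = nx.DiGraph()
G.add_weighted_edges_from(exposures)

pos = nx.spring_layout(G, seed=42)  # force-directed layout
nx.draw_networkx_nodes(G, pos, node_size=[3000 * risk[n] for n in G])
nx.draw_networkx_edges(G, pos,
                       width=[0.5 * w for _, _, w in G.edges(data="weight")])
nx.draw_networkx_labels(G, pos, font_size=8)
plt.axis("off")
plt.show()
```

    A force-directed (spring) layout is used here because it tends to reveal the cluster structure of highly connected financial networks.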

    Computational methods and tools for protein phosphorylation analysis

    Signaling pathways represent a central regulatory mechanism of biological systems, where a key event in their correct functioning is the reversible phosphorylation of proteins. Protein phosphorylation affects at least one-third of all proteins and is the most widely studied posttranslational modification. Phosphorylation analysis is still generally perceived as difficult or cumbersome and not readily attempted by many, despite the high value of such information. Specifically, determining the exact location of a phosphorylation site is currently considered a major hurdle, so reliable approaches are necessary for the detection and localization of protein phosphorylation. The goal of this PhD thesis was to develop computational methods and tools for mass spectrometry-based protein phosphorylation analysis, particularly the validation of phosphorylation sites. In the first two studies, we developed methods for improved identification of phosphorylation sites in MALDI-MS: in the first study through the automatic combination of spectra from multiple matrices, and in the second through an optimized protocol for sample loading and washing conditions. In the third study, we proposed and evaluated the hypothesis that in ESI-MS, tandem CID and HCD spectra of phosphopeptides can be accurately predicted and used in spectral library searching. This novel strategy for phosphosite validation and identification offered accuracy that outperformed other currently popular methods and proved applicable to complex biological samples. Finally, we significantly improved the performance of our command-line prototype tool and added a graphical user interface as well as options for customizable simulation parameters and for filtering selected spectra, peptides, or proteins. The new software, SimPhospho, is open source and can be easily integrated into a phosphoproteomics data analysis workflow. Together, these bioinformatics methods and tools enable confident phosphosite assignment and improve reliable phosphoproteome identification and reporting.
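    Spectral library searching of the kind mentioned above typically scores an experimental spectrum against a predicted library spectrum with a normalized dot product; the following is a minimal sketch under that assumption (not the SimPhospho code; the peak lists are invented):

```python
# Minimal sketch of spectral library matching via normalized dot product
# (not the SimPhospho implementation; peak lists are hypothetical).
import numpy as np

def bin_spectrum(mz, intensity, bin_width=0.5, max_mz=2000.0):
    """Bin a peak list onto a fixed m/z grid."""
    bins = np.zeros(int(max_mz / bin_width) + 1)
    for m, i in zip(mz, intensity):
        bins[int(m / bin_width)] += i
    return bins

def dot_score(spec_a, spec_b):
    """Normalized dot product between two binned spectra (0..1)."""
    a, b = np.sqrt(spec_a), np.sqrt(spec_b)  # damp dominant peaks
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Experimental spectrum vs. a predicted library spectrum.
exp = bin_spectrum([147.1, 175.1, 982.4], [120.0, 80.0, 300.0])
lib = bin_spectrum([147.1, 175.1, 982.9], [100.0, 90.0, 280.0])
print(f"match score: {dot_score(exp, lib):.3f}")
```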

    Effects of regular use of scalable, technology enhanced solution for primary mathematics education

    Mathematics is one of the key subjects in any school curriculum, and most teachers agree that mathematical skills are important for students to master. There is an abundance of research on learning mathematics, and a consensus exists among researchers that technology can enhance the learning process. However, many factors need to be taken into consideration when introducing technology into mathematics teaching. Developing a more natural collaboration between learning technology experts, teachers, and students ensures that all stakeholders are considered. Involving teachers early on helps develop an enduring commitment to innovations and practical solutions. Moreover, creating a culture of collaboration between experts in the field and teachers brings to bear the best of what both worlds have to offer. This thesis synthesizes six papers and offers additional findings on how technology experts can collaborate with elementary teachers to improve student learning outcomes. We focus on managing educational change in ways that improve the sustainability of innovations. We also explore how technical and teaching experts co-create effective lesson plans. In one of the six papers we collected and reported teachers' responses to survey questions covering typical usage patterns on a platform; this direct feedback was incorporated to improve the technical solutions. Moreover, one study was conducted abroad to measure the effect of culture on the teaching and learning process. Evidence of the effectiveness of technologically enhanced lessons and corresponding homework was based on multiple studies in grades 1-3, covering 379 students. The effectiveness of educational technology was measured with two variables: student performance in mathematics, based on the learning objectives specified in the curriculum, and arithmetic fluency, measured by how rapidly and accurately students solved basic arithmetic operations. Statistically significant findings show that educational technology improved both target variables when students who used it were compared to students who did not. An additional effect size analysis was conducted to verify the results and compare them with previous research; platform use produced the same or better effect than reported in previous studies. Based on teacher feedback and user growth on the platform, we managed to integrate technology into the regular school classroom in meaningful and sustainable ways, and to support teachers in their practice in a manner that resulted in noticeable student achievement gains. A survey revealed a need to emphasize newly introduced platform features in teacher training programs. Teachers also reported having a positive attitude towards the platform, and the initiative gained wide acceptance among their peers.
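    The effect size analysis referred to above is conventionally a standardized mean difference such as Cohen's d; the following is a minimal sketch with invented scores (not the study's data or code):

```python
# Minimal sketch of an effect size (Cohen's d) comparison between a
# technology-using group and a control group; scores are made up.
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference with a pooled standard deviation."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    nt, nc = len(t), len(c)
    pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

tech = [14, 17, 15, 19, 16, 18, 20, 15]  # hypothetical test scores
ctrl = [13, 14, 12, 16, 15, 13, 17, 14]
print(f"Cohen's d = {cohens_d(tech, ctrl):.2f}")
```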

    Algorithmic Techniques in Gene Expression Processing. From Imputation to Visualization

    The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data grows rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses that need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) missing data entries with estimated values. Missing value imputation is commonly used to make incomplete data complete, making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets and observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods: on most data sets the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian principal component analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as those using gene microarray techniques. Such networks are typically very large and highly connected, so fast algorithms are needed to produce visually pleasing layouts. We developed a computationally efficient way to produce layouts of large biological interaction networks; the algorithm uses multilevel optimization within the regular force-directed graph layout algorithm.
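    As a concrete illustration of the k-NN imputation discussed above (a minimal sketch using scikit-learn's KNNImputer rather than the thesis' own implementation; the expression values and missing-value pattern are made up):

```python
# Minimal sketch of k-NN missing value imputation on a small expression
# matrix (genes x samples); the values and NaN pattern are invented.
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[2.1, 3.4, np.nan, 1.8],
              [2.0, 3.6, 4.1,    1.7],
              [0.5, np.nan, 1.2, 0.4],
              [0.6, 0.9,  1.1,   0.5]])

# Each missing entry is replaced by an average over the k most similar
# rows, where similarity is measured on the observed entries.
imputer = KNNImputer(n_neighbors=2)
X_complete = imputer.fit_transform(X)
print(np.round(X_complete, 2))
```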

    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. Moore's law and Dennard scaling worked hand in hand for five decades to deliver exponential performance gains, first via single-core and later via multi-core designs. However, recalculating Dennard scaling for recent small technology nodes shows that the ongoing multi-core growth demands an exponentially increasing thermal design power to achieve a linear performance increase. This process hits a power wall, which raises the amount of dark or dim silicon on future multi-/many-core chips more and more. Furthermore, the increasing number of transistors on a single chip, susceptibility to internal defects, and aging phenomena, all exacerbated by high chip thermal density, make monitoring and managing chip reliability before and after activation a necessity. The approaches and experimental investigations proposed in this thesis follow two main tracks: 1) power awareness and 2) reliability awareness in the dark silicon era; the two tracks are later combined. In the first track, the main goal is to maximize returns in terms of the most important features of chip design, such as performance and throughput, while honoring the maximum power limit. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of following Moore's law can still be achieved in the dark silicon era, albeit to a lesser extent. In the track of reliability awareness in the dark silicon era, we show that dark silicon can be considered an opportunity to be exploited for several benefits, namely lifetime increase and online testing. We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a certain target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing to the cores. After demonstrating power and reliability awareness in the presence of dark silicon, two approaches are discussed as case studies in which the two are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management, while the second provides a trade-off between workload performance and system reliability by simultaneously honoring the given power budget and target reliability.
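    The power wall argument above can be made concrete with the textbook CMOS power relation (a standard identity, not taken from the thesis):

    \[
    P \;\approx\; \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{dynamic}} \;+\; \underbrace{V_{dd}\, I_{leak}}_{\text{static}},
    \]

    where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, and \(f\) the clock frequency. Under ideal Dennard scaling, each generation shrinks dimensions and \(V_{dd}\) by \(\kappa \approx 0.7\), keeping power density constant; once \(V_{dd}\) stops scaling because of leakage, power density grows roughly as \(1/\kappa^{2}\) per generation, so at a fixed thermal design power an ever-larger fraction of the chip must remain dark or dim.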

    An Algebraic Approach to Nivat's Conjecture

    This thesis introduces a new, algebraic method to study multidimensional configurations, sometimes also called words, which have low pattern complexity. This is the setting of several open problems, most notably Nivat's conjecture, which is a generalization of the Morse-Hedlund theorem to two dimensions, and the periodic tiling problem of Lagarias and Wang. We represent configurations as formal power series in d variables, where d is the dimension. This allows us to study the ideal of polynomial annihilators of the series. In the two-dimensional case we give a detailed description of the ideal, which can be applied to obtain partial results on the aforementioned combinatorial problems. In particular, we show that configurations of low complexity can be decomposed into sums of periodic configurations. In the two-dimensional case, one such decomposition can be described in terms of the annihilator ideal. We apply this knowledge to obtain the main result of this thesis: an asymptotic version of Nivat's conjecture. We also prove Nivat's conjecture for configurations which are sums of two periodic ones, and as a corollary reprove the main result of Cyr and Kra from [CK15].
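    For reference, the central objects above can be stated precisely (standard definitions from the literature, not quoted from the thesis). For a configuration \(c : \mathbb{Z}^2 \to A\), let \(P_c(m,n)\) denote the number of distinct \(m \times n\) patterns occurring in \(c\); Nivat's conjecture asserts that

    \[
    P_c(m,n) \le mn \ \text{for some}\ m,n \ge 1 \;\Longrightarrow\; c \ \text{is periodic}.
    \]

    In the power series view, \(c\) is identified with the formal series \(\sum_{(i,j) \in \mathbb{Z}^2} c_{i,j}\, x^i y^j\), and a polynomial \(f(x,y)\) annihilates \(c\) when the formal product \(f \cdot c\) is the zero series; these annihilators form the ideal studied in the thesis.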

    Knowledge representation and text mining in biomedical, healthcare, and political domains

    Knowledge representation and text mining can be employed to discover new knowledge and develop services by using the massive amounts of text gathered by modern information systems. The applied methods should take into account the domain-specific nature of knowledge. This thesis explores knowledge representation and text mining in three application domains. Biomolecular events can be described very precisely and concisely with appropriate representation schemes. Protein–protein interactions are commonly modelled in biological databases as binary relationships, whereas the complex relationships used in text mining are rich in information. The experimental results of this thesis show that complex relationships can be reduced to binary relationships and that it is possible to reconstruct complex relationships from mixtures of linguistically similar relationships. This encourages the extraction of complex relationships from the scientific literature even if binary relationships are required by the application at hand. The experimental results on cross-validation schemes for pair-input data help to understand how existing knowledge regarding dependent instances (such as those concerning protein–protein pairs) can be leveraged to improve the generalisation performance estimates of learned models. Healthcare documents and news articles contain knowledge that is more difficult to model than biomolecular events and tend to have larger vocabularies than biomedical scientific articles. This thesis describes an ontology that models patient education documents and their content in order to improve the availability and quality of such documents. The experimental results of this thesis also show that the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) measures are a viable option for the automatic evaluation of textual patient record summarisation methods, and that the area under the receiver operating characteristic curve can be used in large-scale sentiment analysis. The sentiment analysis of Reuters news corpora suggests that the Western mainstream media portrays China negatively in politics-related articles but not in general, which provides new evidence to consider in the debate over the image of China in the Western media.
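    As a small illustration of the AUC-based evaluation mentioned above (a minimal sketch with invented labels and classifier scores, not the thesis' pipeline):

```python
# Minimal sketch of evaluating a sentiment classifier with the area
# under the ROC curve; labels and scores are made up.
from sklearn.metrics import roc_auc_score

# 1 = negative portrayal, 0 = not negative (hypothetical article labels)
y_true   = [1, 0, 1, 1, 0, 0, 1, 0]
y_scores = [0.91, 0.20, 0.65, 0.80, 0.35, 0.45, 0.55, 0.10]

print(f"AUC = {roc_auc_score(y_true, y_scores):.2f}")
```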