
    Designing capital-ratio triggers for Contingent Convertibles

    Contingent Convertible (CoCo) bonds represent a novel category of debt instruments recently introduced into the financial landscape. Their primary role is to bolster financial stability by maintaining healthy capital levels for the issuing entity. This is achieved by converting the bond principal into equity, or writing it down, once minimum capital ratios are violated. CoCos aim to recapitalize the bank before it reaches the brink of collapse, thereby avoiding a state bailout at a huge cost to the taxpayer. Under normal circumstances, CoCo bonds operate as ordinary coupon-paying bonds and are converted into the issuer's equity only when capital ratios become insufficient. However, the CoCo market has struggled to expand over the years, and the recent tumult involving Credit Suisse and its enforced CoCo write-off has underscored these challenges. The focus of this research is twofold: on the one hand, to understand the reasons for this failure, and, on the other, to modify the instrument's underlying design in order to restore its intended purpose: to act as a liquidity buffer that strengthens the capital structure of the issuing firm. The cornerstone of the proposed work is the design of a self-adaptive model for leverage. This model features an automatic conversion that does not hinge on the judgment of regulatory authorities. Notably, it allows the issuer's debt-to-assets ratio to remain within predetermined boundaries, where the likelihood of default on outstanding liabilities remains minimal. The pricing of the proposed instruments is difficult because the conversion is dynamic. We view CoCos essentially as a portfolio of different financial instruments; this treatment makes it easier to analyze their response to market events that may or may not trigger their conversion to equity. We provide evidence of the model's effectiveness and discuss the implications of its implementation in light of the regulatory environment and best market practices.
    Funding: RU Research Fund; Icelandic Research Fund
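
    The conversion logic can be illustrated with a minimal sketch of a rule-based capital-ratio trigger: once the debt-to-assets ratio drifts above a predetermined boundary, part of the CoCo principal is converted into equity, pulling leverage back inside the admissible range. This is an illustration only, not the thesis model; the class names, the 0.9 boundary, and the full-conversion assumption are hypothetical.

        # Minimal sketch of a rule-based capital-ratio trigger (illustrative only;
        # not the thesis model). Names and threshold values are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class BalanceSheet:
            assets: float            # total assets
            debt: float              # total debt outstanding, including CoCo principal
            coco_principal: float    # CoCo principal still outstanding

            @property
            def debt_to_assets(self) -> float:
                return self.debt / self.assets

        def apply_trigger(bs: BalanceSheet, upper_bound: float = 0.9,
                          conversion_fraction: float = 1.0) -> BalanceSheet:
            """Convert (part of) the CoCo principal into equity once the issuer's
            debt-to-assets ratio breaches a predetermined upper boundary."""
            if bs.debt_to_assets <= upper_bound:
                return bs  # normal regime: the CoCo keeps behaving like a coupon bond
            converted = conversion_fraction * bs.coco_principal
            # Conversion retires debt (principal becomes equity); assets are unchanged.
            return BalanceSheet(assets=bs.assets,
                                debt=bs.debt - converted,
                                coco_principal=bs.coco_principal - converted)

        # Example: leverage has drifted above the hypothetical 0.9 boundary.
        stressed = BalanceSheet(assets=100.0, debt=92.0, coco_principal=10.0)
        print(apply_trigger(stressed).debt_to_assets)  # 0.82 after full conversion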

    Physiological responses and cognitive behaviours: Measures of heart rate variability index language knowledge

    Over the past decades, the focus has been on developing methods that allow researchers to tap into aspects of cognition that are not directly observable. This includes linguistic knowledge and skills, which develop largely without awareness and may therefore be difficult or impossible to articulate. Building on the relation between language cognition and the nervous system, we examine whether Heart Rate Variability (HRV), a cardiovascular measure that indexes Autonomic Nervous System activity, can be used to assess implicit language knowledge. We test the potential of HRV to detect whether individuals possess grammatical knowledge and explore how sensitive the cardiovascular response is. Forty-one healthy, British English-speaking adults listened to 40 English speech samples, half of which contained grammatical errors. Thought Technology's 5-channel ProComp 5 encoder tracked heart rate via a BVP-Flex/Pro sensor attached to the middle finger of the non-dominant hand, at a rate of 2048 samples per second. A Generalised Additive Mixed Effects Model confirmed a cardiovascular response to grammatical violations: there is a statistically significant reduction in HRV, as indexed by NN50, in response to stimuli that contain errors. The cardiovascular response reflects the extent of the linguistic violations: NN50 decreases linearly with an increase in the number of errors up to a certain level, after which HRV remains constant. This observation brings into focus a new dimension of the intricate relationship between physiology and cognition. Being able to use a highly portable and non-intrusive technique with language stimuli also creates exciting possibilities for assessing the language knowledge of individuals from a range of populations in their natural environment and in authentic communicative situations.
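
    For reference, NN50 is conventionally computed as the number of successive inter-beat (NN/RR) intervals that differ by more than 50 ms. The sketch below, with hypothetical interval values, illustrates that definition and is not the authors' analysis pipeline.

        # Illustrative computation of the NN50 HRV index from a series of
        # inter-beat (RR/NN) intervals in milliseconds.
        import numpy as np

        def nn50(rr_intervals_ms: np.ndarray) -> int:
            """Count successive inter-beat intervals differing by more than 50 ms."""
            diffs = np.abs(np.diff(rr_intervals_ms))
            return int(np.sum(diffs > 50))

        def pnn50(rr_intervals_ms: np.ndarray) -> float:
            """NN50 as a proportion of all successive-interval differences."""
            return nn50(rr_intervals_ms) / max(len(rr_intervals_ms) - 1, 1)

        # Hypothetical intervals (ms): two of the four successive differences
        # exceed 50 ms, so NN50 = 2 and pNN50 = 0.5.
        rr = np.array([812, 790, 845, 880, 820])
        print(nn50(rr), pnn50(rr))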

    Brittle-viscous deformation cycles at the base of the seismogenic zone in the continental crust

    The main goal of the study was to determine the dynamic cycle of ductile-brittle deformation and to characterise the fluid pathways, at different scales, of a brittle-viscous fault zone active at the base of the seismogenic crust. The objects of analysis are samples from the sinistral strike-slip fault zone BFZ045 from Olkiluoto (SW Finland), located at the site of a deep geological repository for nuclear waste. Combined microstructural analysis, electron backscatter diffraction (EBSD), and mineral chemistry were applied to reconstruct the variations in pressure, temperature, fluid pressure, and differential stress that mediated deformation and strain localization along BFZ045 across the brittle-ductile transition zone (BDTZ). Ductile deformation took place at 400-500 °C and 3-4 kbar, and recrystallized grain-size piezometry for quartz documents a progressive increase in differential stress during mylonitization, from ca. 50 MPa to ca. 120 MPa. The increase in differential stress was localised towards the shear zone centre, which was eventually overprinted by brittle deformation in a narrowing shear zone. Cataclastic deformation occurred at lower temperatures, down to ≥ 320 °C, and was not further overprinted by mylonitic creep. Porosity estimates were obtained by combining X-ray micro-computed tomography (µCT), mercury intrusion porosimetry, He pycnometry, and microstructural analysis. Low porosity values (0.8-4.4%) for the different rock types, pore sizes of 2-20 µm representative of pore connectivity, and microstructural observations suggest a dynamic cycle of fracturing and sealing, mostly controlled by ductile deformation. Similarly, fracture orientation analysis indicates that the mylonitic precursor of BFZ045 played an important role in the localization of the brittle deformation. This thesis highlights that the ductile-brittle deformation cycle in BFZ045 was controlled by transient oscillations in fluid pressure in a narrowing shear zone deforming at progressively higher differential stress during cooling.
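
    Recrystallized grain-size piezometry rests on an empirical power law of the form d = A * sigma^(-p), which can be inverted to estimate differential stress from measured grain size. The sketch below uses placeholder constants, since the abstract does not state which quartz calibration was applied; the point is only the inverse relationship between grain size and stress.

        # Generic grain-size piezometer d = A * sigma**(-p), inverted to estimate
        # differential stress (MPa) from recrystallized grain size (micrometres).
        # A_UM and P_EXP are placeholders, not the calibration used in the thesis.
        A_UM = 3631.0
        P_EXP = 1.26

        def differential_stress_mpa(grain_size_um: float) -> float:
            """Invert d = A * sigma**(-p): sigma = (d / A)**(-1 / p)."""
            return (grain_size_um / A_UM) ** (-1.0 / P_EXP)

        # Smaller recrystallized grains record higher differential stress.
        for d in (26.0, 15.0, 9.0):
            print(f"d = {d:5.1f} um -> sigma ~ {differential_stress_mpa(d):6.1f} MPa")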

    Information actors beyond modernity and coloniality in times of climate change: A comparative design ethnography on the making of monitors for sustainable futures in Curaçao and Amsterdam, between 2019-2022

    In his dissertation, Mr. Goilo developed a cutting-edge theoretical framework for an Anthropology of Information. The study compares information in the context of modernity in Amsterdam and coloniality in Curaçao through the making process of monitors, and develops five ways of understanding how information can act towards sustainable futures. The research also discusses how the two contexts, modernity and coloniality, have been in informational symbiosis for centuries, which is producing negative informational side effects in the age of the Anthropocene. By exploring the modernity-coloniality symbiosis of information, the author explains how scholars, policymakers, and data analysts can act on the historical and structural roots of contemporary global inequities related to the production and distribution of information. Ultimately, the five theses propose conditions for the collective production of knowledge towards a more sustainable planet.

    Influence of Preaching’s Rhetorical Appeal on Evangelical Listeners’ Motivation

    Preaching is a form of rhetorical narratology aimed at persuading its audience, via sermons, to experience a renewal of the mind and the transformation of their lives. While previous research established that listeners comprehend sermons through their rhetorical appeal, it has been unclear how this motivates evangelical listeners to act. The purpose of this qualitative narrative study was to explore how the rhetorical appeal of preaching influences evangelical listeners' motivation at evangelical churches in Savannah, Georgia. A comprehensive approach to exploring a sermon's rhetorical appeal was used, focusing jointly on individual perception and social context. Narrative Transportation Theory served as the theoretical framework, and 34 participants from six churches were interviewed to reach saturation. The findings showed that the rhetorical appeal embedded in preaching, together with its narrative essence, influences evangelical listener motivation. In addition, listeners subconsciously understand that aspects of rhetoric and narrative work together in sermons to influence their motivation. The study identified three themes, seven categories, 13 conditions, and 32 codes relevant for rhetorical appeal to be effective and for motivation to occur. The three themes of Relatability, Applicability, and Engagement were aligned with Ethos, Logos, and Pathos, and then integrated with Environmental, Cognitive, and Behavioral functions to create the Sermon Listener Motivation Triangle. This study's corroboration of preaching's collaborative nature between the perfectly divine and the imperfectly human is shared in the hope of helping speakers prepare scripturally authentic sermons and communicate in engaging ways that inspire change.

    Spatial epidemiology of a highly transmissible disease in urban neighbourhoods: Using COVID-19 outbreaks in Toronto as a case study

    The emergence of infectious diseases in an urban area involves a complex interaction between the socioecological processes of the neighbourhood and urbanization. As a result, such an urban environment can incubate new epidemics and spread disease more rapidly in densely populated areas than elsewhere. Most recently, the coronavirus disease 2019 (COVID-19) pandemic has brought unprecedented challenges around the world. Toronto, the capital city of Ontario, Canada, has been severely impacted by COVID-19. Understanding the spatiotemporal patterns of the disease and the key drivers of those patterns is imperative for designing and implementing an effective public health program to control its spread. This dissertation was designed to contribute to the global research effort on the COVID-19 pandemic by conducting spatial epidemiological studies that enhance our understanding of the disease's epidemiology in a spatial context and thereby guide public health strategies for controlling it. Comprising three original research manuscripts, the dissertation focuses on the spatial epidemiology of COVID-19 at a neighbourhood scale in Toronto. Each manuscript makes scientific contributions and, through novel and advanced methodological approaches, enhances our knowledge of how interactions between different socioecological processes in the neighbourhood and urbanization can influence the spatial spread and patterns of COVID-19 in Toronto. The findings of the analyses are intended to inform public health policy and neighbourhood-based disease intervention initiatives by public health authorities, local government, and policymakers. The first manuscript analyzes the globally and locally variable socioeconomic drivers of COVID-19 incidence and examines how these relationships vary across neighbourhoods. In the global model, lower levels of education and the percentage of immigrants were found to be positively associated with increased risk for COVID-19. The study provides a methodological framework for identifying local variations in the association between COVID-19 risk and socioeconomic factors in an urban environment by applying a local multiscale geographically weighted regression (MGWR) modelling approach. The MGWR model improves on the methods used in earlier COVID-19 studies for identifying local variations by incorporating a correction factor for the multiple testing problem in geographically weighted regression models. The second manuscript quantifies the associations between COVID-19 cases and urban socioeconomic factors and land surface temperature (LST) at the neighbourhood scale in Toronto. Four spatiotemporal Bayesian hierarchical models with spatial, temporal, and varying space-time interaction terms are compared. The results identify seasonal trends in COVID-19 risk, with spatiotemporal trends showing increasing, decreasing, or stable patterns, and identify area-specific spatial risk for targeted interventions. Educational level and high land surface temperature are shown to have a positive association with COVID-19 risk. In this study, high spatial and temporal resolution satellite images were used to extract LST, and atmospheric correction methods were applied to these images by adopting a land surface emissivity (LSE) model, which provided high estimation accuracy. The methodological approach of this work will help researchers acquire long time-series LST data at a spatial scale from satellite images, develop methodological approaches for atmospheric correction, and create environmental data with high estimation accuracy for disease modelling. In terms of policy, the findings of this study can inform the design and implementation of urban planning strategies and programs to control disease risks. The third manuscript develops a novel approach for visualizing the spread of infectious disease outbreaks by incorporating neighbourhood networks and neighbourhood-level time-series data on the disease. The model provides an understanding of the direction and magnitude of spatial risk for the outbreak and underscores the importance of early intervention to stop its spread. The manuscript also identifies hotspots using incidence rates and disease persistence, findings that may help public health planners develop priority-based intervention plans in resource-constrained situations.
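
    A generic form of the spatiotemporal Bayesian hierarchical models compared in the second manuscript can be written as below, assuming a Poisson likelihood with expected counts E_it per neighbourhood i and period t; the exact specification used in the thesis may differ.

        \begin{aligned}
        y_{it} &\sim \mathrm{Poisson}(E_{it}\,\rho_{it}),\\
        \log \rho_{it} &= \alpha + \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + u_i + v_i + \gamma_t + \phi_t + \delta_{it},
        \end{aligned}

    where u_i and v_i are structured (e.g. conditionally autoregressive) and unstructured spatial effects, gamma_t and phi_t are structured and unstructured temporal effects, and delta_it is the space-time interaction term that lets the spatial risk pattern vary over time.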

    Spike timing reshapes robustness against attacks in spiking neural networks

    The success of deep learning in the past decade is partially shrouded in the shadow of adversarial attacks. In contrast, the brain is far more robust at complex cognitive tasks. Exploiting the fact that neurons in the brain communicate via spikes, spiking neural networks (SNNs) are emerging as a new type of neural network model, advancing the frontier of theoretical investigation and empirical application of artificial neural networks and deep learning. Neuroscience research proposes that the precise timing of neural spikes plays an important role in the information coding and sensory processing of the biological brain. However, the role of spike timing in SNNs has received less attention and is far from understood. Here we systematically explored the timing mechanism of spike coding in SNNs, focusing on the robustness of the system against various types of attacks. We found that SNNs achieve greater improvements in robustness when the coding principle of precise spike timing is used in neural encoding and decoding, facilitated by different learning rules. Our results suggest that spike-timing coding in SNNs can improve robustness against attacks, providing a new approach to reliable coding principles for developing next-generation brain-inspired deep learning.
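
    A minimal example of spike-timing (time-to-first-spike) coding, the principle the paper builds on, is a leaky integrate-and-fire neuron whose first spike occurs earlier for stronger inputs, so the stimulus is encoded in when the neuron fires rather than only in how often. The sketch below is illustrative and is not the network, encoding scheme, or learning rule studied in the paper.

        # Leaky integrate-and-fire (LIF) neuron illustrating time-to-first-spike
        # coding: stronger inputs drive earlier spikes.
        def first_spike_time(input_current: float, tau: float = 20.0,
                             threshold: float = 1.0, dt: float = 0.1,
                             t_max: float = 100.0) -> float:
            """Return the time (ms) of the first spike of an LIF neuron, or inf."""
            v, t = 0.0, 0.0
            while t < t_max:
                v += dt * (-v + input_current) / tau   # leaky integration
                if v >= threshold:
                    return t                            # spike time encodes the input
                t += dt
            return float("inf")

        # Stronger inputs spike earlier; inputs at or below threshold never spike.
        for current in (2.0, 1.5, 1.1, 0.9):
            print(f"I = {current:.1f} -> first spike at {first_spike_time(current):.1f} ms")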

    Revisiting the capitalization of public transport accessibility into residential land value: an empirical analysis drawing on Open Science

    Background: The delivery and effective operation of public transport is fundamental for a transition to low-carbon transport systems. However, many cities face budgetary challenges in providing and operating this type of infrastructure. Land value capture (LVC) instruments, aimed at recovering all or part of the land value uplifts triggered by actions other than the landowner's, can alleviate some of this pressure. A key element of LVC lies in the increment in land value associated with a particular public action. Urban economic theory supports this idea and considers accessibility a core element in determining residential land value. Although the empirical literature assessing the relationship between land value increments and public transport infrastructure is vast, it often assumes homogeneous benefits and therefore overlooks relevant elements of accessibility. Advancements in the accessibility concept in the context of Open Science make it easier to relax such assumptions. Methods: This thesis draws on the case of Greater Mexico City between 2009 and 2019. It focuses on the effects of the main public transport network (MPTN), which is organised into seven temporal stages according to its expansion phases. The analysis incorporates location-based accessibility measures to employment opportunities in order to assess the benefits of public transport infrastructure. It does so by making extensive use of the open-source software OpenTripPlanner for public transport route modelling (≈ 2.1 billion origin-destination routes). Potential capitalizations are assessed within the hedonic framework. The property value data comprise individual administrative mortgage records collected by the Federal Mortgage Society (≈ 800,000 records). The hedonic function is estimated using a variety of approaches, i.e. linear models, nonlinear models, multilevel models, and spatial multilevel models, estimated by maximum likelihood and Bayesian methods. The study also examines possible spatial aggregation bias using alternative spatial aggregation schemes, following the modifiable areal unit problem (MAUP) literature. Results: The accessibility models across the various temporal stages evidence the spatial heterogeneity shaped by the MPTN in combination with land use and the individual perception of residents. This highlights the need to move from measures that focus on the characteristics of transport infrastructure to comprehensive accessibility measures that reflect such heterogeneity. The estimated hedonic function suggests a robust, positive, and significant relationship between MPTN accessibility and residential land value in all the modelling frameworks and in the presence of a variety of controls. Residential land value increases by between 3.6% and 5.7% for one additional standard deviation in MPTN accessibility to employment in the final set of models. The total willingness to pay (TWTP) is considerable, ranging from 0.7 to 1.5 times the equivalent of the capital costs of Line 7 of the Metrobús bus rapid transit system. A sensitivity analysis shows that the hedonic model estimation is sensitive to the MAUP. In addition, a postcode zoning scheme produces the results closest to those of the smallest spatial analytical scheme (a 0.5 km hexagonal grid). Conclusion: This thesis advances the discussion on the capitalization of public transport into residential land value by adopting recent contributions from the Open Science framework. Empirically, it fills a knowledge gap, given the lack of literature on this topic in this area of study. In terms of policy, the findings support LVC as a mechanism of considerable potential. Regarding fee-based LVC instruments, there are fairness issues in the distribution of charges or exactions to households that could be addressed using location-based measures. Furthermore, the approach developed for this analysis serves as valuable guidance for identifying sites with large potential for the implementation of development-based instruments, for instance land readjustment or the sale/lease of additional development rights.
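
    A location-based accessibility measure of the kind referred to above can be sketched as a gravity-type sum of employment opportunities discounted by public transport travel time, for example travel times produced with OpenTripPlanner. The decay parameter and the toy matrices below are assumptions for illustration, not values or data from the thesis.

        # Gravity-type accessibility to employment from a travel-time matrix.
        import numpy as np

        def accessibility(travel_time_min: np.ndarray, jobs: np.ndarray,
                          beta: float = 0.05) -> np.ndarray:
            """A_i = sum_j jobs_j * exp(-beta * t_ij) for each origin zone i."""
            return (np.exp(-beta * travel_time_min) * jobs).sum(axis=1)

        # Toy example: 3 origin zones, 4 destination zones (minutes by transit).
        t_ij = np.array([[10.0, 25.0, 40.0, 60.0],
                         [30.0, 12.0, 20.0, 45.0],
                         [55.0, 35.0, 15.0, 10.0]])
        jobs_j = np.array([5000.0, 12000.0, 8000.0, 3000.0])
        print(accessibility(t_ij, jobs_j))  # higher values = better job accessibility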

    Depth Estimation and Image Restoration by Deep Learning from Defocused Images

    Monocular depth estimation and image deblurring are two fundamental tasks in computer vision, given their crucial role in understanding 3D scenes. Performing either of them from a single image is an ill-posed problem. Recent advances in Deep Convolutional Neural Networks (DNNs) have revolutionized many tasks in computer vision, including depth estimation and image deblurring. When defocused images are used, depth estimation and the recovery of the All-in-Focus (AiF) image become related problems due to defocus physics. Despite this, most existing models treat them separately. There are, however, recent models that solve the two problems simultaneously by concatenating two networks in sequence, first estimating the depth or defocus map and then reconstructing the focused image from it. We propose a DNN that solves depth estimation and image deblurring in parallel. Our Two-headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch. The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks have demonstrated performance superior or close to that of state-of-the-art models for depth estimation and image deblurring.
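
    The core architectural idea, a shared encoder feeding two task-specific decoder heads, can be sketched as follows in PyTorch. The layer sizes and depths are arbitrary placeholders and do not reproduce the actual 2HDED:NET architecture.

        # Minimal two-headed encoder-decoder: shared encoder, one head for depth,
        # one for the deblurred (all-in-focus) image. Placeholder layer sizes only.
        import torch
        import torch.nn as nn

        class TwoHeadedNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                def head(out_channels):
                    return nn.Sequential(
                        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                        nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
                    )
                self.depth_head = head(1)   # depth map
                self.deblur_head = head(3)  # all-in-focus RGB image

            def forward(self, defocused):
                features = self.encoder(defocused)      # shared representation
                return self.depth_head(features), self.deblur_head(features)

        model = TwoHeadedNet()
        depth, sharp = model(torch.randn(1, 3, 64, 64))
        print(depth.shape, sharp.shape)  # (1, 1, 64, 64) and (1, 3, 64, 64)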

    Automated identification and behaviour classification for modelling social dynamics in group-housed mice

    Mice are often used in biology as exploratory models of human conditions, due to their similar genetics and physiology. Unfortunately, research on behaviour has traditionally been limited to studying individuals in isolated environments and over short periods of time. This can miss critical time effects and, since mice are social creatures, bias results. This work addresses that gap by developing tools to analyse the individual behaviour of group-housed mice in the home-cage over several days and with minimal disruption. Using data provided by the Mary Lyon Centre at MRC Harwell, we designed an end-to-end system that (a) tracks and identifies mice in a cage, (b) infers their behaviour, and subsequently (c) models the group dynamics as functions of individual activities. In support of the above, we also curated and made available a large dataset of mouse localisation and behaviour classifications (IMADGE), as well as two smaller annotated datasets for training and evaluating the identification (TIDe) and behaviour inference (ABODe) systems. This research is the first of its kind in terms of the scale and challenges addressed. The data source (side-view, single-channel video with clutter and no identification markers on the mice) presents challenging conditions for analysis, but has the potential to give richer information while using industry-standard housing. A Tracking and Identification module was developed to automatically detect, track, and identify the (visually similar) mice in the cluttered home-cage using only single-channel IR video and coarse position from RFID readings. Existing detectors and trackers were combined with a novel Integer Linear Programming formulation to assign anonymous tracks to mouse identities, using a probabilistic weight model of the affinity between detections and RFID pickups. The next task was the implementation of an Activity Labelling module that classifies the behaviour of each mouse, handling occlusion to avoid giving unreliable classifications when the mice cannot be observed. Two key aspects of this were (a) careful feature selection, and (b) judicious balancing of the errors of the system in line with their repercussions for our setup. Given these sequences of individual behaviours, we analysed the interaction dynamics between mice in the same cage by collapsing the group behaviour into a sequence of interpretable latent regimes using both static and temporal (Markov) models. Using a permutation matrix, we were able to automatically assign mice to roles in the hidden Markov model (HMM), fit a global model to a group of cages, and analyse abnormalities in data from a different demographic.
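
    The track-to-identity assignment can be illustrated in simplified form: given an affinity matrix between anonymous tracks and RFID-derived identities, choose the assignment that maximises total affinity. The thesis formulates this as an Integer Linear Programme with probabilistic weights; the sketch below uses the Hungarian algorithm for the special one-to-one case, with hypothetical affinities.

        # Simplified assignment of anonymous tracks to mouse identities from an
        # affinity matrix (e.g. derived from detections and RFID pickups).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical affinities between 3 tracks (rows) and 3 identities (columns).
        affinity = np.array([[0.9, 0.2, 0.1],
                             [0.3, 0.1, 0.8],
                             [0.2, 0.7, 0.3]])

        # Maximise total affinity by minimising its negation.
        track_idx, identity_idx = linear_sum_assignment(-affinity)
        for t, m in zip(track_idx, identity_idx):
            print(f"track {t} -> mouse {m} (affinity {affinity[t, m]:.2f})")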