
    An efficient method for solving fractional integral and differential equations of Bratu type

    In this paper, the fractional integral and differential equations of Bratu type, which arise in many important physical phenomena, are investigated by an effective technique, the Chebyshev finite difference method, with the fractional derivative taken in the Caputo sense. The effect of the fractional derivative on the outcomes is in good agreement with the nonlocality of the problem. The truncation and round-off errors and a convergence analysis of the present method are also given. Numerical solutions of illustrative examples of fractional integral and differential equations of Bratu type are presented to highlight the validity and performance of the method. The comparisons are very satisfactory and show that the proposed technique is more effective and accurate than the other methods.
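    Since the abstract does not write out the equations, a representative fractional Bratu boundary-value problem with the Caputo derivative is sketched below for orientation (the exact formulation studied in the paper may differ):

\[
{}^{C}\!D^{\alpha} u(x) + \lambda\, e^{u(x)} = 0, \qquad 0 < x < 1, \quad 1 < \alpha \le 2, \qquad u(0) = u(1) = 0,
\]

\[
{}^{C}\!D^{\alpha} u(x) = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{x} (x - s)^{\,n-\alpha-1}\, u^{(n)}(s)\, ds, \qquad n - 1 < \alpha < n .
\]

    For \(\alpha = 2\) the first equation reduces to the classical Bratu problem \(u'' + \lambda e^{u} = 0\).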

    Designing capital-ratio triggers for Contingent Convertibles

    Contingent Convertible (CoCo) bonds represent a novel category of debt instrument, recently introduced into the financial landscape. Their primary role is to bolster financial stability by maintaining healthy capital levels at the issuing entity. This is achieved by converting the bond principal into equity, or writing it down, once minimum capital ratios are violated. CoCos aim to recapitalize the bank before it is on the brink of collapse, so as to avoid a state bailout at huge cost to the taxpayer. Under normal circumstances, CoCo bonds operate as ordinary coupon-paying bonds, and only in the case of insufficient capital ratios are they converted into equity of the issuer. However, the CoCo market has struggled to expand over the years, and the recent tumult involving Credit Suisse and its enforced CoCo write-off has underscored these challenges. The focus of this research is, on the one hand, to understand the reasons for this failure and, on the other hand, to modify the underlying design in order to restore its intended purpose: to act as a liquidity buffer that strengthens the capital structure of the issuing firm. The cornerstone of the proposed work is the design of a self-adaptive model for leverage. This model features an automatic conversion that does not hinge on the judgment of regulatory authorities. Notably, it allows the issuer's debt-to-assets ratio to remain within predetermined boundaries, where the likelihood of default on outstanding liabilities remains minimal. The pricing of the proposed instruments is difficult because the conversion is dynamic. We view CoCos essentially as a portfolio of different financial instruments; this treatment makes it easier to analyze their response to different market events that may or may not trigger their conversion to equity. We provide evidence of the model's effectiveness and discuss the implications of its implementation in light of the regulatory environment and best market practices.
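    As a rough illustration of the basic trigger mechanism described above (a mechanical fixed trigger, not the self-adaptive leverage model proposed in the thesis), a minimal sketch might look as follows; the class, the 6% trigger level, and the balance-sheet figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Issuer:
    assets: float          # total assets
    equity: float          # common equity
    coco_principal: float  # outstanding CoCo principal, booked as debt

    @property
    def capital_ratio(self) -> float:
        # simplified: equity over total assets; real triggers use CET1 / risk-weighted assets
        return self.equity / self.assets

def apply_trigger(bank: Issuer, trigger: float = 0.06) -> Issuer:
    """If the capital ratio breaches the trigger, convert the CoCo principal into equity."""
    if bank.capital_ratio >= trigger:
        return bank  # normal times: the CoCo behaves like an ordinary coupon-paying bond
    # conversion: the debt claim disappears and the same amount is added to equity,
    # mechanically restoring the capital ratio without outside (taxpayer) support
    return Issuer(bank.assets, bank.equity + bank.coco_principal, 0.0)

if __name__ == "__main__":
    stressed = Issuer(assets=100.0, equity=4.0, coco_principal=3.0)
    after = apply_trigger(stressed)
    print(f"capital ratio: {stressed.capital_ratio:.2%} -> {after.capital_ratio:.2%}")
```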

    Model Reduction for the Kuramoto-Sakaguchi Model: analyzing the effect of non-entrained rogue oscillators

    The Kuramoto-Sakaguchi model is a paradigmatic model of coupled oscillator systems that display collective behaviour. This thesis is concerned with better understanding the model through the construction of lower-dimensional reduced models that are more tractable for analysis. The role of non-entrained rogue oscillators in the dynamics of the synchronized oscillators is highlighted. After reviewing the traditional analysis via mean-field theory in the thermodynamic limit of infinitely many oscillators, we proceed to construct reduced models for finite-size systems and investigate how the effects of rogue oscillators should be incorporated. We first describe the rogue oscillators' effect via averaging, leading to a closed deterministic system that involves the synchronized oscillators only. We perform model reduction analysis on this system via the collective coordinate framework. It is demonstrated that inclusion of the effect of rogue oscillators is crucial for obtaining an accurate description of the system. A new non-linear ansatz is introduced which significantly improves the accuracy of the reduced system, both for finite-size systems and in the thermodynamic limit. We then analyze the fluctuations of the rogue oscillators' effect around its mean by constructing stochastic process approximations. It is demonstrated that utilizing an Ornstein-Uhlenbeck process leads to a stochastic reduced model that can capture the fluctuations exhibited in the full model. This thesis also adds to the mean-field analysis of Kuramoto-like models by performing a mean-field analysis of the Kuramoto-Sakaguchi model with a uniform intrinsic frequency distribution, which reveals that for a non-zero phase-offset parameter the system exhibits an intricate transition to synchronization, with a first-order transition to partial synchronization followed by a second-order transition to global synchronization.
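    The governing equations are not reproduced in the abstract; a minimal numerical sketch of the standard Kuramoto-Sakaguchi dynamics and the usual synchronization order parameter is given below. All parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def simulate_kuramoto_sakaguchi(N=200, K=2.0, alpha=0.3, dt=0.01, steps=5000, seed=0):
    """Euler integration of dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i - alpha)."""
    rng = np.random.default_rng(seed)
    omega = rng.uniform(-1.0, 1.0, N)        # intrinsic frequencies (uniform distribution)
    theta = rng.uniform(0.0, 2 * np.pi, N)   # random initial phases
    r_history = []
    for _ in range(steps):
        # pairwise coupling; alpha is the Sakaguchi phase-offset parameter
        coupling = np.sin(theta[None, :] - theta[:, None] - alpha).sum(axis=1)
        theta += dt * (omega + (K / N) * coupling)
        # Kuramoto order parameter r in [0, 1]; r close to 1 indicates synchronization
        r_history.append(np.abs(np.mean(np.exp(1j * theta))))
    return np.array(r_history)

if __name__ == "__main__":
    r = simulate_kuramoto_sakaguchi()
    print(f"long-time order parameter r ~ {r[-1000:].mean():.3f}")
```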

    Exact steady states of minimal models of nonequilibrium statistical mechanics

    Systems out of equilibrium with their environment are ubiquitous in nature. Of particular relevance to biological applications are models in which each microscopic component spontaneously generates its own motion. Known collectively as active matter, such models are natural effective descriptions of many biological systems, from subcellular motors to flocks of birds. One would like to understand such phenomena using the tools of statistical mechanics, yet the inherent nonequilibrium setting means that the most powerful classical results of that field cannot be applied. This circumstance has fuelled interest in exactly solvable models of active matter. The aim in studying such models is twofold. Firstly, exactly solvable models are often minimal, which makes them good candidates for generic coarse-grained descriptions of real-world processes. Secondly, even if the model in question does not correspond directly to some situation realizable in experiment, its exact solution may suggest general principles that could also apply to more complex phenomena. A typical way to investigate the properties of a large system is to study the behaviour of a probe particle placed in such an environment. In this context, cases of interest include both an active particle in a passive environment and an active particle in an active environment. One model that has attracted much attention in this regard is the asymmetric simple exclusion process (ASEP), a prototypical minimal model of driven diffusive transport. In this thesis, I consider two variations of the ASEP on a ring geometry. The first is a system of symmetrically diffusing particles with one totally asymmetric (driven) defect particle. The second is a system of partially asymmetric particles, with one defect that may overtake the other particles. I analyze the steady states of these systems using two exact methods: the matrix product ansatz and, for the second model, the Bethe ansatz. This allows me to derive the exact density profiles and mean currents for these models and, for the second model, the diffusion constant. Moreover, I use the Yang-Baxter formalism to study the general class of two-species partially asymmetric processes with overtaking, which allows me to determine conditions under which such models can be solved using the Bethe ansatz.
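    As background, a minimal Monte Carlo sketch of the plain single-species ASEP on a ring is given below; it does not include the defect particles studied in the thesis, and all parameters (system size, density, hop rates) are illustrative assumptions.

```python
import numpy as np

def simulate_asep_ring(L=100, n_particles=50, p=1.0, q=0.3, sweeps=5000, seed=0):
    """Random-sequential Monte Carlo for the ASEP on a ring of L sites.

    A randomly chosen particle attempts to hop one site to the right with
    probability p/(p+q) and to the left with probability q/(p+q); hops into
    occupied sites are rejected (exclusion).  Returns the net rightward
    current per site per sweep, up to the attempt-rate normalization.
    """
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=bool)
    occ[rng.choice(L, n_particles, replace=False)] = True
    net_hops = 0
    for _ in range(sweeps * L):
        i = rng.integers(L)
        if not occ[i]:
            continue
        if rng.random() < p / (p + q):
            j, sign = (i + 1) % L, +1   # attempt a rightward hop
        else:
            j, sign = (i - 1) % L, -1   # attempt a leftward hop
        if not occ[j]:
            occ[i], occ[j] = False, True
            net_hops += sign
    return net_hops / (sweeps * L)

if __name__ == "__main__":
    print("net current per site per sweep:", simulate_asep_ring())
```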

    On the well posedness of a mathematical model for a singular nonlinear fractional pseudo-hyperbolic system with nonlocal boundary conditions and frictional damping terms

    This paper is devoted to the study of the well-posedness of a singular nonlinear fractional pseudo-hyperbolic system with frictional damping terms. The fractional derivative is taken in the Caputo sense. The equations are supplemented by classical and nonlocal boundary conditions. Using some a priori estimates and density arguments, we establish the existence and uniqueness of the strongly generalized solution of the associated linear fractional system in certain fractional Sobolev spaces. On the basis of the results obtained for the linear fractional system, we apply an iterative process to establish the well-posedness of the nonlinear fractional system. This mathematical model of pseudo-hyperbolic systems arises mainly in the theory of longitudinal and lateral vibrations of elastic bars (beams) and, in a special case, in unsteady helical flows between two infinite coaxial circular cylinders under specific boundary conditions.

    New techniques for integrable spin chains and their application to gauge theories

    In this thesis we study integrable systems known as spin chains and their applications to the study of the AdS/CFT duality, in particular to N = 4 supersymmetric Yang-Mills theory (SYM) in four dimensions. First, we introduce the necessary tools for the study of integrable periodic spin chains, which are based on algebraic and functional relations. From these tools, we derive in detail a technique that can be used to compute all the observables in these spin chains, known as Functional Separation of Variables. Then, we generalise our methods and results to a class of integrable spin chains with more general boundary conditions, known as open integrable spin chains. In the second part, we study a cusped Maldacena-Wilson line in N = 4 SYM with insertions of scalar fields at the cusp, in a simplifying limit called the ladders limit. We derive a rigorous duality between this observable and an open integrable spin chain, the open Fishchain. We solve the Baxter TQ relation for the spin chain to obtain the exact spectrum of scaling dimensions of this observable involving the cusped Maldacena-Wilson line. The open Fishchain and the application of Functional Separation of Variables to it form a very promising road towards the study of three-point functions of non-local operators in N = 4 SYM via integrability.
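    The Baxter TQ relation is not written out in the abstract; for reference, in the simplest case of the closed XXX spin-1/2 chain of length L (the open Fishchain version solved in the thesis is more involved) it takes the schematic form

\[
T(u)\, Q(u) = \left(u + \tfrac{i}{2}\right)^{L} Q(u - i) + \left(u - \tfrac{i}{2}\right)^{L} Q(u + i),
\qquad
Q(u) = \prod_{k=1}^{M} (u - u_k),
\]

    where T(u) is the transfer-matrix eigenvalue and the zeros u_k of the Q-function are the Bethe roots.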

    A failure in decryption process for bivariate polynomial reconstruction problem cryptosystem

    In 1999, the Polynomial Reconstruction Problem (PRP) was put forward as a new hard mathematical problem. A univariate PRP scheme by Augot and Finiasz was introduced at Eurocrypt in 2003, and this cryptosystem was fully cryptanalyzed in 2004. In 2013, a bivariate PRP cryptosystem was developed as a modified version of Augot and Finiasz's original work. This study describes a decryption failure that can occur in both cryptosystems. We demonstrate that decryption failure can occur whenever the error weight exceeds the number of monomials p in the secret polynomial. This study also determines the upper bound that should be imposed on the error weight to avoid decryption failure.

    Complete and easy type inference for first-class polymorphism

    The Hindley-Milner (HM) typing discipline is remarkable in that it allows statically typing programs without requiring the programmer to annotate them with types. This is due to the HM system offering complete type inference, meaning that if a program is well typed, the inference algorithm is able to determine all the necessary typing information. Let bindings implicitly perform generalisation, allowing a let-bound variable to receive the most general possible type, which in turn may be instantiated appropriately at each of the variable's use sites. As a result, the HM type system has become the foundation for type inference in programming languages such as Haskell as well as the ML family of languages, and it has been extended in a multitude of ways. The original HM system only supports prenex polymorphism, where type variables are universally quantified only at the outermost level. This precludes many useful programs, such as passing a data structure to a function in the form of a fold function, which would need to be polymorphic in the type of the accumulator; this, however, requires a nested quantifier in the type of the overall function. As a result, one direction of extending the HM system is to add support for first-class polymorphism, allowing arbitrarily nested quantifiers and instantiating type variables with polymorphic types. In such systems, restrictions are necessary to retain decidability of type inference. This work presents FreezeML, a novel approach for integrating first-class polymorphism into the HM system, focused on simplicity. It eschews sophisticated yet hard-to-grasp heuristics in the type system and does not extend the language of types, while still requiring only modest amounts of annotation. In particular, FreezeML leverages the mechanisms for generalisation and instantiation that are already at the heart of ML. Generalisation and instantiation are performed by let bindings and variables, respectively, but extended to types beyond prenex polymorphism. The defining feature of FreezeML is the ability to freeze variables, which prevents the usual instantiation of their types, allowing them instead to keep their original, fully polymorphic types. We demonstrate that FreezeML is as expressive as System F by providing a translation from the latter to the former; the reverse direction is also shown. Further, we prove that FreezeML is indeed a conservative extension of ML: when considering only ML programs, FreezeML accepts exactly the same programs as ML itself. We show that type inference for FreezeML can easily be integrated into HM-like type systems by presenting a sound and complete inference algorithm for FreezeML that extends Algorithm W, the original inference algorithm for the HM system. Since the inception of Algorithm W in the 1970s, type inference for the HM system and its descendants has been modernised by approaches that involve constraint solving, which have proved to be more modular and extensible. In such systems, a term is translated to a logical constraint whose solutions correspond to the types of the original term; a solver for such constraints may then be defined independently. To this end, we demonstrate such a constraint-based inference approach for FreezeML. We also discuss the effects of integrating the value restriction into FreezeML and provide detailed comparisons with other approaches to first-class polymorphism in ML, alongside a collection of examples found in the literature.
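    To make the generalisation mechanism mentioned above concrete, the standard HM typing rule for let (written here in a common textbook form rather than in FreezeML's own notation) is

\[
\frac{\Gamma \vdash e_1 : \tau_1 \qquad \Gamma,\; x : \mathrm{gen}(\Gamma, \tau_1) \vdash e_2 : \tau_2}
     {\Gamma \vdash \mathtt{let}\ x = e_1\ \mathtt{in}\ e_2 : \tau_2}
\]

    where gen(Γ, τ) universally quantifies the type variables of τ that do not occur free in Γ; each use of x may then instantiate those quantified variables independently.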

    Time- vs. frequency-domain inverse elastic scattering: Theory and experiment

    This study formally adapts the time-domain linear sampling method (TLSM) for ultrasonic imaging of stationary and evolving fractures in safety-critical components. The TLSM indicator is then applied to the laboratory test data of [22, 18] and the obtained reconstructions are compared to their frequency-domain counterparts. The results highlight the unique capability of the time-domain imaging functional for high-fidelity tracking of evolving damage, and its relative robustness to sparse and reduced-aperture data at moderate noise levels. A comparative analysis of the TLSM images against the multifrequency LSM maps of [22] further reveals that, thanks to full-waveform inversion in time and space, the TLSM generates images of remarkably higher quality with the same dataset.

    Counterexample Guided Abstraction Refinement with Non-Refined Abstractions for Multi-Agent Path Finding

    Counterexample guided abstraction refinement (CEGAR) is a powerful symbolic technique for tasks such as model checking and reachability analysis. Recently, CEGAR combined with Boolean satisfiability (SAT) has been applied to multi-agent path finding (MAPF), a problem in which the task is to navigate agents from their start positions to given individual goal positions so that the agents do not collide with each other. The recent CEGAR approach used an initial abstraction of the MAPF problem in which collisions between agents were omitted; these collisions were then eliminated in subsequent abstraction refinements. In this work we propose a novel CEGAR-style SAT-based solver for MAPF in which some abstractions are deliberately left non-refined. This makes it necessary to post-process the answers obtained from the underlying SAT solver, as these answers differ slightly from correct MAPF solutions. Non-refining, however, yields order-of-magnitude smaller SAT encodings than those of the previous approach and speeds up the overall solving process, making the SAT-based solver for MAPF competitive again on relevant benchmarks.
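    To make the abstraction-refinement loop concrete, a minimal sketch of the standard fully-refining CEGAR scheme (the baseline the paper starts from, not the non-refining variant it proposes) is given below. It replaces the SAT encoding with naive per-agent re-planning, checks only vertex collisions, and runs on a hypothetical four-vertex instance; all function names and parameters are illustrative assumptions, not the authors' implementation.

```python
from collections import deque
from itertools import combinations

def shortest_path(graph, start, goal, forbidden, horizon):
    """BFS over (vertex, time) states, avoiding the forbidden (vertex, time) pairs."""
    queue, seen = deque([(start, 0, [start])]), {(start, 0)}
    while queue:
        v, t, path = queue.popleft()
        if v == goal:
            return path + [goal] * (horizon - t)     # wait at the goal until the horizon
        if t == horizon:
            continue
        for nxt in graph[v] + [v]:                   # move along an edge or wait in place
            state = (nxt, t + 1)
            if state in seen or state in forbidden:
                continue
            seen.add(state)
            queue.append((nxt, t + 1, path + [nxt]))
    return None

def first_collision(paths):
    """Return (agent, vertex, time) for the first vertex collision, or None if collision-free."""
    for t in range(len(next(iter(paths.values())))):
        for a, b in combinations(paths, 2):
            if paths[a][t] == paths[b][t]:
                return b, paths[b][t], t             # blame the second agent, arbitrarily
    return None

def cegar_mapf(graph, starts, goals, horizon=8, max_iters=50):
    forbidden = {a: set() for a in starts}           # per-agent refinement constraints
    for _ in range(max_iters):
        # abstraction: plan every agent independently, ignoring the other agents
        paths = {a: shortest_path(graph, starts[a], goals[a], forbidden[a], horizon)
                 for a in starts}
        if any(p is None for p in paths.values()):
            return None
        cex = first_collision(paths)                 # counterexample check
        if cex is None:
            return paths                             # collision-free MAPF solution
        agent, vertex, time = cex
        forbidden[agent].add((vertex, time))         # refinement: forbid that (vertex, time)
    return None

if __name__ == "__main__":
    # toy instance: two agents on a 4-cycle that must swap sides without colliding
    graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    starts, goals = {"a1": 0, "a2": 2}, {"a1": 2, "a2": 0}
    print(cegar_mapf(graph, starts, goals))
```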