56 research outputs found

    Generation and properties of random graphs and analysis of randomized algorithms

    We study a new method of generating random d-regular graphs by repeatedly applying an operation called pegging. The pegging algorithm, which applies the pegging operation in each step, is a method of generating large random regular graphs beginning with small ones. We prove that the limiting joint distribution of the numbers of short cycles in the resulting graph is independent Poisson. We use the coupling method to bound the total variation distance between the joint distribution of short cycle counts and its limit, and thereby show that O(Δ^{-1}) is an upper bound on the Δ-mixing time. The coupling involves two different, though quite similar, Markov chains that are not time-homogeneous. We also show that the Δ-mixing time is not o(Δ^{-1}), which demonstrates that the upper bound is essentially tight. We also study the connectivity of random d-regular graphs generated by the pegging algorithm, and show that these graphs are asymptotically almost surely d-connected for any even constant d ≄ 4. The problem of orientation of random hypergraphs is motivated by the classical load balancing problem. Let h > w > 0 be two fixed integers, and let H be a hypergraph whose hyperedges are uniformly of size h. To w-orient a hyperedge, we assign exactly w of its vertices positive signs with respect to this hyperedge, and the rest negative. A (w,k)-orientation of H consists of a w-orientation of all hyperedges of H such that each vertex receives at most k positive signs from its incident hyperedges. When k is large enough, we determine the threshold for the existence of a (w,k)-orientation of a random hypergraph. The (w,k)-orientation of hypergraphs is strongly related to a general version of the off-line load balancing problem. The other topic we discuss is computing the probability of induced subgraphs in a random regular graph. Let 0 < s < n and let H be a graph on s vertices. For any S ⊂ [n] with |S| = s, we compute the probability that the subgraph of G_{n,d} induced by S is H. The result holds for any d = o(n^{1/3}) and is further extended to G_{n,đ}, the probability space of random graphs with a given degree sequence đ. This result provides a basic tool for studying properties, for instance the existence or the counts, of certain types of induced subgraphs.
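
    As a small illustration of the (w,k)-orientation definition above, the sketch below (helper name and toy hypergraph are illustrative, not taken from the thesis) checks whether a proposed assignment of positive signs forms a valid (w,k)-orientation of an h-uniform hypergraph.

```python
from collections import Counter

def is_wk_orientation(hyperedges, positive, w, k):
    """Check whether `positive` describes a valid (w,k)-orientation.

    hyperedges: list of vertex tuples, each of size h
    positive:   positive[i] is the set of vertices of hyperedges[i]
                that receive a positive sign (must have size exactly w)
    """
    load = Counter()
    for edge, pos in zip(hyperedges, positive):
        # a w-orientation marks exactly w vertices of the hyperedge positive
        if len(pos) != w or not pos <= set(edge):
            return False
        load.update(pos)
    # each vertex may receive at most k positive signs in total
    return all(count <= k for count in load.values())

# example: 3-uniform hypergraph, w = 1, k = 1
edges = [(1, 2, 3), (2, 3, 4)]
print(is_wk_orientation(edges, [{1}, {4}], w=1, k=1))  # True
```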

    Pegging Graphs Yields a Small Diameter

    We consider the following process for generating large random cubic graphs. Starting with a given graph, repeatedly add edges that join the midpoints of two randomly chosen edges. We show that the growing graph asymptotically almost surely has logarithmic diameter. This process is motivated by a particular type of peer-to-peer network. Our method extends to similar processes that generate regular graphs of higher degree.
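
    A minimal sketch of one step of this process for cubic graphs (function and starting-graph choices are illustrative; the paper's operation may impose extra constraints, such as requiring the two chosen edges to be non-adjacent): pick two edges uniformly at random, subdivide each with a new midpoint vertex, and join the two midpoints, which keeps every vertex at degree 3.

```python
import random

def peg_step(edges, next_vertex):
    """One pegging step on a cubic graph given as a list of edges (u, v).

    Picks two distinct edges at random, replaces each by a path through a
    new midpoint vertex, and joins the two midpoints.  Returns the new edge
    list and the next unused vertex label.
    """
    (u1, v1), (u2, v2) = random.sample(edges, 2)
    m1, m2 = next_vertex, next_vertex + 1
    edges = [e for e in edges if e not in ((u1, v1), (u2, v2))]
    edges += [(u1, m1), (m1, v1), (u2, m2), (m2, v2), (m1, m2)]
    return edges, next_vertex + 2

# start from K4 (the smallest cubic graph) and grow it
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n = 4
for _ in range(100):
    edges, n = peg_step(edges, n)
```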

    Too Interconnected To Fail: Financial Contagion and Systemic Risk in Network Model of CDS and Other Credit Enhancement Obligations of US Banks

    Credit default swaps (CDS), which constitute up to 98% of credit derivatives, have had a unique, endemic and pernicious role to play in the current financial crisis. However, there are few in-depth empirical studies of the financial network interconnections among banks, and between banks and nonbanks, involved as CDS protection buyers and protection sellers. The ongoing problems related to technical insolvency of US commercial banks are not just confined to the so-called legacy/toxic RMBS assets on balance sheets but also stem from their credit risk exposures to SPVs (Special Purpose Vehicles) and the CDS markets. The dominance of a few big players in the chains of insurance and reinsurance for CDS credit risk mitigation for banks’ assets has led to the idea of “too interconnected to fail”, resulting, as in the case of AIG, in having to maintain the fiction of non-failure in order to avert a credit event that could bring down the CDS pyramid and the financial system. This paper also includes a brief discussion of the complex-systems Agent-based Computational Economics (ACE) approach to financial network modeling for systemic risk assessment. Quantitative analysis is confined to the empirical reconstruction of the US CDS network based on the FDIC Q4 2008 data in order to conduct a series of stress tests that investigate the consequences of the fact that the top 5 US banks account for 92% of the US bank activity in the $34 tn global gross notional value of CDS for Q4 2008 (see BIS and DTCC). The May-Wigner stability condition for networks is considered in light of the hub-like dominance of a few financial entities in the US CDS structures to understand the lack of robustness. We provide a Systemic Risk Ratio and an implementation of concentration risk in CDS settlement for major US banks in terms of the loss of aggregate core capital. We also compare our stress test results with those provided by SCAP (Supervisory Capital Assessment Program). Finally, in the context of the Basel II credit risk transfer and synthetic securitization framework, there is little evidence that the CDS market, predicated on a system of offsets to minimize final settlement, can provide the credit risk mitigation sought by banks for reference assets in the case of a significant credit event. The large negative externalities that arise from a lack of robustness of the CDS financial network following the demise of a big CDS seller undermine the justification in Basel II that banks be permitted to reduce capital on assets that have CDS guarantees. We recommend that the Basel II provision for capital reduction on bank assets that have CDS cover be discontinued.
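
    For reference, the May-Wigner stability condition invoked above is the classical criterion (May, 1972) that a large random network with N nodes, connectance C and interaction-strength standard deviation σ is almost surely stable only if σ·√(NC) < 1. The snippet below evaluates that criterion with placeholder values; it is not the paper's FDIC-calibrated computation.

```python
import math

def may_wigner_stable(n_nodes, connectance, sigma):
    """Classical May-Wigner criterion: a large random network is almost
    surely stable iff sigma * sqrt(n_nodes * connectance) < 1."""
    return sigma * math.sqrt(n_nodes * connectance) < 1.0

# illustrative numbers only (not the paper's empirical estimates)
print(may_wigner_stable(n_nodes=26, connectance=0.3, sigma=0.5))
```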

    Reconstruction of gasoline engine in-cylinder pressures using recurrent neural networks

    Knowledge of the pressure inside the combustion chamber of a gasoline engine would provide very useful information regarding the quality and consistency of combustion and allow significant improvements in its control, leading to improved efficiency and refinement. While measurement using in-cylinder pressure transducers is common in laboratory tests, their use in production engines is very limited due to cost and durability constraints. This thesis seeks to exploit the time series prediction capabilities of recurrent neural networks in order to build an inverse model accepting crankshaft kinematics or cylinder block vibrations as inputs for the reconstruction of in-cylinder pressures. Success in this endeavour would provide information to drive a real-time combustion control strategy using only sensors already commonly installed on production engines. A reference data set was acquired from a prototype Ford in-line 3-cylinder, direct-injected, spark-ignited gasoline engine of 1.125 litre swept volume. The data acquired concentrated on low speed (1000-2000 rev/min), low load (10-30 Nm brake torque) test conditions. The experimental work undertaken is described in detail, along with the signal processing requirements to treat the data prior to presentation to a neural network. The primary problem then addressed is the reliable, efficient training of a recurrent neural network to result in an inverse model capable of predicting cylinder pressures from data not seen during the training phase; this unseen data includes examples from speed and load ranges other than those in the training case. The specific recurrent network architecture investigated is the non-linear autoregressive with exogenous inputs (NARX) structure. Teacher-forced training is investigated using the reference engine data set before a state-of-the-art recurrent training method (Robust Adaptive Gradient Descent, RAGD) is implemented and the influence of the various parameters surrounding input vectors, network structure and training algorithm is investigated. Optimum parameters for data, structure and training algorithm are identified.
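
    As a rough sketch of the NARX structure described above (lag depths, layer size and weights are placeholders, not the thesis' configuration), the model forms a regressor from lagged outputs and lagged exogenous inputs, for example crank-kinematics samples, and maps it through a small feed-forward network to the next in-cylinder pressure sample; in closed-loop (parallel) mode the predictions are fed back as the output lags.

```python
import numpy as np

def narx_predict(y_hist, u_hist, W1, b1, W2, b2):
    """One NARX prediction step: next output from lagged outputs y_hist
    and lagged exogenous inputs u_hist via a one-hidden-layer network."""
    x = np.concatenate([y_hist, u_hist])          # regressor vector
    h = np.tanh(W1 @ x + b1)                      # hidden layer
    return W2 @ h + b2                            # predicted pressure sample

# toy dimensions: 4 output lags, 6 exogenous-input lags, 10 hidden units
ny, nu, nh = 4, 6, 10
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(nh, ny + nu)), np.zeros(nh)
W2, b2 = rng.normal(size=nh), 0.0

y = list(rng.normal(size=ny))                     # seed with past outputs
u = rng.normal(size=(50, nu))                     # exogenous input history
for t in range(len(u)):
    y_next = narx_predict(np.array(y[-ny:]), u[t], W1, b1, W2, b2)
    y.append(float(y_next))                       # closed-loop (parallel) mode
```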

    A Novel Approach To Intelligent Navigation Of A Mobile Robot In A Dynamic And Cluttered Indoor Environment

    The need and rationale for improved solutions to indoor robot navigation is increasingly driven by the influx of domestic and industrial mobile robots into the market. This research has developed and implemented a novel navigation technique for a mobile robot operating in a cluttered and dynamic indoor environment. It divides the indoor navigation problem into three distinct but interrelated parts, namely, localization, mapping and path planning. The localization part has been addressed using dead-reckoning (odometry). A least-squares numerical approach has been used to calibrate the odometer parameters to minimize the effect of systematic errors on the performance, and an intermittent resetting technique, which employs RFID tags placed at known locations in the indoor environment in conjunction with door-markers, has been developed and implemented to mitigate the errors remaining after the calibration. A mapping technique that employs a laser measurement sensor as the main exteroceptive sensor has been developed and implemented for building a binary occupancy grid map of the environment. A-r-Star pathfinder, a new path planning algorithm that is capable of high performance both in cluttered and sparse environments, has been developed and implemented. Its properties, challenges, and solutions to those challenges have also been highlighted in this research. An incremental version of the A-r-Star has been developed to handle dynamic environments. Simulation experiments highlighting properties and performance of the individual components have been developed and executed using MATLAB. A prototype world has been built using the Webots™ robotic prototyping and 3-D simulation software. An integrated version of the system comprising the localization, mapping and path planning techniques has been executed in this prototype workspace to produce validation results.
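
    For context on the dead-reckoning step, a standard differential-drive odometry update (a generic textbook form, not the calibrated implementation of this thesis; the wheel-base value below is a placeholder) accumulates the robot pose from wheel-encoder distance increments.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot from the
    distances travelled by the left and right wheels since the last update."""
    d_center = (d_left + d_right) / 2.0           # forward motion of the robot
    d_theta = (d_right - d_left) / wheel_base     # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# example: two equal wheel increments drive the robot straight ahead
pose = (0.0, 0.0, 0.0)
pose = odometry_update(*pose, d_left=0.10, d_right=0.10, wheel_base=0.35)
```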

    Polymers in Fractal Disorder

    This work presents a numerical investigation of self-avoiding walks (SAWs) on percolation clusters, a canonical model for polymers in disordered media. A new algorithm has been developed allowing exact enumeration of over ten thousand steps. This is an increase of several orders of magnitude compared to previously existing enumeration methods, which allow for barely more than forty steps. Such an increase is achieved by exploiting the fractal structure of critical percolation clusters: they are hierarchically organized into a tree of loosely connected nested regions in which the walk segments are enumerated separately. After the enumeration process, a region is "decimated" and subsequently behaves effectively as a single point. Since this method only works efficiently near the percolation threshold, a chain-growth Monte Carlo algorithm (PERM) has also been used. The main focus of the investigations was the asymptotic scaling behavior of the average end-to-end distance as a function of the number of steps on critical clusters in different dimensions. Thanks to the highly efficient new method, existing estimates of the scaling exponents could be improved substantially. Also investigated were the number of possible chain conformations and the average entropy, which were found to follow an unusual scaling behavior. For concentrations above the percolation threshold, the exponent describing the growth of the end-to-end distance turned out to differ from that on regular lattices, defying the prediction of the accepted theory. Finally, SAWs with short-range attractions on percolation clusters are discussed. Here, it emerged that there seems to be no temperature-driven collapse transition, as the asymptotic scaling behavior of the end-to-end distance even at zero temperature is the same as for athermal SAWs.
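
    To make the enumeration task concrete, the following sketch counts self-avoiding walks of a fixed length from the origin on a site-diluted square lattice by plain depth-first recursion. This is the naive baseline that becomes infeasible beyond a few tens of steps, not the hierarchical decimation algorithm developed in the thesis.

```python
import random

def count_saws(occupied, start, n_steps):
    """Count self-avoiding walks of n_steps steps on the occupied sites."""
    def extend(site, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        x, y = site
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in occupied and nxt not in visited:
                total += extend(nxt, visited | {nxt}, remaining - 1)
        return total
    return extend(start, {start}, n_steps)

# site percolation on a small square lattice at occupation probability p
p, L = 0.6, 15
occupied = {(x, y) for x in range(-L, L + 1) for y in range(-L, L + 1)
            if random.random() < p}
occupied.add((0, 0))                      # keep the starting site occupied
print(count_saws(occupied, (0, 0), n_steps=8))
```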

    A decision-making model to guide securing blockchain deployments

    Satoshi Nakamoto, the pseudo-identity credited with the paper that sparked the implementation of Bitcoin, is famously quoted as remarking, electronically of course, that “If you don’t believe it or don’t get it, I don’t have time to try and convince you, sorry” (Tsapis, 2019, p. 1). What is noticeable, 12 years after the famed Satoshi paper that initiated Bitcoin (Nakamoto, 2008), is that blockchain at the very least has staying power and potentially wide application. A lesser-known figure, Marc Kenisberg, founder of Bitcoin Chaser, one of the many companies formed around the Bitcoin ecosystem, summarised it well, saying “Blockchain is the tech - Bitcoin is merely the first mainstream manifestation of its potential” (Tsapis, 2019, p. 1). With blockchain still trying to reach its potential and still maturing on its way towards becoming a mainstream technology, the main question that arises for security professionals is: how do I ensure we do it securely? This research seeks to address that question by proposing a decision-making model that can be used by a security professional to guide them through ensuring appropriate security for blockchain deployments. This research is certainly not the first attempt at discussing the security of the blockchain and will not be the last, as the technology around blockchain and distributed ledger technology is still rapidly evolving. What this research does try to achieve is not to delve into extremely specific areas of blockchain security, or get bogged down in technical details, but to provide a reference framework that aims to cover all the major areas to be considered. The approach followed was to review the literature regarding blockchain and to identify the main security areas to be addressed. It then proposes a decision-making model and tests the model against a fictitious but relevant real-world example. It concludes with learnings from this research. The reader can be the judge, but the model aims to be a practical, valuable resource that any security professional can use to navigate the security aspects logically and understandably when involved in a blockchain deployment. In contrast to the Satoshi quote, this research tries to convince the reader and assist him/her in understanding the security choices related to every blockchain deployment.
    Thesis (MSc) -- Faculty of Science, Computer Science, 202
    • 

    corecore