
    The determinants of multinational banking during the first globalization, 1870-1914

    What determined the multinational expansion of European banks in the pre-1914 era of globalization? And how were banks' foreign investments related to other facets of the globalizing world economy, such as trade and capital flows? The paper reviews both the contemporary and historical literature, and empirically investigates these issues using an original panel dataset based on a sample of more than 50 countries. The dependent variable, which aims to measure the intensity of cross-border activities operated by banks from foreign locations, is the number of foreign branches and subsidiaries of British, French and German banks. Explanatory variables are mainly selected on the basis of the eclectic theory of multinational banking, but also include geographical factors (as suggested by gravity models) and institutional indicators advanced by recent studies inspired by new institutional economics, such as legal families and adherence to the Gold Standard. These regressors capture the impact of economic integration (trade and capital flows), informational development, institutional and economic characteristics of the host market, as well as exchange rate and country risk factors, on banks' foreign investment decisions. The results suggest that, due to its prevailing 'colonial' features, pre-1914 multinational banking does not fit easily into augmented gravity models. The role of trade as a key determinant of banks' expansion overseas is qualified, and both institutional factors and competitive interaction emerge as critical determinants of banks' decisions to invest in foreign countries. Moreover, the systematic comparison of the determinants of foreign investments of banks from major core countries reveals that multinational banking was not a homogeneous phenomenon, as banks of different nationality responded differently to economic, geographical and institutional factors.
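The core specification the abstract describes is a gravity-style regression of banking presence on trade and geography. As a purely illustrative sketch (the synthetic data, variable names, and log-linear OLS form are assumptions; the study itself uses a count dependent variable and a much richer set of regressors), a gravity relation can be fitted like this:

```python
import random

random.seed(0)
# Hypothetical gravity relation: branches ~ exp(b0 + 0.8*log(trade) - 0.5*log(distance))
# We generate synthetic "host countries" and recover the elasticities by OLS.
n = 200
log_trade = [random.uniform(0, 5) for _ in range(n)]
log_dist = [random.uniform(0, 3) for _ in range(n)]
log_branches = [1.0 + 0.8 * t - 0.5 * d + random.gauss(0, 0.1)
                for t, d in zip(log_trade, log_dist)]

# Demean all variables so the intercept drops out, then solve the
# 2x2 normal equations by Cramer's rule.
def demean(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

T, D, Y = demean(log_trade), demean(log_dist), demean(log_branches)
Stt = sum(t * t for t in T)
Sdd = sum(d * d for d in D)
Std = sum(t * d for t, d in zip(T, D))
Sty = sum(t * y for t, y in zip(T, Y))
Sdy = sum(d * y for d, y in zip(D, Y))
det = Stt * Sdd - Std * Std
b_trade = (Sty * Sdd - Sdy * Std) / det   # estimated trade elasticity (~0.8)
b_dist = (Stt * Sdy - Std * Sty) / det    # estimated distance elasticity (~-0.5)
print(round(b_trade, 2), round(b_dist, 2))
```

The positive trade coefficient and negative distance coefficient are the signature of a gravity model; the abstract's finding is precisely that colonial-era banking deviates from this clean pattern.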

    Aspects of proactive traffic engineering in IP networks

    To deliver a reliable communication service over the Internet it is essential for the network operator to manage the traffic situation in the network. The traffic situation is controlled by the routing function, which determines what path traffic follows from source to destination. Current practices for setting routing parameters in IP networks are designed to be simple to manage. This can lead to congestion in parts of the network while other parts of the network are far from fully utilized. In this thesis we explore issues related to optimization of the routing function to balance load in the network and efficiently deliver a reliable communication service to the users. The optimization takes into account not only the traffic situation under normal operational conditions, but also traffic situations that appear under a wide variety of circumstances deviating from the nominal case. In order to balance load in the network, knowledge of the traffic situation is needed. Consequently, in this thesis we investigate methods for efficient derivation of the traffic situation. The derivation is based on estimation of traffic demands from link load measurements. The advantage of using link load measurements is that they are easily obtained and consist of a limited amount of data that needs to be processed. We evaluate and demonstrate how estimation based on link counts gives the operator a fast and accurate description of the traffic demands. For the evaluation we have access to a unique data set of complete traffic demands from an operational IP backbone. However, to honor service level agreements at all times, the variability of the traffic needs to be accounted for in the load balancing. In addition, optimization techniques are often sensitive to errors and variations in input data. Hence, when an optimized routing setting is subjected to real traffic demands in the network, performance often deviates from what can be anticipated from the optimization. 
Thus, we identify and model different traffic uncertainties and describe how the routing setting can be optimized, not only for a nominal case, but for a wide range of different traffic situations that might appear in the network. Our results can be applied in MPLS-enabled networks as well as in networks using link state routing protocols such as the widely used OSPF and IS-IS protocols. Only minor changes may be needed in current networks to implement our algorithms. The contributions of this thesis are that we demonstrate that it is possible to estimate the traffic matrix with acceptable precision, and that we develop methods and models for common traffic uncertainties to account for these uncertainties in the optimization of the routing configuration. In addition, we identify important properties in the structure of the traffic to successfully balance uncertain and varying traffic demands.
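The estimation problem the abstract describes is recovering origin-destination demands x from link loads y = Rx, where R is the routing matrix. Link counts alone underdetermine the demands, so some prior is needed. The toy below is a minimal sketch of that idea (the 3-node topology, the prior, and the minimum-norm correction are all assumptions for illustration; the thesis's actual estimator is not specified in the abstract):

```python
# Hypothetical 3-node line network A-B-C with links A-B and B-C.
# Unknown demands x = [d_AB, d_BC, d_AC]; measured link loads satisfy y = R x:
#   load(A-B) = d_AB + d_AC,   load(B-C) = d_BC + d_AC
# Two link counts cannot pin down three demands, so we take the estimate that
# is consistent with the measurements and closest to a prior guess x0:
#   x_hat = x0 + R^T (R R^T)^-1 (y - R x0)

R = [[1, 0, 1],
     [0, 1, 1]]
y = [8.0, 7.0]          # measured link loads (true demands were [3, 2, 5])
x0 = [4.0, 4.0, 4.0]    # prior guess, e.g. from a gravity-style model

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

r = [yi - pi for yi, pi in zip(y, matvec(R, x0))]   # residual y - R x0

# R R^T is 2x2; invert it with Cramer's rule
a = sum(v * v for v in R[0])
d = sum(v * v for v in R[1])
b = sum(u * v for u, v in zip(R[0], R[1]))
det = a * d - b * b
w = [(d * r[0] - b * r[1]) / det,
     (-b * r[0] + a * r[1]) / det]

# x_hat = x0 + R^T w: exactly reproduces both link loads
x_hat = [x0[j] + sum(R[i][j] * w[i] for i in range(2)) for j in range(3)]
print([round(v, 3) for v in x_hat])   # → [4.333, 3.333, 3.667]
```

The estimate matches both observed link loads while staying near the prior; at the scale of a real backbone the same projection is computed with sparse linear algebra rather than by hand.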

    Bayesian Computing with INLA: A Review

    The key operation in Bayesian inference is to compute high-dimensional integrals. An old approximate technique is the Laplace method or approximation, which dates back to Pierre-Simon Laplace (1774). This simple idea approximates the integrand with a second-order Taylor expansion around the mode and computes the integral analytically. By developing a nested version of this classical idea, combined with modern numerical techniques for sparse matrices, we obtain the approach of integrated nested Laplace approximations (INLA) for approximate Bayesian inference in latent Gaussian models (LGMs). LGMs represent an important model abstraction for Bayesian inference and include a large proportion of the statistical models used today. In this review, we discuss the reasons for the success of the INLA approach, the R-INLA package, why it is so accurate, why the approximations are very quick to compute, and why LGMs are such a useful concept for Bayesian computing.
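The classical Laplace approximation the abstract refers to can be shown in a few lines. For an integrand exp(h(x)) with mode x*, a second-order Taylor expansion of h gives ∫ exp(h(x)) dx ≈ exp(h(x*)) · sqrt(2π / −h''(x*)). A standard illustration (this worked example is ours, not from the review) is recovering Stirling's approximation to n! from the Gamma integral:

```python
import math

# Laplace's method on n! = ∫_0^∞ exp(n*ln(x) - x) dx.
# Here h(x) = n*ln(x) - x, so h'(x) = n/x - 1 (mode x* = n)
# and h''(x) = -n/x**2, giving h''(x*) = -1/n.

def laplace_factorial(n):
    x_star = n                               # mode of the integrand
    h = n * math.log(x_star) - x_star        # h evaluated at the mode
    h2 = -n / x_star**2                      # second derivative at the mode
    return math.exp(h) * math.sqrt(2 * math.pi / -h2)   # = n^n e^-n sqrt(2*pi*n)

exact = math.factorial(10)
approx = laplace_factorial(10)
print(approx / exact)    # ≈ 0.9917: within 1% already at n = 10
```

INLA's contribution is to apply this one-dimensional idea in a nested fashion to the high-dimensional posteriors of latent Gaussian models, where sparse precision matrices make the Gaussian computations fast.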

    Multi-Model Network Intrusion Detection System Using Distributed Feature Extraction and Supervised Learning

    Intrusion Detection Systems (IDSs) monitor network traffic and system activities to identify any unauthorized or malicious behaviors. These systems usually leverage the principles of data science and machine learning to detect deviations from normalcy by learning from data associated with normal and abnormal patterns. IDSs continue to suffer from issues like distributed high-dimensional data, inadequate robustness, slow detection, and high false-positive rates (FPRs). We investigate these challenges, determine suitable strategies, and propose relevant solutions based on the appropriate mathematical and computational concepts. To handle high-dimensional data in a distributed network, we optimize the feature space in a distributed manner using a PCA-based feature extraction method. The experimental results show that classifiers built upon the features so extracted perform well, achieving a level of accuracy similar to that of classifiers that use centrally extracted features. This method also significantly reduces the cumulative time needed for extraction. Utilizing the extracted features, we construct a distributed probabilistic classifier based on Naïve Bayes. Each node counts the local frequencies and passes those on to the central coordinator. The central coordinator accumulates the local frequencies and computes the global frequencies, which are used by the nodes to compute the required prior probabilities to perform classifications. Each node, being evenly trained, is capable of detecting intrusions individually, improving the overall robustness of the system. We also propose a similarity measure-based classification (SMC) technique that works by computing the cosine similarities between the class-specific frequential weights of the values in an observed instance and the average frequency-based data centroid. 
An instance is classified into the class whose weights for the values in it share the highest level of similarity with the centroid. SMC contributes alongside Naïve Bayes in a multi-model classification approach, which we introduce to reduce the FPR and improve the detection accuracy. This approach utilizes the similarities associated with each class label determined by SMC and the probabilities associated with each class label determined by Naïve Bayes. The similarities and probabilities are aggregated, separately, to form new features that are used to train and validate a tertiary classifier. We demonstrate that such a multi-model approach can attain a higher level of accuracy than single-model classification techniques. The contributions made by this dissertation to enhancing scalability, robustness, and accuracy can help improve the efficacy of IDSs.
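The frequency-aggregation step of the distributed Naïve Bayes classifier can be sketched concretely. The toy data and variable names below are hypothetical; the sketch only illustrates the protocol the abstract describes: nodes count locally, a coordinator sums the counts into global frequencies, and every node derives identical prior probabilities from them.

```python
from collections import Counter

# Local training labels observed on three hypothetical nodes
node_labels = [
    ["normal", "normal", "attack"],
    ["normal", "attack", "attack", "normal"],
    ["normal", "normal", "normal"],
]

# 1. Each node computes local class frequencies and sends them to the coordinator
local_counts = [Counter(labels) for labels in node_labels]

# 2. The coordinator accumulates them into global frequencies
global_counts = Counter()
for counts in local_counts:
    global_counts.update(counts)

# 3. Each node turns the broadcast global counts into the same prior probabilities
total = sum(global_counts.values())
priors = {label: count / total for label, count in global_counts.items()}
print(priors)   # → {'normal': 0.7, 'attack': 0.3}
```

The same exchange of counts works for the per-feature-value frequencies Naïve Bayes needs for its conditional probabilities, so only small count tables, not raw traffic records, ever leave a node.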