
    XML Schema-based Minification for Communication of Security Information and Event Management (SIEM) Systems in Cloud Environments

    XML-based communication governs most of today's systems communication, owing to its ability to represent complex structural and hierarchical data. However, XML documents are verbose, and reducing their size can minimize bandwidth usage and transmission time while maximizing performance, contributing to more efficient resource usage. In cloud environments, this directly affects the amount of money the consumer pays. Several techniques exist to achieve this goal. This paper discusses these techniques and proposes a new XML Schema-based minification technique. The proposed technique reduces the XML structure through minification while maintaining a separation between the meaningful names and the underlying minified names, which preserves software/code readability. The technique is applied to Intrusion Detection Message Exchange Format (IDMEF) messages exchanged by a Security Information and Event Management (SIEM) system hosted on the Microsoft Azure cloud. Test results show raw-message size reductions ranging from 8.15% to 50.34% without resorting to time-consuming compression techniques; adding GZip compression to the proposed technique yields messages 66.1% smaller than the original XML messages.
    Comment: XML, JSON, Minification, XML Schema, Cloud, Log, Communication, Compression, XMill, GZip, Code Generation, Code Readability, 9 pages, 12 figures, 5 tables, Journal Article
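    To make the core idea concrete, here is a minimal Python sketch of schema-driven tag minification followed by GZip compression. The TAG_MAP table and the sample message are illustrative assumptions; in the paper's approach the mapping would be generated from the XML Schema (e.g., the IDMEF schema), keeping the meaningful names in the code and the minified names on the wire.

        import gzip
        import xml.etree.ElementTree as ET

        # Hypothetical mapping from meaningful tag names to minified ones; the
        # paper generates such a table from the XML Schema itself.
        TAG_MAP = {"Alert": "a", "Analyzer": "b", "CreateTime": "c", "Source": "d"}

        def minify(xml_text):
            # Rewrite every element tag using the schema-derived mapping.
            root = ET.fromstring(xml_text)
            for elem in root.iter():
                elem.tag = TAG_MAP.get(elem.tag, elem.tag)
            return ET.tostring(root, encoding="unicode")

        message = ("<Alert><Analyzer>ids-01</Analyzer>"
                   "<CreateTime>2024-01-01T00:00:00Z</CreateTime>"
                   "<Source>10.0.0.5</Source></Alert>")
        minified = minify(message)

        # Compare raw and GZip-compressed sizes of both variants.
        for label, text in (("original", message), ("minified", minified)):
            raw = text.encode()
            print(label, len(raw), "bytes raw,", len(gzip.compress(raw)), "bytes gzipped")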

    Comparison of Alternative Meat Inspection Regimes for Pigs From Non-Controlled Housing – Considering the Cost of Error

    Denmark has not had a case of bovine tuberculosis (bovTB) for more than 30 years but is obliged by trade agreements to undertake traditional meat inspection (TMI) of finisher pigs from non-controlled housing to detect bovTB. TMI is associated with a higher probability of detecting bovTB but is also more costly than visual-only inspection (VOI). To determine whether VOI should replace TMI of finisher pigs from non-controlled housing, the cost of error, defined here as the probability of overlooking infection and the associated economic costs, should be assessed and compared with surveillance costs. First, a scenario tree model was set up to assess the probability of detecting bovTB in an infected herd (herd sensitivity, HSe), calculated for three within-herd prevalences (WHP: 1, 5, and 10%) under four surveillance scenarios (TMI and VOI, each with or without a serological test). HSe was calculated for six consecutive 4-week surveillance periods until predicted bovTB detection (considered high-risk periods, HRPs); 1-HSe is the probability of missing all positives by the end of each HRP. Next, the probability of spread of infection (Pspread) and the number of infected animals moved were calculated for each HRP. The costs caused by overlooking bovTB were calculated taking into account Pspread, 1-HSe, eradication costs, and trade impact. Finally, average annual costs were calculated by adding surveillance costs and assuming one incursion of bovTB every 1, 10, or 30 years. Input parameters were based on slaughterhouse statistics, literature, and expert opinion. Herd sensitivity increased with each high-risk period and with within-herd prevalence. Assuming WHP = 5%, HSe reached a median of 90% by the 2nd HRP for TMI, whereas for VOI this happened only after the 6th HRP. Serology had limited impact on HSe. The higher the probability of infection, the higher the probabilities of detection and spread. TMI resulted in the lowest average annual costs if one incursion of bovTB was expected every year; however, assuming one introduction in 10 or 30 years, VOI resulted in the lowest average costs. It may be more cost-effective to focus on imported high-risk animals coming into contact with Danish livestock than to use TMI as surveillance on all pigs from non-controlled housing.
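    As a stylized illustration of the herd-sensitivity calculation (a back-of-the-envelope Python sketch, not the paper's scenario tree model), the snippet below accumulates 1-HSe over six 4-week high-risk periods; the per-carcass detection sensitivities and the number of pigs slaughtered per period are invented for illustration.

        # Stylized herd-sensitivity sketch: P(detect >= 1 positive carcass) per
        # 4-week high-risk period, and the probability of having missed the
        # infection after each period. All numbers below are assumptions.
        def herd_sensitivity(whp, test_se, n_slaughtered):
            # 1 - P(every slaughtered carcass escapes detection)
            return 1 - (1 - whp * test_se) ** n_slaughtered

        WHP = 0.05                        # within-herd prevalence (5% scenario)
        N_PER_PERIOD = 40                 # assumed pigs slaughtered per period
        SENSITIVITY = {"TMI": 0.30, "VOI": 0.05}  # assumed per-carcass sensitivities

        for regime, se in SENSITIVITY.items():
            p_missed = 1.0
            for hrp in range(1, 7):       # six consecutive high-risk periods
                hse = herd_sensitivity(WHP, se, N_PER_PERIOD)
                p_missed *= 1 - hse       # accumulate 1-HSe across periods
                print(f"{regime} HRP {hrp}: HSe={hse:.2f}, P(missed)={p_missed:.3f}")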

    Distributed Weight Selection in Consensus Protocols by Schatten Norm Minimization

    In average consensus protocols, nodes in a network perform an iterative weighted average of their estimates and those of their neighbors. The protocol converges to the average of the initial estimates of all nodes in the network. The speed of convergence of average consensus protocols depends on the weights selected on links (to neighbors). In this paper we address how to select the weights in a given network so that these protocols converge quickly. We approximate the optimal weight selection problem by the minimization of the Schatten p-norm of a matrix, subject to constraints related to the connectivity of the underlying network. We then provide a totally distributed gradient method to solve the Schatten norm optimization problem. By tuning the parameter p in our proposed minimization, we can simply trade off the quality of the solution (i.e., the speed of convergence) against communication/computation requirements (in terms of the number of messages exchanged and the volume of data processed). Simulation results show that our approach already provides very good performance for values of p that need only limited information exchange. The weight optimization iterative procedure can also run in parallel with the consensus protocol, forming a joint consensus-optimization procedure.
    Comment: N° RR-8078 (2012)
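    A minimal, centralized Python sketch of the optimization is given below. It parameterizes the consensus matrix W by the link weights and runs gradient descent on the surrogate trace(W^p) for even p, which for this symmetric W equals the p-th power of its Schatten p-norm; the paper's contribution is computing the same gradient in a fully distributed way. The example graph, step size, and choice of p are assumptions.

        import numpy as np

        # Example graph (ring plus a chord); illustrative, not the paper's setup.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        n, p, step = 4, 4, 0.02

        def build_W(w):
            # W = I - sum_l w_l (e_i - e_j)(e_i - e_j)^T: symmetric, rows sum to 1.
            W = np.eye(n)
            for l, (i, j) in enumerate(edges):
                W[i, i] -= w[l]; W[j, j] -= w[l]
                W[i, j] += w[l]; W[j, i] += w[l]
            return W

        w = np.full(len(edges), 0.25)     # initial link weights
        for _ in range(300):
            M = np.linalg.matrix_power(build_W(w), p - 1)
            for l, (i, j) in enumerate(edges):
                # d trace(W^p)/d w_l = -p (M_ii + M_jj - 2 M_ij), so descend:
                w[l] += step * p * (M[i, i] + M[j, j] - 2 * M[i, j])

        W = build_W(w)
        print("trace(W^p):", np.trace(np.linalg.matrix_power(W, p)))
        print("second-largest |eigenvalue|:", sorted(abs(np.linalg.eigvalsh(W)))[-2])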

    Design and Analysis of Distributed Averaging with Quantized Communication

    Consider a network whose nodes hold some initial values, and suppose we wish to design an algorithm, built on neighbor-to-neighbor interactions, whose ultimate goal is convergence to the average of all initial node values or to some value close to that average. Such an algorithm is generically called "distributed averaging," and our goal in this paper is to study the performance of a subclass of deterministic distributed averaging algorithms in which the information exchange between neighboring nodes (agents) is subject to uniform quantization. With such quantization, convergence to the precise average cannot be achieved in general; instead, the algorithm converges to some value close to it, called a quantized consensus. Using Lyapunov stability analysis, we characterize the convergence properties of the resulting nonlinear quantized system. We show that in finite time, and depending on the initial conditions, the algorithm will either cause all agents to reach a quantized consensus, where the consensus value is the largest quantized value not greater than the average of their initial values, or lead all variables to cycle in a small neighborhood around the average. In the latter case, we identify tight bounds on the size of the neighborhood, and we further show that the error can be made arbitrarily small by adjusting the algorithm's parameters in a distributed manner.
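    The snippet below is a representative Python sketch of distributed averaging under uniform quantization (one common variant of such updates, not necessarily the exact rule analyzed in the paper): each node broadcasts a quantized copy of its state and moves toward its neighbors' quantized values. The graph, quantization step, and gain are assumptions.

        import numpy as np

        delta, eps = 1.0, 0.2             # quantization step, update gain
        neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring of 4 nodes
        x = np.array([3.2, 7.9, 1.4, 5.5])

        def q(v):
            # Uniform quantizer: largest multiple of delta not exceeding v.
            return delta * np.floor(v / delta)

        for _ in range(50):
            qx = q(x)                     # values actually exchanged on links
            x = x + eps * np.array([sum(qx[j] - qx[i] for j in neighbors[i])
                                    for i in range(len(x))])

        # The update is sum-preserving on an undirected graph, so the true
        # average is conserved while states settle near a quantized consensus.
        print("average:", x.mean(), "final states:", np.round(x, 2))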

    Convergence Hypotheses are Ill-Posed: Non-stationarity of Cross-Country Income Distribution Dynamics

    The recent literature on “convergence” of cross-country per capita incomes has been dominated by two competing hypotheses: “global convergence” and “club convergence”. This debate has recently relied on the study of limiting distributions of estimated income distribution dynamics. Utilizing new measures of “stochastic stability”, we establish two stylized facts that question the fruitfulness of the literature’s focus on asymptotic income distributions. The first stylized fact is the non-stationarity of transition dynamics, in the sense of changing transition kernels, which renders all “convergence” hypotheses that make long-term predictions about the income distribution, based on relatively short time series, less meaningful. The second stylized fact is the periodic emergence, disappearance, and re-emergence of a “stochastically stable” middle-income group. We show that the probability of escaping a low-income poverty trap depends on the existence of such a stable middle-income group. While this does not answer the perennial questions about the long-term effects of globalization on the cross-country income distribution, it does shed some light on the types of environments that are conducive to a narrowing global income distribution.
    Keywords: convergence clubs; transition kernel; stochastic stability
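    As a toy Python illustration, on synthetic data, of the diagnostic behind the first stylized fact: estimate discrete transition kernels over income groups for two sub-periods and compare them; materially different kernels signal non-stationary transition dynamics. The number of groups, the sample sizes, and the synthetic panels are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        K = 3                             # income groups: 0=low, 1=middle, 2=high

        def estimate_kernel(s_t, s_t1):
            # Row-normalized counts: P[i, j] = P(group j at t+1 | group i at t).
            P = np.zeros((K, K))
            for i, j in zip(s_t, s_t1):
                P[i, j] += 1
            return P / P.sum(axis=1, keepdims=True)

        # Synthetic country panels for an early and a late sub-period.
        early_t = rng.integers(0, K, 100)
        early_t1 = np.clip(early_t + rng.integers(-1, 2, 100), 0, K - 1)
        late_t = rng.integers(0, K, 100)
        late_t1 = np.clip(late_t + rng.choice([-1, 0, 0, 1], 100), 0, K - 1)

        P_early = estimate_kernel(early_t, early_t1)
        P_late = estimate_kernel(late_t, late_t1)
        # A large gap between sub-period kernels indicates non-stationarity.
        print("max |P_early - P_late|:", np.abs(P_early - P_late).max())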

    Q-ESP: a QoS-compliant Security Protocol to enrich IPSec Framework

    IPSec is a protocol suite that allows secure connections between branch offices and secure VPN access. However, efforts to improve IPSec are still under way; one aspect of this improvement is taking Quality of Service (QoS) requirements into account. QoS is the ability of the network to provide a service at an assured service level while optimizing the global usage of network resources. The QoS level that a flow receives depends on a six-bit identifier in the IP header, the so-called Differentiated Services code point (DSCP). Multi-field classifiers classify a packet by inspecting its IP/TCP headers to decide how the packet should be processed. The current IPSec standard offers hardly any guidance for doing this, because the existing IPSec ESP security protocol hides much of this information in its encrypted payloads, preventing network control devices such as routers and switches from using it to perform classification appropriately. To solve this problem, we propose a QoS-friendly Encapsulated Security Payload (Q-ESP), a new IPSec security protocol that provides both security and QoS support. We also present our NetBSD kernel-based implementation as well as our evaluation results for Q-ESP.
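    To make the DSCP mechanics concrete, the Python sketch below marks an application socket with a DSCP (here Expedited Forwarding, value 46): the DSCP occupies the six high-order bits of the IP TOS/Traffic Class byte that diffserv classifiers key on, and preserving the usability of such markings once ESP has encrypted the inner headers is exactly what Q-ESP targets. The destination address and port are placeholders.

        import socket

        DSCP_EF = 46                      # Expedited Forwarding (RFC 3246)
        tos = DSCP_EF << 2                # DSCP sits in bits 7..2 of the TOS byte

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
        sock.sendto(b"probe", ("192.0.2.1", 9999))  # TEST-NET placeholder address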