
    Using Signal Processing Tools for Regulation Analysis and Implementation

    Regulators often face the challenge of designing and implementing rules that both respond to policy objectives and map clearly onto day-to-day operations and practices in the marketplace. In many cases, the resulting codes end up as a cumbersome collection of conditions that are very difficult to evaluate and redesign. This paper suggests that some of the most commonly used tools in signal processing could offer a convenient vehicle for tackling these difficulties. Starting from a SIMULINK(R) model of Banco de Mexico's regulation of the foreign exchange transactions of commercial banks, the paper offers an example of how those tools could be used in this context.
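The paper's SIMULINK model is not reproduced in the abstract, but the general idea of expressing a regulatory rule as a signal-processing block can be sketched. Below, a hypothetical position limit on banks' net FX holdings is modelled as a saturation block applied to a stream of daily flows; all names and numbers are invented for illustration.

```python
# Illustrative sketch only (not the paper's actual model): a regulatory
# position limit expressed as a saturation nonlinearity on accumulated flows.

def saturate(x, limit):
    """Clamp a bank's net FX position to the regulatory limit."""
    return max(-limit, min(limit, x))

def regulated_positions(daily_flows, limit):
    """Integrate daily flows into a running position, enforcing the limit each day."""
    position, path = 0.0, []
    for flow in daily_flows:
        position = saturate(position + flow, limit)
        path.append(position)
    return path

flows = [4.0, 3.0, 5.0, -2.0, 6.0]   # hypothetical daily net purchases (USD millions)
print(regulated_positions(flows, limit=10.0))  # → [4.0, 7.0, 10.0, 8.0, 10.0]
```

Framing the rule this way makes it easy to simulate how the constraint interacts with other blocks (smoothing filters, auction triggers) before the legal text is finalised.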

    Secure high level communication protocol for CAN bus

    The Controller Area Network (CAN bus) is a bus based on differential signalling, originally developed for the automotive industry. It was later standardized as ISO 11898, which describes the data link layer as well as physical signalling. CAN allows precise configuration of bus timing and sampling points, which makes it usable over a wide range of distances and baud rates. It also offers a number of properties such as message acknowledgement, collision avoidance, message filtering and automatic retransmission of faulty messages. These properties make it suitable for many applications. Furthermore, the bus is well supported on microcontrollers and can even be found on larger SoCs, which makes the CAN bus ideal for microcontroller networks in buildings. Unfortunately, the CAN protocol itself has no support for node authentication or message encryption, so these requirements have to be met at a higher layer. We present a high-level protocol for the CAN bus that supports authentication and encryption and therefore allows the CAN bus to be used in security-dependent systems such as an access management system or in industrial automation.
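The abstract does not specify the authors' message format, but the core constraint, fitting authentication into 8-byte classic CAN frames, can be sketched. The example below is a hypothetical scheme (not the authors' protocol): each data frame is followed by a frame carrying a truncated HMAC bound to the CAN ID and a replay counter; key distribution and encryption are out of scope.

```python
# Hypothetical sketch of frame authentication on classic CAN (8-byte payloads).
# KEY, CAN_ID and the counter scheme are illustrative assumptions.
import hmac, hashlib, struct

KEY = b"pre-shared-16-byte-key!"   # hypothetical pre-shared key
CAN_ID = 0x123

def auth_frames(counter: int, payload: bytes):
    """Return (data_frame, mac_frame) for one authenticated CAN message."""
    assert len(payload) <= 8, "classic CAN payload is at most 8 bytes"
    # Bind the MAC to the CAN ID and a monotonically increasing counter
    # so that replayed frames are rejected.
    msg = struct.pack(">IQ", CAN_ID, counter) + payload
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]  # truncate to one frame
    return payload, mac

def verify(counter: int, payload: bytes, mac: bytes) -> bool:
    msg = struct.pack(">IQ", CAN_ID, counter) + payload
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(expected, mac)

data, mac = auth_frames(1, b"unlock")
print(verify(1, b"unlock", mac))   # True
print(verify(2, b"unlock", mac))   # False: wrong counter, replay rejected
```

Truncating the MAC to 8 bytes trades security margin for bus bandwidth, which is the kind of design decision any high-level CAN security protocol has to make.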

    Addressing practical challenges for anomaly detection in backbone networks

    Network monitoring has always been a topic of foremost importance for both network operators and researchers, for reasons ranging from anomaly detection to traffic classification or capacity planning. Nowadays, as networks become more and more complex, traffic increases and security threats multiply, achieving a deeper understanding of what is happening in the network has become essential. In particular, due to the considerable growth of cybercrime, research in the field of anomaly detection has drawn significant attention in recent years and a large number of proposals have been made. All the same, when it comes to deploying solutions in real environments, some of them fail to meet crucial requirements. Taking this into account, this thesis focuses on filling this gap between the research and the non-research world. Prior to the start of this work, we identified several problems. First, there is a clear lack of detailed and updated information on the most common anomalies and their characteristics. Second, unawareness of sampled data is still common, although the performance of anomaly detection algorithms is severely affected by it. Third, operators currently need to invest many work-hours to manually inspect and classify detected anomalies in order to act accordingly and take the appropriate mitigation measures. This is further exacerbated by the high number of false positives and false negatives, and because anomaly detection systems are often perceived as extremely complex black boxes. Analysing an issue is essential to fully comprehend the problem space and to be able to tackle it properly. Accordingly, the first block of this thesis seeks to obtain detailed and updated real-world information on the most frequent anomalies occurring in backbone networks. It first reports on the performance of different commercial systems for anomaly detection and analyses the types of network anomalies detected.
    Afterwards, it focuses on further investigating the characteristics of the anomalies found in a backbone network, using one of the tools for more than half a year. Among other results, this block confirms the need to apply sampling in an operational environment, as well as the unacceptably high number of false positives and false negatives still reported by current commercial tools. On the whole, the presence of sampling in large networks for monitoring purposes has become almost mandatory and, therefore, any anomaly detection algorithm that does not take it into account might report incorrect results. In the second block of this thesis, the dramatic impact of sampling on the performance of well-known anomaly detection techniques is analysed and confirmed. However, we show that the results change significantly depending on the sampling technique used and on the metric selected to perform the comparison. In particular, we show that Packet Sampling outperforms Flow Sampling, unlike previously reported. Furthermore, we observe that Selective Sampling (SES), a sampling technique that focuses on small flows, obtains much better results than traditional sampling techniques for scan detection. Consequently, we propose Online Selective Sampling, a sampling technique that obtains the same good performance for scan detection as SES but works on a per-packet basis instead of keeping all flows in memory. We validate and evaluate our proposal and show that it can operate online and uses far fewer resources than SES. Although the literature offers plenty of techniques for detecting anomalous events, research on anomaly classification and extraction (e.g., to further investigate what happened or to share evidence with third parties involved) is rather marginal. This makes it harder for network operators to analyse reported anomalies, because they depend solely on their experience to do the job. Furthermore, this task is an extremely time-consuming and error-prone process.
    The third block of this thesis targets this issue and brings it together with the knowledge acquired in the previous blocks. In particular, it presents a system for automatic anomaly detection, extraction and classification with high accuracy and very low false positives. We deploy the system in an operational environment and show its usefulness in practice. The fourth and last block of this thesis presents a generalisation of our system that analyses all the traffic, not only network anomalies. This new system seeks to further help network operators by summarising the most significant traffic patterns in their network. In particular, we generalise our system to deal with big network traffic data: it handles source/destination IPs, source/destination ports, protocol, source/destination Autonomous Systems, layer-7 application and source/destination geolocation. We first deploy a prototype in the European backbone network of GÉANT and show that it can process large amounts of data quickly and build highly informative and compact reports that are very useful for understanding what is happening in the network. Second, we deploy it in a completely different scenario and show how it can also be successfully used in a real-world use case where we analyse the behaviour of highly distributed devices related to a critical-infrastructure sector.
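The thesis does not spell out the Online Selective Sampling algorithm in the abstract, but the underlying idea, biasing the sample toward small flows on a per-packet basis, can be sketched. The thresholds, probabilities and flow-state structure below are illustrative assumptions, not the thesis's actual parameters.

```python
# Rough per-packet sketch in the spirit of Online Selective Sampling:
# keep every packet of a flow until the flow exceeds z packets, then
# sample the remainder with a low probability, so small (scan-like)
# flows survive while elephant flows are heavily thinned.
import random
from collections import defaultdict

def selective_sample(packets, z=3, p_small=1.0, p_large=0.05, seed=42):
    """`packets` is a sequence of flow keys, one entry per packet."""
    rng = random.Random(seed)
    counts = defaultdict(int)   # in practice a bounded sketch, not a full dict
    sampled = []
    for flow in packets:
        counts[flow] += 1
        p = p_small if counts[flow] <= z else p_large
        if rng.random() < p:
            sampled.append(flow)
    return sampled

# A scan-like workload: many tiny flows plus one elephant flow.
trace = ["scan%d" % i for i in range(20)] + ["elephant"] * 200
kept = selective_sample(trace)
print(len([f for f in kept if f.startswith("scan")]))  # prints 20: all tiny flows kept
```

Because scan traffic consists almost entirely of small flows, a sampler with this bias preserves exactly the evidence a scan detector needs while discarding most bulk-transfer packets.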

    Reconciliation, Restoration and Reconstruction of a Conflict Ridden Country

    Conflict has sadly been a constant part of history. Winning a conflict and making a lasting peace are often not the same thing. While a peace treaty ends a conflict and often dictates terms from the winners’ perspective, it may not create a lasting peace. Short of unconditional surrender, modern conflict ends with a negotiated cessation of hostilities. Such accords may include some initial reconstruction agreements, but Reconciliation, Restoration and Reconstruction (RRR) is a long-term process. This study maintains that, to achieve a lasting peace: 1) the culture and beliefs of the conflict nation must be continuously considered, and 2) RRR is a long-term effort that will occur over years, not just in the immediate wake of signing a treaty or agreement. To ensure the inclusion of all stakeholders and obtain the best results in dealing with this “wicked problem”, an array of Operations Research (OR) techniques can be used to support the long-term planning and execution of an RRR effort. The final decisions will always be political, but the analysis provided by an OR support team will guide decision makers toward consensus decisions that consider all stakeholder needs. The development of the value hierarchy framework in this dissertation is a keystone of building a rational, OR-supported long-term plan for a successful RRR. The primary aim of the research is to propose a framework and an associated set of guidelines derived from appropriate techniques of OR, Decision Analysis and Project Management (from the development of a consensus-based value hierarchy to its implementation, feedback and steering corrections) that may be applied to help RRR efforts in any conflict-ridden country across the globe, after incorporating changes particular to the country witnessing a prolonged conflict.
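The dissertation's value hierarchy is not reproduced in the abstract, but the standard additive value model behind such hierarchies is easy to illustrate. The objectives, weights and plan scores below are entirely invented; they only show the mechanics of scoring candidate RRR plans against stakeholder-weighted objectives.

```python
# Hypothetical additive value hierarchy: each objective carries a
# stakeholder-agreed weight and a single-objective score per candidate plan.
# All values here are invented for illustration.

HIERARCHY = {                 # objective: (weight, score of plan A, score of plan B)
    "security":        (0.35, 0.6, 0.8),
    "reconciliation":  (0.25, 0.9, 0.5),
    "infrastructure":  (0.20, 0.5, 0.7),
    "local ownership": (0.20, 0.8, 0.4),
}

def plan_value(plan_index):
    """Additive value: sum of weight * single-objective score."""
    return sum(w * scores[plan_index] for w, *scores in HIERARCHY.values())

print(round(plan_value(0), 3))  # plan A → 0.695
print(round(plan_value(1), 3))  # plan B → 0.625
```

In practice the weights themselves are the consensus artifact: eliciting and revisiting them with all stakeholders is what keeps the plan politically viable as conditions change.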

    A literature review on complexity of financial regulation

    Since the financial crisis, which finally aroused public vigilance against inefficient regulation, there has been a remarkable literature on over-complexity and several rounds of reform in financial regulation. Yet some relevant questions remain unanswered. This paper, in the form of a literature review, summarises the most critical problems concerning complexity and the efforts made so far to solve them. Starting by introducing the importance of complexity through regulation and crises, the paper first traces the complexity of financial regulation over time, showing that complexity expanded significantly after Basel II. Next, several comparisons between complex and simple regulatory rules reveal the shortcomings of complexity. Then the incentives behind complexity, three economic theories, and three typical methods of quantitative analysis are discussed in turn, implying that over-complexity is self-fulfilling and detrimental, and that it is imperative for regulators to find a simple and transparent replacement. While some plausible solutions have been tried, the validity of most is limited; among them, market-based approaches seem the most promising. Looking further ahead, complexity might be mitigated in the form of regulatory rules, but it will pervade the whole regulatory framework through greater reliance on supervisory discretion.

    SIMULATION ANALYSIS OF USMC HIMARS EMPLOYMENT IN THE WESTERN PACIFIC

    As a result of renewed focus on great power competition, the United States Marine Corps is currently undergoing a comprehensive force redesign. In accordance with the Commandant’s Planning Guidance and Force Design 2030, this redesign includes an increase of 14 rocket artillery batteries while divesting 14 cannon artillery batteries. These changes necessitate study into tactics and capabilities for rocket artillery against a peer threat in the Indo-Pacific region. This thesis implements an efficient design of experiments to simulate over 1.6 million Taiwan invasions using a stochastic, agent-based combat model. Varying tactics and capabilities as input, the model returns measures of effectiveness to serve as the response in metamodels, which are then analyzed for critical factors, interactions, and change points. The analysis provides insight into the principal factors affecting lethality and survivability for ground-based rocket fires. The major findings from this study include the need for increasingly distributed artillery formations, highly mobile launchers that can emplace and displace quickly, and the inadequacy of the unitary warheads currently employed by HIMARS units. Solutions robust to adversary actions and simulation variability can inform wargames and future studies as the Marine Corps continues to adapt in preparation for potential peer conflict.
    Captain, United States Marine Corps
    Approved for public release. Distribution is unlimited.
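The thesis's specific experimental design is not given in the abstract, but the space-filling idea behind "an efficient design of experiments" can be sketched with a basic Latin hypercube sample. The factors and ranges below (launcher count, displacement time) are hypothetical stand-ins for the thesis's actual inputs.

```python
# Minimal Latin hypercube sketch: each factor's range is cut into n_runs
# strata and each stratum is used exactly once, in random order, so the
# design covers the input space far more evenly than naive random sampling.
import random

def latin_hypercube(n_runs, bounds, seed=0):
    """bounds is a list of (lo, hi) pairs, one per input factor."""
    rng = random.Random(seed)
    design = []
    for lo, hi in bounds:
        strata = list(range(n_runs))
        rng.shuffle(strata)          # random stratum order for this factor
        col = [lo + (s + rng.random()) * (hi - lo) / n_runs for s in strata]
        design.append(col)
    return list(zip(*design))        # one tuple of factor settings per run

# Hypothetical factors: number of launchers, displacement time in minutes.
runs = latin_hypercube(8, [(2, 18), (1.0, 10.0)])
print(len(runs))  # prints 8
```

Each run's settings would then feed the combat model, and the resulting measures of effectiveness become the responses used to fit the metamodels.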