
    DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    The goal of this 3-year project was to facilitate more productive, dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. The project addressed two broad problems. First, there was no Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing, and retrieving user policies and Virtual Organization (VO) policies. Second, there were no tools to resolve and enforce policies in OGSA-based Grids. To address these problems, our overall approach was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, enabling a Grid user to centralize (where appropriate) and manage his or her policies. The corresponding organizational service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based Grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware Grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.
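    The report does not include code, but the matching step it describes, combining a user's MyPolMan policy with the VO's VOPolMan policy into an effective policy, can be pictured with a minimal sketch. All field names below are hypothetical; the actual policy schemas are not given in the report.

```python
# Hypothetical policy records; the MyPolMan / VOPolMan schemas are not
# shown in the report, so these field names are illustrative only.
user_policy = {"max_cpu_hours": 100, "allowed_sites": {"siteA", "siteB"}}
vo_policy   = {"max_cpu_hours": 50,  "allowed_sites": {"siteB", "siteC"}}

def match_policies(user, vo):
    """Combine a user policy with the VO policy: the effective policy is
    the most restrictive intersection of the two, mirroring the explicit
    policy-matching step the report describes."""
    return {
        "max_cpu_hours": min(user["max_cpu_hours"], vo["max_cpu_hours"]),
        "allowed_sites": user["allowed_sites"] & vo["allowed_sites"],
    }

print(match_policies(user_policy, vo_policy))
# {'max_cpu_hours': 50, 'allowed_sites': {'siteB'}}
```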

    Cloud Computing Security, An Intrusion Detection System for Cloud Computing Systems

    Cloud computing is widely considered an attractive service model because it minimizes investment: its costs are in direct relation to usage and demand. However, the distributed nature of cloud computing environments, their massive resource aggregation, wide user access, and efficient, automated sharing of resources enable intruders to exploit clouds to their advantage. To combat intruders, several security solutions for cloud environments adopt Intrusion Detection Systems (IDSs). However, most IDS solutions are not suitable for cloud environments because of problems such as single points of failure, centralized load, high false-positive alarm rates, insufficient attack coverage, and inflexible design. This thesis defines a framework for a cloud-based IDS that addresses the deficiencies of current IDS technology. The framework deals with threats that exploit vulnerabilities to attack the various service models of a cloud system. It integrates behavior-based and knowledge-based techniques to detect masquerade, host, and network attacks, and provides efficient deployments to detect DDoS attacks. The thesis makes three main contributions. The first is a Cloud Intrusion Detection Dataset (CIDD) to train and test an IDS. The second is the Data-Driven Semi-Global Alignment (DDSGA) approach and three behavior-based strategies to detect masquerades in cloud systems. The third and final contribution is signature-based detection: we introduce two deployments, a distributed one and a centralized one, to detect host, network, and DDoS attacks. Furthermore, we discuss the integration and correlation of alerts from any component to build a summarized attack report. The thesis describes in detail and experimentally evaluates the proposed IDS and the alternative deployments.

    Acknowledgments:
    • This PhD was achieved through an international joint program, a collaboration between the University of Pisa in Italy (Department of Computer Science, Galileo Galilei PhD School) and the University of Arizona in the USA (College of Electrical and Computer Engineering).
    • The PhD topic is categorized under both Computer Engineering and Information Engineering.
    • The thesis author is also known as "Hisham A. Kholidy".
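    For readers unfamiliar with the alignment family DDSGA belongs to, the sketch below shows plain semi-global alignment of a stored command signature against a new session: gaps before and after the signature inside the session are free, so the signature may match any contiguous region, and a low best score flags a possible masquerader. This is a textbook illustration under assumed scoring parameters, not the thesis's data-driven DDSGA variant, which tunes its scoring from user data.

```python
def semi_global_align(signature, session, match=2, mismatch=-1, gap=-2):
    """Score a user's stored command signature against a new session via
    semi-global alignment (free leading/trailing gaps in the session)."""
    n, m = len(signature), len(session)
    # dp[i][j] = best score aligning signature[:i] with session[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap      # skipping signature commands costs
    # dp[0][j] stays 0: leading session commands may be skipped for free
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if signature[i - 1] == session[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align / substitute
                           dp[i - 1][j] + gap,     # gap in session
                           dp[i][j - 1] + gap)     # gap in signature
    return max(dp[n])   # free trailing suffix: best score anywhere in last row

sig = ["ls", "cd", "vim", "make", "gdb"]
session = ["ssh", "ls", "cd", "vim", "make", "gdb", "exit"]
print(semi_global_align(sig, session))   # 10: signature found intact
```

    In a masquerade detector built on this idea, the score is compared against a threshold learned from the user's own historical sessions; sessions scoring well below it are flagged.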

    Reconfigurable middleware architectures for large scale sensor networks

    Wireless sensor networks, in an effort to be energy efficient, typically lack the high-level abstractions of advanced programming languages. Though strong, the dichotomy between these two paradigms can be overcome. The SENSIX software framework, described in this dissertation, uniquely integrates constraint-dominated wireless sensor networks with the flexibility of object-oriented programming models, without violating the principles of either. Though these two computing paradigms are contradictory in many ways, SENSIX bridges them to yield a dynamic middleware abstraction that unifies low-level resource-aware task reconfiguration and high-level object recomposition. Through SENSIX's layered approach, the software developer creates a domain-specific sensing architecture by defining a customized task specification and utilizing object inheritance. In addition, SENSIX performs better at large scales (on the order of 1,000 nodes or more) than other sensor network middleware that does not include such unified facilities for vertical integration.
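    The abstract does not show the SENSIX API, but the idea it describes, deriving a domain-specific sensing task by inheritance while the middleware reconfigures it as node resources change, might look roughly like the sketch below. All class and method names here are invented for illustration; they are not SENSIX's actual interface.

```python
class SensingTask:
    """Base task (hypothetical): the middleware calls sample() each period
    and may call reconfigure() as the node's resources change."""
    def __init__(self, period_s):
        self.period_s = period_s

    def sample(self):
        raise NotImplementedError

    def reconfigure(self, energy_budget):
        # Resource-aware adaptation: trade sampling rate for energy.
        if energy_budget < 0.2:
            self.period_s *= 2

class TemperatureTask(SensingTask):
    """Domain-specific task obtained by object inheritance."""
    def __init__(self):
        super().__init__(period_s=30)

    def sample(self):
        return 21.5   # stand-in for a real ADC driver read

task = TemperatureTask()
task.reconfigure(energy_budget=0.1)
print(task.period_s, task.sample())   # 60 21.5
```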

    Timely processing of big data in collaborative large-scale distributed systems

    Today’s Big Data phenomenon, characterized by huge volumes of data produced at very high rates by heterogeneous and geographically dispersed sources, is fostering the employment of large-scale distributed systems that leverage parallelism, fault tolerance, and locality awareness to deliver suitable performance. Among the several areas where Big Data is gaining significance, the protection of Critical Infrastructure is one of the most strategic, since it affects the stability and safety of entire countries. Intrusion detection mechanisms can benefit greatly from novel Big Data technologies, because these make much more information available for sharpening the accuracy of threat discovery. A key way to increase the amount of data available for detection is collaboration (meant as information sharing) among distinct actors that share the common goal of recognizing malicious activities as early as possible. Indeed, if an agreement can be found to share their data, all parties can substantially improve their cyber defenses. The abstraction of the Semantic Room (SR) allows interested parties to form trusted and contractually regulated federations for the sake of secure information sharing and processing. Another crucial point for the effectiveness of cyber protection mechanisms is the timeliness of detection: the sooner a threat is identified, the faster proper countermeasures can be put in place to confine any damage. Within this context, the contributions reported in this thesis are threefold:

    * As a case study of how collaboration can enhance the efficacy of security tools, we developed a novel algorithm for the detection of stealthy port scans, named R-SYN (Ranked SYN port scan detection); a simplified sketch of the underlying idea appears after this list. We implemented it in three distinct technologies, all integrated within an SR-compliant architecture that allows for collaboration through information sharing: (i) a centralized Complex Event Processing (CEP) engine (Esper), (ii) a framework for distributed event processing (Storm), and (iii) Agilis, a novel platform for batch-oriented processing that leverages the Hadoop framework and RAM-based storage for fast data access. Regardless of the technology employed, all the evaluations showed that increasing the number of participants (that is, increasing the amount of input data available) improves detection accuracy. The experiments made clear that a distributed approach achieves lower detection latency and keeps up with higher input throughput than a centralized one.

    * Distributing the computation over a set of physical nodes raises the issue of how to assign the available resources to the processing tasks so as to minimize the time the computation takes to complete. We investigated this aspect in Storm by developing two distinct scheduling algorithms, both aimed at decreasing the average processing time of a single input event by decreasing the inter-node traffic (a sketch of this idea also follows the list). Experimental evaluations showed that these two algorithms can improve performance by up to 30%.

    * Computations in online processing platforms (like Esper and Storm) run continuously, and the need to refine running computations or add new ones, together with the need to cope with input variability, requires the ability to adapt the resource allocation at runtime, which entails a set of additional problems. Among them, the most relevant concern how to handle incoming data and processing state while the topology is being reconfigured, and the issue of temporarily reduced performance. To this end, we also explored the alternative approach of running the computation periodically on batches of input data: although it incurs a penalty in processing latency, it eliminates the great complexity of dynamic reconfiguration. We chose Hadoop as the batch-oriented processing framework and developed strategies specific to computations based on time windows, which are very likely to be used for pattern recognition purposes, as in intrusion detection. Our evaluations compared these strategies and made evident the kind of performance this approach can provide.
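    The sketch below illustrates the general idea behind SYN port scan detection as the first contribution describes it: rank sources by SYNs that never complete the handshake across many distinct ports. It is a simplified, single-node illustration; the thesis's actual R-SYN algorithm and its ranking formula are not reproduced here.

```python
from collections import defaultdict

half_open = defaultdict(set)   # src IP -> set of (dst, port) awaiting ACK

def observe(pkt):
    """Track half-open connections from a stream of {src, dst, port, flags}."""
    key = (pkt["dst"], pkt["port"])
    if pkt["flags"] == "SYN":
        half_open[pkt["src"]].add(key)
    elif pkt["flags"] == "ACK":
        half_open[pkt["src"]].discard(key)   # handshake completed

def ranked_suspects(min_ports=10):
    """Rank sources by distinct (host, port) pairs left half-open."""
    scores = {src: len(keys) for src, keys in half_open.items()}
    return sorted((item for item in scores.items() if item[1] >= min_ports),
                  key=lambda kv: -kv[1])
```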
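    The second contribution's goal, cutting inter-node traffic by colocating communicating tasks, can likewise be sketched. This greedy placement assumes pairwise tuple rates between tasks are known; it is only the idea the bullet describes, not the thesis's actual Storm schedulers.

```python
def schedule(tasks, traffic, nodes, capacity):
    """Greedily place the heaviest-communicating task pairs on the same
    node, so their tuples never cross the network.

    traffic: {(task_a, task_b): tuples_per_second}
    """
    placement, load = {}, {n: 0 for n in nodes}
    for (a, b), rate in sorted(traffic.items(), key=lambda kv: -kv[1]):
        for t in (a, b):
            if t not in placement:
                partner = b if t == a else a
                target = placement.get(partner)
                # Prefer the partner's node while it has capacity left.
                if target is None or load[target] >= capacity:
                    target = min(nodes, key=lambda n: load[n])
                placement[t] = target
                load[target] += 1
    for t in tasks:                          # tasks with no recorded traffic
        if t not in placement:
            n = min(nodes, key=lambda k: load[k])
            placement[t] = n
            load[n] += 1
    return placement
```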

    Intelligent Sensor Networks

    In the last decade, wireless and wired sensor networks have attracted much attention. However, most designs target general sensor network concerns, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.

    Management And Security Of Multi-Cloud Applications

    Single-cloud management platform technology has reached maturity and is quite successful in information technology applications. Enterprises and application service providers are increasingly adopting a multi-cloud strategy to reduce the risk of cloud service provider lock-in and cloud blackouts while gaining benefits such as competitive pricing, flexible resource provisioning, and better points of presence. Another class of applications attracting increasing interest from cloud service providers is carriers' virtualized network services. However, virtualized carrier services require high levels of availability and performance and impose stringent requirements on cloud services; they necessitate multi-cloud management and innovative techniques for placement and performance management. We consider two classes of distributed applications, virtual network services and the next generation of healthcare, that would benefit immensely from deployment over multiple clouds. This thesis deals with the design and development of new processes and algorithms to enable these classes of applications. We have developed a method for optimizing multi-cloud platforms that paves the way for optimized placement of both classes of services. The placement approach we follow is predictive, cost-optimized, latency-controlled virtual resource placement for both types of applications. To improve the availability of virtual network services, we have made innovative use of machine and deep learning to develop a framework for fault detection and localization. Finally, to secure patient data flowing through the wide expanse of sensors, the cloud hierarchy, the virtualized network, and the visualization domain, we have developed hierarchical autoencoder models for data in motion between the IoT domain and the multi-cloud domain and within the multi-cloud hierarchy.
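    As one way to picture the reconstruction-error principle behind autoencoder-based monitoring of data in motion, here is a minimal flat (non-hierarchical) autoencoder anomaly detector in NumPy. It illustrates the general technique only; the thesis's hierarchical models and their input features are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=8, epochs=500, lr=0.01):
    """Single-hidden-layer autoencoder trained by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # encode
        R = H @ W2 + b2                       # decode (linear output)
        E = R - X                             # reconstruction error
        dW2 = H.T @ E / n; db2 = E.mean(axis=0)
        dH = (E @ W2.T) * (1 - H ** 2)        # backprop through tanh
        dW1 = X.T @ dH / n; db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def anomaly_score(x, params):
    """High reconstruction error suggests the input is anomalous."""
    W1, b1, W2, b2 = params
    r = np.tanh(x @ W1 + b1) @ W2 + b2
    return float(np.sum((r - x) ** 2))

X_normal = rng.normal(0, 1, (500, 16))        # stand-in for feature vectors
params = train_autoencoder(X_normal)
print(anomaly_score(rng.normal(0, 1, 16), params))   # typically lower
print(anomaly_score(rng.normal(5, 1, 16), params))   # typically much higher
```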

    Doctor of Philosophy

    Low-cost wireless embedded systems can make radio channel measurements for the purposes of radio localization, synchronization, and breathing monitoring. Most such systems measure the radio channel via the received signal strength indicator (RSSI), which is widely available on inexpensive radio transceivers. However, the use of standard RSSI imposes multiple limitations on the accuracy and reliability of such systems; higher accuracy is only accessible with very high-cost systems, in both bandwidth and device cost. Wireless devices also rely on a synchronized notion of time to coordinate tasks (transmit, receive, sleep, etc.), especially in time-based localization systems. Existing solutions use multiple message exchanges to estimate time offset and clock skew, which further increases channel utilization. In this dissertation, the designs of systems that use RSSI for device-free localization, device-based localization, and breathing monitoring are evaluated. Next, novel wireless embedded systems are designed and evaluated to deliver more fine-grained radio signal measurements to the application. I design and study the effect of increasing the resolution of RSSI beyond the typical 1 dB step size, the current standard, with two example applications: breathing monitoring and gesture recognition. Lastly, the Stitch architecture is proposed to allow frequency and time synchronization of multiple nodes' clocks. The prototype platform, Chronos, implements radio frequency synchronization (RFS), which accesses complex baseband samples from a low-power, low-cost narrowband radio, estimates the carrier frequency offset, and iteratively drives the difference between two nodes' main local oscillators (LOs) to less than 3 parts per billion (ppb). An optimized time synchronization and ranging protocol (EffToF) is designed and implemented to achieve the same timing accuracy as the state of the art but with 59% less utilization of the UWB channel. Based on this dissertation, I foresee Stitch and RFS further improving the robustness of communications infrastructure to GPS jamming, allowing exploration of applications such as distributed beamforming and MIMO, and enabling new highly synchronous wireless sensing and actuation systems.
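    The step the abstract attributes to RFS, estimating carrier frequency offset from complex baseband samples, follows a standard signal-processing recipe. Below is a minimal sketch of one common estimator, the phase of the lag-1 autocorrelation; it illustrates the general technique, not Chronos's implementation.

```python
import numpy as np

def estimate_cfo(x, fs, lag=1):
    """Estimate carrier frequency offset (Hz) from complex baseband samples.

    A constant offset f rotates consecutive samples by 2*pi*f*lag/fs, so
    the average rotation angle (phase of the lag-`lag` autocorrelation)
    recovers f. Unambiguous for |f| < fs / (2 * lag).
    """
    acc = np.sum(x[lag:] * np.conj(x[:-lag]))
    return np.angle(acc) * fs / (2 * np.pi * lag)

# Synthetic check: a 250 Hz offset sampled at 1 MHz.
fs = 1e6
n = np.arange(10000)
x = np.exp(2j * np.pi * 250 * n / fs)
print(estimate_cfo(x, fs))   # ~250.0
```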