
    Secure entity authentication

    According to Wikipedia, authentication is the act of confirming the truth of an attribute of a single piece of data claimed true by an entity. Specifically, entity authentication is the process by which an agent in a distributed system gains confidence in the identity of a communicating partner (Bellare et al.). Legacy password authentication is still the most popular mechanism; however, it suffers from many limitations, such as social engineering attacks, dictionary attacks, and database leaks. To address the security concerns of legacy password-based authentication, many new authentication factors have been introduced, such as PINs (Personal Identification Numbers) delivered through out-of-band channels, human biometrics, and hardware tokens. However, each of these authentication factors has its own inherent weaknesses and security limitations. For example, phishing is still effective even when PINs are delivered through out-of-band channels. In this dissertation, three types of secure entity authentication schemes are developed to alleviate the weaknesses and limitations of existing authentication mechanisms: (1) an end-user authentication scheme based on Network Round-Trip Time (NRTT) to complement location-based authentication mechanisms; (2) an Apache Hadoop authentication mechanism based on Trusted Platform Module (TPM) technology; and (3) a web server authentication mechanism for phishing detection with a new detection factor, NRTT. In the first work, a new authentication factor based on NRTT is presented. Two research challenges (i.e., the secure measurement of NRTT and network instabilities) are addressed to show that NRTT can be used to uniquely and securely identify login locations and hence can support location-based web authentication mechanisms.
The experiments and analysis show that NRTT has superior usability, deployability, security, and performance properties compared to state-of-the-art web authentication factors. In the second work, departing from the Kerberos-centric approach, an authentication framework for Hadoop that utilizes Trusted Platform Module (TPM) technology is proposed. It is shown that pushing security down to the hardware level, in conjunction with software techniques, provides better protection than software-only solutions. The proposed approach provides significant security guarantees against insider threats that manipulate the execution environment without the consent of legitimate clients. Extensive experiments are conducted to validate the performance and security properties of the proposed approach. Moreover, the correctness and security guarantees are formally proved via Burrows-Abadi-Needham (BAN) logic. In the third work, together with a phishing victim identification algorithm, NRTT is used as a new phishing detection feature to improve the detection accuracy of existing phishing detection approaches. State-of-the-art phishing detection methods fall into two categories: heuristics and blacklists. The experiments show that combining NRTT with existing heuristics can improve overall detection accuracy while maintaining a low false positive rate. In the future, to develop a more robust and efficient phishing detection scheme, it is paramount for phishing detection approaches to carefully select features that strike the right balance between detection accuracy and robustness in the face of potential manipulation. In addition, leveraging Deep Learning (DL) algorithms to improve the performance of phishing detection schemes could be a viable alternative to traditional machine learning algorithms (e.g., SVM, LR), especially when handling complex and large-scale datasets.
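The dissertation's exact NRTT measurement protocol is not reproduced in the abstract; as a hedged, minimal sketch of the core idea, the snippet below compares the median of fresh round-trip samples against a per-location value enrolled at registration. The function names and the 5 ms tolerance are illustrative assumptions, not details from the source:

```python
import statistics

def rtt_matches_profile(samples_ms, enrolled_median_ms, tolerance_ms=5.0):
    """Compare the median of fresh RTT samples (in milliseconds) against
    a per-location median enrolled at registration. Taking the median of
    several probes dampens the network instabilities the work addresses."""
    observed = statistics.median(samples_ms)
    return abs(observed - enrolled_median_ms) <= tolerance_ms
```

In this sketch a login from the enrolled location yields samples close to the stored median and passes, while a phisher relaying credentials from elsewhere exhibits a different round-trip profile and fails the check.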

    The Queen's Guard: A Secure Enforcement of Fine-grained Access Control In Distributed Data Analytics Platforms

    Distributed data analytics platforms (e.g., Apache Spark, Apache Hadoop) provide high-level APIs for programmatically writing analytics tasks that run in a distributed fashion across multiple computing nodes. The design of these frameworks was primarily motivated by performance and usability, so security took a back seat. Consequently, they do not inherently support fine-grained access control or offer any plugin mechanism to enable it, making them risky to use in multi-tier organizational settings. There have been attempts to build "add-on" solutions to enable fine-grained access control for distributed data analytics platforms. In this paper, we first show that straightforward enforcement of "add-on" access control is insecure under adversarial code execution. Specifically, we show that an attacker can abuse platform-provided APIs to evade access controls without leaving any traces. Second, we design a two-layered (i.e., proactive and reactive) defense system to protect against API abuse. On submission of user code, our proactive security layer statically screens it for potential attack signatures prior to execution. The reactive security layer employs code instrumentation-based runtime checks and sandboxed execution to throttle any exploits at runtime. Next, we propose a new fine-grained access control framework with an enhanced policy language that supports map and filter primitives. Finally, we build a system named SecureDL with our new access control framework and defense system on top of Apache Spark, which ensures secure access control policy enforcement under adversaries capable of executing code. To the best of our knowledge, this is the first fine-grained attribute-based access control framework for distributed data analytics platforms that is secure against platform API abuse attacks. Performance evaluation shows that the overhead due to the added security is low.
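The paper's actual proactive layer is not specified in the abstract; as a hedged illustration of static screening for attack signatures, the sketch below walks a submitted script's syntax tree and flags attribute accesses that could reach platform internals. The signature list and function name are hypothetical, not from SecureDL:

```python
import ast

# Hypothetical signature list: attribute names that reach a platform's
# JVM internals and could let user code bypass an add-on access-control
# wrapper. A real deployment would curate this list per platform version.
FORBIDDEN_ATTRS = {"_jvm", "_jsc", "_gateway"}

def screen_user_code(source: str):
    """Proactive layer sketch: statically scan submitted code and report
    (line, attribute) pairs matching known attack signatures, before the
    code is ever scheduled for execution."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Attribute) and node.attr in FORBIDDEN_ATTRS:
            findings.append((node.lineno, node.attr))
    return findings
```

Static screening alone cannot catch dynamically constructed abuse (e.g., `getattr` with a computed name), which is why the paper pairs it with a reactive layer of instrumented runtime checks and sandboxing.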

    Video Forensics in Cloud Computing: The Challenges & Recommendations

    Forensic analysis of large video surveillance datasets requires computationally demanding processing and significant storage space. The standalone, often dedicated computing infrastructure currently used for this purpose is rather limited, due to practical limits on hardware scalability and the associated cost. Recently, cloud computing has emerged as a viable solution to computing resource limitations, taking full advantage of virtualisation capabilities and distributed computing technologies. Consequently, the opportunities cloud computing services provide to support the requirements of forensic video surveillance systems have recently been studied in the literature. However, such studies have been limited to very simple video analytics tasks carried out within a cloud-based architecture. The requirements of a larger-scale video forensic system are significantly greater and demand an in-depth study. In particular, there is a need to balance the benefits of cloud computing against the potential risks of security and privacy breaches of the video data. Understanding the different legal issues involved in deploying video surveillance in cloud computing will help make the proposed security architecture effective against potential threats, and hence lawful. In this work we conduct a literature review to understand the current regulations and guidelines behind establishing a trustworthy, cloud-based video surveillance system. In particular, we discuss the requirements of a legally acceptable video forensic system, study the current security and privacy challenges of cloud-based computing systems, and make recommendations for the design of a cloud-based video forensic system.

    Big Data SAVE: Secure Anonymous Vault Environment

    There has been great progress in taming the volume, velocity, and variety of Big Data. Its volume creates a need for increased storage space and improved data handling. Its velocity raises concerns about the speed and efficiency of applied algorithms and processes. Its variety requires flexibility to handle assorted data types. However, as with many emerging fields, security has taken a back seat to benchmarks. This has led either to retrofitting traditional security techniques ill-suited to Big Data protection, or to high-performance setups left exposed to data breaches. Proposed is an innovative storage system that can provide large-scale, low-overhead data security, akin to safe-deposit boxes. This approach allows for anonymously shared storage space, discrete levels of access, plausible deniability, and customizable degrees of overall protection (including warrant-proof). A promising factor of this new model is the use of a simple encryption algorithm (shown to be faster than industry-standard ciphers) that provides inherent attack resiliency and strong backward secrecy.
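The abstract does not describe SAVE's cipher or vault layout, so the sketch below illustrates only the general safe-deposit-box idea of anonymously shared storage: the vault indexes boxes by an opaque digest derived from a user-held secret, so the server can store and retrieve boxes without learning who owns which one. All names here are hypothetical illustrations, not the paper's design:

```python
import hashlib

def box_id(user_secret: bytes, box_label: str) -> str:
    """Derive an opaque, deterministic identifier for a vault box.
    The server indexes boxes by this digest; without the user's secret,
    the identifier reveals nothing about ownership or the label."""
    return hashlib.sha256(user_secret + box_label.encode("utf-8")).hexdigest()
```

Because the derivation is deterministic, the owner can always recompute the identifier to locate a box, while distinct labels map to unlinkable identifiers, which is one simple way to support the plausible deniability the abstract mentions.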

    Unknown Threat Detection with Honeypot Ensemble Analysis Using a Big Data Security Architecture

    The amount of data being generated continues to grow rapidly in size and complexity. Frameworks such as Apache Hadoop and Apache Spark are evolving at a rapid rate as organizations build data-driven applications to gain competitive advantages. Data analytics frameworks decompose problems so that applications can go beyond inference and help make predictions, as well as prescriptions, in real time instead of through batch processes. Information security is becoming more important to organizations as the Internet and cloud technologies become more integrated with their internal processes. The number of attacks and attack vectors has been increasing steadily over the years. Border defense measures (e.g., intrusion detection systems) are no longer enough to identify and stop attackers. Data-driven information security is not a new approach; however, there is an increased emphasis on combining heterogeneous sources to gain a broader view of the problem instead of relying on isolated systems. Stitching together multiple alerts into a cohesive system can increase the number of true positives. With the increased concern over unknown insider threats and zero-day attacks, identifying unknown attack vectors becomes more difficult. Previous research has shown that with as few as 10 commands it is possible to identify a masquerade attack against a user's profile. This thesis examines a data-driven information security architecture that relies on both behavioral analysis of SSH profiles and bad-actor data collected from an SSH honeypot to identify bad-actor attack vectors. Honeypots should collect data only from bad actors, and therefore have a high true positive rate. Using Apache Spark and Apache Hadoop, we can create a real-time, data-driven architecture that collects and analyzes new bad-actor behaviors from honeypot data and monitors legitimate user accounts to create predictive and prescriptive models. Previously unidentified attack vectors can be cataloged for review.
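The abstract cites prior work showing that as few as 10 commands can reveal a masquerade attack. As a hedged, minimal sketch of that idea (not the thesis's actual model), one can flag an SSH session whose commands fall largely outside the user's historical vocabulary; the function names and the 0.5 threshold are illustrative assumptions:

```python
from collections import Counter

def build_profile(command_history):
    """Per-user behavioral profile: frequency of commands observed in
    past SSH sessions."""
    return Counter(command_history)

def is_masquerade(profile, session, threshold=0.5):
    """Flag a session as a possible masquerade when more than `threshold`
    of its commands never appeared in the user's history. The 0.5 cutoff
    is an assumed illustration value, not a tuned parameter."""
    if not session:
        return False
    unseen = sum(1 for cmd in session if cmd not in profile)
    return unseen / len(session) > threshold
```

A real deployment would combine such per-user checks with signatures mined from honeypot sessions, since honeypot traffic provides high-confidence examples of bad-actor behavior to test against.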