1,235 research outputs found

    Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science

    (abridged for arXiv) With the first direct detection of gravitational waves, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) has initiated a new field of astronomy by providing an alternate means of sensing the universe. The extreme sensitivity required to make such detections is achieved through exquisite isolation of all sensitive components of LIGO from non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to a variety of instrumental and environmental sources of noise that contaminate the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in nature and occur at a high enough rate that accidental coincidence between the two LIGO detectors is non-negligible. In this paper we describe an innovative project that combines crowdsourcing with machine learning to aid in the challenging task of categorizing all of the glitches recorded by the LIGO detectors. Through the Zooniverse platform, we engage and recruit volunteers from the public to categorize images of glitches into pre-identified morphological classes and to discover new classes that appear as the detectors evolve. In addition, machine learning algorithms are used to categorize images after being trained on human-classified examples of the morphological classes. Leveraging the strengths of both classification methods, we create a combined method with the aim of improving the efficiency and accuracy of each individual classifier. The resulting classification and characterization should help LIGO scientists to identify causes of glitches and subsequently eliminate them from the data or the detector entirely, thereby improving the rate and accuracy of gravitational-wave observations. We demonstrate these methods using a small subset of data from LIGO's first observing run. (Comment: 27 pages, 8 figures, 1 table)
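
    The fused human-machine classification the abstract describes can be illustrated with a minimal sketch. The snippet below is an assumption rather than the paper's actual algorithm: it fuses volunteer vote counts with a machine-learning classifier's posterior over glitch classes via a simple convex combination, and the class names and weighting are purely illustrative.

```python
# Hedged sketch (not the paper's actual algorithm): one simple way to fuse
# crowd votes with a machine-learning classifier's posterior over glitch
# classes. Class names and the weighting scheme are illustrative assumptions.

import numpy as np

GLITCH_CLASSES = ["Blip", "Whistle", "Scattered_Light", "None_of_the_Above"]  # assumed labels

def combine_classifications(volunteer_votes, ml_posterior, ml_weight=0.5):
    """Fuse volunteer vote counts and an ML posterior into one distribution.

    volunteer_votes : dict mapping class name -> number of volunteer votes
    ml_posterior    : dict mapping class name -> ML classifier probability
    ml_weight       : weight given to the ML classifier in [0, 1] (assumption)
    """
    votes = np.array([volunteer_votes.get(c, 0) for c in GLITCH_CLASSES], dtype=float)
    if votes.sum() > 0:
        crowd = votes / votes.sum()                      # normalized vote fractions
    else:
        crowd = np.full(len(GLITCH_CLASSES), 1.0 / len(GLITCH_CLASSES))
    ml = np.array([ml_posterior.get(c, 0.0) for c in GLITCH_CLASSES], dtype=float)
    ml = ml / ml.sum()

    combined = (1 - ml_weight) * crowd + ml_weight * ml  # simple convex combination
    return dict(zip(GLITCH_CLASSES, combined))

if __name__ == "__main__":
    votes = {"Blip": 7, "Whistle": 2, "None_of_the_Above": 1}
    posterior = {"Blip": 0.80, "Whistle": 0.15, "Scattered_Light": 0.03, "None_of_the_Above": 0.02}
    print(combine_classifications(votes, posterior))
```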

    Assessing the Competing Characteristics of Privacy and Safety within Vehicular Ad Hoc Networks

    The introduction of Vehicle-to-Vehicle (V2V) communication has the promise of decreasing vehicle collisions, congestion, and emissions. However, this technology places safety and privacy at odds; an increase in safety applications will likely result in a decrease of consumer privacy. The National Highway Traffic Safety Administration (NHTSA) has proposed the Security Credential Management System (SCMS) as the back-end infrastructure for maintaining, distributing, and revoking the vehicle certificates attached to every Basic Safety Message (BSM). This Public Key Infrastructure (PKI) scheme is designed around the philosophy of maintaining user privacy through the separation of functions, preventing any one subcomponent from identifying users. However, because of the high precision of the data elements within each message, this design cannot prevent large-scale third-party BSM collection and pseudonym linking, resulting in privacy loss. In addition, this philosophy creates an extraordinarily complex and heavily distributed system. In response to this difficulty, this thesis proposes a data ambiguity method to bridge privacy and safety within the context of interconnected vehicles, with the objective of preserving both V2V safety applications and consumer privacy. A Vehicular Ad-Hoc Network (VANET) metric classification is introduced that explores five fundamental pillars of VANETs. These pillars (Safety, Privacy, Cost, Efficiency, Stability) are applied to four different systems: a non-V2V environment, the aforementioned SCMS, the group-pseudonym-based Vehicle Based Security System (VBSS), and VBSS with Dithering (VBSS-D), which includes the data ambiguity method of dithering. Using these evaluation criteria, the advantages and disadvantages of bringing each system to fruition are showcased.
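
    To make the data ambiguity idea concrete, the sketch below illustrates one generic form of dithering BSM data elements: bounded random noise added to high-precision fields before broadcast, degrading third-party linkability while keeping values useful for safety applications. The field names, noise model, and bounds are assumptions for illustration, not the thesis's actual parameters.

```python
# Hedged sketch of the general idea of dithering Basic Safety Message (BSM)
# data elements: perturb high-precision fields before broadcast so third-party
# collectors cannot link pseudonyms as easily. Field names, the noise model,
# and the bounds below are illustrative assumptions, not the thesis's values.

import random
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    latitude: float    # degrees
    longitude: float   # degrees
    speed: float       # m/s
    heading: float     # degrees

def dither_bsm(bsm, pos_bound_m=5.0, speed_bound=0.5, heading_bound=2.0):
    """Return a copy of the BSM with bounded uniform noise added to each field."""
    deg_per_m = 1.0 / 111_000.0  # rough degrees-per-metre conversion (assumption)
    return BasicSafetyMessage(
        latitude=bsm.latitude + random.uniform(-pos_bound_m, pos_bound_m) * deg_per_m,
        longitude=bsm.longitude + random.uniform(-pos_bound_m, pos_bound_m) * deg_per_m,
        speed=max(0.0, bsm.speed + random.uniform(-speed_bound, speed_bound)),
        heading=(bsm.heading + random.uniform(-heading_bound, heading_bound)) % 360.0,
    )

if __name__ == "__main__":
    original = BasicSafetyMessage(latitude=39.50, longitude=-84.73, speed=13.4, heading=87.0)
    print(dither_bsm(original))
```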

    Deep Q-Learning on Internet of Things System for Trust Management in Multi-Agent Environments for Smart City

    Smart Cities are vital to improving urban efficiency and citizen quality of life, given the rapid rise of the Internet of Things (IoT) and its integration into varied applications. Smart Cities are dynamic and complicated, making trust management in multi-agent systems difficult. Trust helps IoT devices and agents in smart ecosystems connect and cooperate. This study proposes using Deep Q-Learning and Bidirectional Long Short-Term Memory (Bi-LSTM) to manage trust in multi-agent Smart City settings. Deep Q-Learning and Bi-LSTM capture long-term relationships and temporal dynamics in the IoT network, enabling intelligent trust-related judgments. The architecture supports real-time trust assessment, decision-making, and response to smart city changes. The proposed solution improves dependability, security, and trustworthiness among the IoT system's networked agents. A comprehensive set of experiments using real-world IoT data within a simulated Smart City evaluates the system's performance. The Deep Q-Learning and Bi-LSTM technique surpasses existing trust management approaches in dynamic, complicated multi-agent environments. The system's capacity to adapt to changing situations and improve decision-making makes IoT device interactions more dependable and trustworthy, helping Smart Cities expand sustainably and efficiently.
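
    As a rough illustration of the architecture the abstract outlines, the sketch below (PyTorch is assumed as the framework) wraps a bidirectional LSTM state encoder inside a Deep Q-Network head that scores trust-management actions. The feature dimensions, action set, and hyperparameters are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the kind of network the abstract describes: a Deep Q-Network
# whose state encoder is a bidirectional LSTM over a window of per-agent
# interaction features, emitting Q-values over trust-management actions.
# Feature dimensions, actions, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class BiLSTMQNetwork(nn.Module):
    def __init__(self, n_features=8, hidden_size=64, n_actions=3):
        super().__init__()
        # Bi-LSTM captures the temporal dynamics of an agent's recent behaviour
        self.encoder = nn.LSTM(n_features, hidden_size,
                               batch_first=True, bidirectional=True)
        # Q-head maps the final hidden state to action values
        # (e.g. 0 = trust, 1 = monitor, 2 = distrust -- assumed action set)
        self.q_head = nn.Linear(2 * hidden_size, n_actions)

    def forward(self, x):
        # x: (batch, window_length, n_features) of observed interaction metrics
        out, _ = self.encoder(x)
        return self.q_head(out[:, -1, :])    # Q-values: (batch, n_actions)

if __name__ == "__main__":
    net = BiLSTMQNetwork()
    window = torch.randn(4, 20, 8)            # 4 agents, 20 time steps, 8 features
    q_values = net(window)
    actions = q_values.argmax(dim=1)           # greedy trust decision per agent
    print(q_values.shape, actions)
```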

    SARP Net: A Secure, Anonymous, Reputation-Based, Peer-to-Peer Network

    Since the advent of Napster, the idea of peer-to-peer (P2P) architectures being applied to file-sharing applications has become popular, spawning other P2P networks like Gnutella, Morpheus, Kazaa, and BitTorrent. This growth in P2P development has nearly eradicated the traditional client-server structure in the file-sharing model, placing the emphasis instead on faster query processing, deeper levels of decentralization, and methods to protect against copyright law violation. SARP Net is a secure, anonymous, decentralized P2P overlay network designed to protect the activity of its users in its own file-sharing community. It is secure in that public-key encryption is used to guard messages against eavesdroppers. The protocol guarantees user anonymity by incorporating message hopping from node to node to prevent any network observer from pinpointing the origin of any file query or shared-file source. To further enhance the system's security, a reputation scheme is incorporated to police nodes against malicious activity, maintain the overlay's topology, and enforce rules that protect node identity.
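
    The reputation scheme is only described at a high level, so the sketch below shows a generic exponentially weighted reputation table of the kind such overlays commonly use: peers gain or lose score with each observed interaction and are flagged for eviction once they fall below a threshold. The scoring weights and threshold are assumptions, not SARP Net's actual rules.

```python
# Hedged sketch of a node-reputation scheme of the general kind the abstract
# describes: peers accumulate a score from observed behaviour and are evicted
# from the overlay when it drops below a threshold. The update rule, weights,
# and threshold are illustrative assumptions, not SARP Net's parameters.

class ReputationTable:
    def __init__(self, initial=0.5, threshold=0.2, alpha=0.1):
        self.scores = {}          # peer_id -> reputation in [0, 1]
        self.initial = initial
        self.threshold = threshold
        self.alpha = alpha        # learning rate for updates (assumption)

    def report(self, peer_id, behaved_well):
        """Update a peer's reputation after an interaction (EWMA update)."""
        current = self.scores.get(peer_id, self.initial)
        target = 1.0 if behaved_well else 0.0
        self.scores[peer_id] = (1 - self.alpha) * current + self.alpha * target

    def misbehaving_peers(self):
        """Peers whose reputation fell below the eviction threshold."""
        return [p for p, s in self.scores.items() if s < self.threshold]

if __name__ == "__main__":
    table = ReputationTable()
    for _ in range(20):
        table.report("node_A", behaved_well=True)
        table.report("node_B", behaved_well=False)
    print(table.scores, table.misbehaving_peers())
```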

    A systematic survey of online data mining technology intended for law enforcement

    As an increasing amount of crime takes on a digital aspect, law enforcement bodies must tackle an online environment generating huge volumes of data. With manual inspections becoming increasingly infeasible, law enforcement bodies are optimising online investigations through data-mining technologies. Such technologies must be well designed and rigorously grounded, yet no survey of the online data-mining literature exists which examines their techniques, applications and rigour. This article remedies this gap through a systematic mapping study describing online data-mining literature which visibly targets law enforcement applications, using evidence-based practices in survey design to produce a replicable analysis which can be methodologically examined for deficiencies.

    A Novel Cooperative Intrusion Detection System for Mobile Ad Hoc Networks

    Mobile ad hoc networks (MANETs) have experienced rapid growth in their use for various military, medical, and commercial scenarios. This is due to their dynamic nature, which enables the deployment of such networks in any target environment without the need for a pre-existing infrastructure. On the other hand, the unique characteristics of MANETs, such as the lack of central networking points, limited wireless range, and constrained resources, have made the quest for securing such networks a challenging task. A large number of studies have focused on intrusion detection systems (IDSs) as a solid line of defense against various attacks targeting the vulnerable nature of MANETs. Since cooperation between nodes is mandatory to detect complex attacks in real time, various solutions have been proposed to provide cooperative IDSs (CIDSs) in an effort to improve detection efficiency. However, these solutions suffer from high rates of false alarms, and they violate the constrained-bandwidth nature of MANETs. To overcome these two problems, this research presented a novel CIDS utilizing the concept of social communities and the Dempster-Shafer theory (DST) of evidence. The concept of social communities was intended to establish reliable cooperative detection reporting while consuming minimal bandwidth. DST, in turn, targeted decreasing false accusations by honoring partial or absent evidence obtained solely from reliable sources. Experimental evaluation of the proposed CIDS resulted in consistently high detection rates, low false alarm rates, and low bandwidth consumption. The results of this research demonstrated the viability of applying the social communities concept combined with DST to achieve high detection accuracy and minimal bandwidth consumption throughout the detection process.
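
    The core DST operation the abstract relies on is Dempster's rule of combination, sketched below for a two-element frame of discernment ("malicious" vs. "benign"). The frame and the example mass assignments are illustrative assumptions, not values from the dissertation.

```python
# Hedged sketch of Dempster's rule of combination, the core of the
# Dempster-Shafer theory (DST) the abstract relies on: two nodes' evidence
# about a suspect node is fused into one mass function. The frame of
# discernment and the example masses are illustrative assumptions.

from itertools import product

def combine(m1, m2):
    """Combine two DST mass functions (dicts: frozenset -> mass)."""
    conflict = 0.0
    fused = {}
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in fused.items()}

if __name__ == "__main__":
    MAL, BEN = frozenset({"malicious"}), frozenset({"benign"})
    EITHER = MAL | BEN                  # ignorance: could be either
    # Node 1 has partial evidence of misbehaviour; node 2 is mostly uncertain.
    m1 = {MAL: 0.6, EITHER: 0.4}
    m2 = {MAL: 0.3, BEN: 0.1, EITHER: 0.6}
    print(combine(m1, m2))
```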

    Light-Weight Accountable Privacy Preserving Protocol in Cloud Computing Based on a Third-Party Auditor

    Cloud computing is emerging as the next disruptive utility paradigm [1]. It provides extensive storage capabilities and an environment for application developers through virtual machines, and it is also the home of software and databases that are accessible on demand. Cloud computing has drastically transformed the way organizations and individual consumers access and interact with information technology. Despite significant advancements in this technology, concerns about security are holding back businesses from fully adopting this promising trend. Third-party auditors (TPAs) are becoming more common in cloud computing implementations, yet involving auditors brings its own issues, such as trust and processing overhead. To achieve productive auditing, we need to (1) accomplish efficient auditing without requesting the data location or introducing processing overhead to the cloud client, and (2) avoid introducing new security vulnerabilities during the auditing process. There are various security models for safeguarding the Cloud Client's (CC's) data in the cloud. The TPA systematically examines the evidence of compliance with established security criteria in the connection between the CC and the Cloud Service Provider (CSP). The CSP provides clients with cloud storage and access to a database coupled with services. Many security models have been elaborated to make the TPA more reliable, so that clients can trust the third-party auditor with their data. Our study shows that involving a TPA may come with shortcomings, such as trust concerns, extra overhead, security and data-manipulation breaches, and additional processing, which leads to the conclusion that a lightweight and secure protocol is paramount. As defined in [2], privacy preservation means ensuring that the three cloud stakeholders are not involved in any malicious activities: guarding against insiders at the CSP level, remediating TPA vulnerabilities, and ensuring that the CC does not deceitfully affect other clients. In our survey phase, we reviewed the privacy-preserving solutions against the lightweight requirements in terms of processing and communication costs, and selected the most prominent ones to compare against our simulation results. In this dissertation, we introduce a novel method that can detect a dishonest TPA: the Light-weight Accountable Privacy-Preserving (LAPP) protocol. The lightweight characteristic has been demonstrated through simulations showing the minor impact of our protocol in terms of processing and communication costs. The protocol determines the malicious behavior of the TPA. To validate our proposed protocol's effectiveness, we conducted simulation experiments using the GreenCloud simulator. Based on our simulation results, we confirm that our proposed model provides better outcomes compared to the other known contending methods.
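
    The LAPP protocol itself is not specified in the abstract, so the sketch below only illustrates the general accountability idea: a cloud client keeps lightweight block digests and spot-checks the digest a TPA claims to have verified against the CSP, exposing a dishonest report. The class names and challenge scheme are assumptions for illustration, not the LAPP protocol.

```python
# Hedged sketch (not the LAPP protocol) of holding a third-party auditor (TPA)
# accountable: the cloud client retains lightweight digests of its data blocks
# and occasionally challenges the TPA, comparing the reported digest with its
# own record. Names and the challenge scheme are illustrative assumptions.

import hashlib
import secrets

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class CloudClient:
    def __init__(self, blocks):
        # the client keeps only digests, not the data itself (lightweight)
        self.expected = {i: digest(b) for i, b in enumerate(blocks)}

    def challenge(self):
        """Pick a random block index to spot-check the TPA's report."""
        return secrets.choice(list(self.expected))

    def verify_tpa_report(self, index, reported_digest):
        """True if the TPA's reported digest matches what the client expects."""
        return self.expected[index] == reported_digest

if __name__ == "__main__":
    blocks = [b"block-0 data", b"block-1 data", b"block-2 data"]
    client = CloudClient(blocks)
    idx = client.challenge()
    honest_report = digest(blocks[idx])        # what an honest TPA would relay
    dishonest_report = digest(b"tampered")     # what a dishonest TPA might claim
    print(client.verify_tpa_report(idx, honest_report),
          client.verify_tpa_report(idx, dishonest_report))
```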

    Security and Prioritization in Multiple Access Relay Networks

    In this work, we considered a multiple access relay network and investigated three problems: (1) the tradeoff between reliability and security under falsified data injection attacks; (2) prioritized analog relaying; and (3) mitigation of forwarding misbehaviors in multiple access relay networks. In the first problem, we consider a multiple access relay network where multiple sources send independent data to a single destination through multiple relays, which may inject falsified data into the network. To detect the malicious relays and discard (erase) data from them, tracing bits are embedded in the information data at each source node. Parity bits may also be added to correct the errors caused by fading and noise. When the total amount of redundancy, tracing bits plus parity bits, is fixed, an increase in parity bits to improve reliability requires a decrease in tracing bits, which leads to less accurate detection of malicious relay behavior, and vice versa. We investigate the tradeoff between the tracing bits and the parity bits in minimizing the probability of decoding error and maximizing the throughput in multi-source, multi-relay networks under falsified data injection attacks. The energy and throughput gains provided by the optimal allocation of redundancy and the tradeoff between reliability and security are analyzed. In the second problem, we consider a multiple access relay network where multiple sources send independent data simultaneously to a common destination through multiple relay nodes. We present three prioritized analog cooperative relaying schemes that provide different classes of service (CoS) to different sources while relaying them at the same time in the same frequency band. The three schemes take the channel variations into account in determining the relay encoding (combining) rule, but differ in whether or how the relays cooperate. Simulation results on the symbol error probability and outage probability are provided to show the effectiveness of the proposed schemes. In the third problem, we propose a physical-layer approach to detect a relay node that injects false data or adds channel errors into the network encoder in multiple access relay networks. The misbehaving relay is detected using the maximum a posteriori (MAP) detection rule, which is optimal in the sense of minimizing the probability of incorrect decision (false alarm and missed detection). The proposed scheme does not require sending extra bits at the source, such as hash-function or message-authentication check bits, and hence incurs no transmission overhead. Side information regarding the presence of forwarding misbehavior is exploited at the decoder to enhance the reliability of decoding. We derive the probability of false alarm, the probability of missed detection, and the probability of bit error, taking into account the lossy nature of wireless links.
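
    For the third problem, the flavor of a MAP decision can be illustrated with a minimal sketch: decide between an honest relay and a misbehaving one from the number of observed parity-check failures, using binomial likelihoods and a prior probability of misbehavior. The error-rate models and prior below are illustrative assumptions, not values derived in this work.

```python
# Hedged sketch of a maximum a posteriori (MAP) decision of the general kind
# the abstract describes: given the number of parity-check failures observed
# over n relayed symbols, decide whether the relay is forwarding honestly or
# injecting errors. The error-rate models and prior are illustrative assumptions.

from math import comb, log

def binomial_log_pmf(k, n, p):
    return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

def map_detect_misbehavior(k_failures, n_symbols,
                           p_honest=0.02,      # channel-only error rate (assumed)
                           p_malicious=0.20,   # error rate under injection (assumed)
                           prior_malicious=0.1):
    """Return True if 'misbehaving' has the larger posterior probability."""
    log_post_honest = log(1 - prior_malicious) + binomial_log_pmf(k_failures, n_symbols, p_honest)
    log_post_malicious = log(prior_malicious) + binomial_log_pmf(k_failures, n_symbols, p_malicious)
    return log_post_malicious > log_post_honest

if __name__ == "__main__":
    print(map_detect_misbehavior(k_failures=3, n_symbols=100))   # False: consistent with an honest relay
    print(map_detect_misbehavior(k_failures=18, n_symbols=100))  # True: flagged as misbehaving
```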