
    Towards Semantic Integration of Heterogeneous Sensor Data with Indigenous Knowledge for Drought Forecasting

    In the Internet of Things (IoT) domain, heterogeneous ubiquitous devices are expected to connect and communicate with each other seamlessly, irrespective of domain. Semantic representation of data through detailed, standardized annotation has been shown to improve the integration of interconnected heterogeneous devices. However, the semantic representation of these heterogeneous data sources is not yet well supported for environmental monitoring systems. To achieve the maximum benefit of IoT for drought forecasting, a dedicated semantic middleware solution is required. This research proposes a middleware that semantically represents and integrates heterogeneous data sources with indigenous knowledge, based on a unified ontology, for an accurate IoT-based drought early warning system (DEWS).
    Comment: 5 pages, 3 figures. In Proceedings of the Doctoral Symposium of the 16th International Middleware Conference (Middleware Doct Symposium 2015), Ivan Beschastnikh and Wouter Joosen (Eds.). ACM, New York, NY, USA
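
    As a rough illustration of the kind of semantic annotation such a middleware could perform, the Python sketch below uses rdflib to describe both a sensor reading and an indigenous drought indicator under one shared vocabulary. The ontology namespace and every term name here are hypothetical stand-ins, not the paper's actual unified ontology.

        from rdflib import Graph, Literal, Namespace, RDF

        DEWS = Namespace("http://example.org/dews#")  # hypothetical unified ontology

        g = Graph()
        g.bind("dews", DEWS)

        # A soil-moisture reading from a heterogeneous IoT sensor...
        obs = DEWS["obs1"]
        g.add((obs, RDF.type, DEWS.SensorObservation))
        g.add((obs, DEWS.observedProperty, DEWS.SoilMoisture))
        g.add((obs, DEWS.hasValue, Literal(0.12)))

        # ...and an indigenous drought indicator annotated with the same vocabulary.
        ik = DEWS["ik1"]
        g.add((ik, RDF.type, DEWS.IndigenousIndicator))
        g.add((ik, DEWS.observedProperty, DEWS.SoilMoisture))
        g.add((ik, DEWS.hasDescription, Literal("early flowering of acacia trees")))

        print(g.serialize(format="turtle"))

    Because both records share the same observedProperty term, a downstream forecaster could query sensor data and indigenous knowledge uniformly, for example with a single SPARQL query over the graph.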

    Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization

    Logs are one of the most fundamental resources for any security professional. It is widely recognized by government and industry that sharing logs for the purpose of security research is both beneficial and desirable. However, such sharing is not happening, or at least not to the degree or magnitude desired. Organizations are reluctant to share logs because of the risk of exposing sensitive information to potential attackers. We believe this reluctance remains high because current anonymization techniques are weak and one-size-fits-all (or, better put, one size tries to fit all). We must develop standards and make anonymization available at varying levels, striking a balance between privacy and utility. Organizations have different needs and trust other organizations to different degrees. They must be able to map multiple anonymization levels, with defined risks, to the trust levels they share with (would-be) receivers. Not until there are industry standards for multiple levels of anonymization will we be able to move forward and achieve the goal of widespread log sharing for security researchers.
    Comment: 17 pages, 1 figure
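
    As a hedged sketch of what "anonymization at varying levels" might look like in practice, the snippet below maps IPv4 addresses in a log to progressively coarser forms depending on the receiver's trust level. The four levels and the keyed-hash scheme are illustrative assumptions, not a proposed standard.

        import hashlib
        import hmac
        import ipaddress

        SECRET = b"site-specific-key"  # hypothetical per-organization secret

        def anonymize_ip(ip: str, level: int) -> str:
            """Map an IPv4 address to a coarser form; higher level = more privacy."""
            addr = ipaddress.ip_address(ip)
            if level == 0:  # fully trusted receiver: raw address
                return str(addr)
            if level == 1:  # partial trust: keep only the /24 prefix
                net = ipaddress.ip_network(f"{addr}/24", strict=False)
                return f"{net.network_address}/24"
            if level == 2:  # low trust: keep only the /16 prefix
                net = ipaddress.ip_network(f"{addr}/16", strict=False)
                return f"{net.network_address}/16"
            # untrusted: keyed pseudonym, still consistent within one data set
            return hmac.new(SECRET, str(addr).encode(), hashlib.sha256).hexdigest()[:12]

        print(anonymize_ip("192.0.2.77", 1))  # "192.0.2.0/24"
        print(anonymize_ip("192.0.2.77", 3))  # stable pseudonym, no prefix leaked

    The design point is that level 3 still permits correlation of events from the same host (utility) without revealing which host it was (privacy), while levels 1 and 2 trade some privacy back for topological context.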

    Reducing risky security behaviours: utilising affective feedback to educate users

    Despite the number of tools created to help end-users reduce risky security behaviours, users are still falling victim to online attacks. This paper proposes a browser extension that utilises affective feedback to provide warnings on detection of risky behaviour. The paper provides an overview of behaviour considered risky, explaining potential threats users may face online. Existing tools developed to reduce risky security behaviours in end-users are compared, and the success rates of various methodologies are discussed. Ongoing research is described which attempts to educate users about the risks and consequences of poor security behaviour by providing appropriate feedback upon automatic recognition of risky behaviour. The paper concludes that a browser extension is a suitable method of monitoring potentially risky security behaviour. Ultimately, future work seeks to implement an affective feedback mechanism within the browser extension, with the aim of improving security awareness.
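
    A minimal sketch of the feedback logic such an extension might implement, assuming a hypothetical set of detectable behaviours and affective tones (neither is specified in the paper):

        # Hypothetical mapping from a detected risky behaviour to an
        # affective tone and a warning message shown to the user.
        RISKY_BEHAVIOURS = {
            "reused_password": ("anxious", "This password was used on another site."),
            "http_login_form": ("concerned", "This login page is not encrypted."),
            "suspicious_link": ("alarmed", "This link resembles a phishing domain."),
        }

        def feedback_for(behaviour: str) -> tuple[str, str]:
            """Return (affective tone, warning text) for a detected behaviour."""
            return RISKY_BEHAVIOURS.get(behaviour, ("neutral", "No risk detected."))

        print(feedback_for("http_login_form"))  # ('concerned', 'This login page ...')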

    How representative are the measurement data of a honeynet?

    For the early detection of critical network phenomena, many kinds of distributed sensor networks have been established and studied on the Internet in the past. We consider the phenomenon of "distribution of malicious software in the network", which can be measured at selected points with, for example, the InMAS sensor system. It has, however, always been unclear how representative the data collected by such a sensor network are. This document describes a methodological framework with which measures of representativeness can be attached to measurements from malware sensor networks. Techniques from empirical social research were used as the methodological approach. The result is that a sensor network with at least 100 sensors distributed randomly across the network range appears necessary before any reliable statements about the representativeness of sensor-network measurements can be made at all.
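
    One plausible reading of where a figure of roughly 100 sensors could come from is the standard sample-size formula from empirical social research for estimating a population proportion. The margin of error and confidence level below are assumptions chosen for illustration, not taken from the document.

        from math import ceil

        def sample_size(margin: float = 0.10, z: float = 1.96, p: float = 0.5) -> int:
            """Minimum simple-random-sample size for estimating a proportion
            at ~95% confidence (z = 1.96) within the given margin of error,
            using the worst case p = 0.5."""
            return ceil(z * z * p * (1 - p) / (margin * margin))

        print(sample_size())  # 97, i.e. on the order of 100 randomly placed sensors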

    Collaborative internet worm containment

    Large-scale worm outbreaks that lead to distributed denial-of-service attacks pose a major threat to Internet infrastructure security. To protect computers from such attacks, the deployment of fast, scalable security overlay networks based on distributed hash tables is proposed, facilitating high-speed intrusion detection and alert-information exchange. An effective system for worm detection and cyberspace defence must be robust, cooperative across multiple sites, responsive to unexpected worms, efficient and scalable. Deploying collaborative WormShield monitors on just 1 percent of the vulnerable edge networks can detect worm signatures roughly 10 times faster than independent monitors.
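
    The abstract does not detail WormShield's internals, but a toy sketch of DHT-style collaborative aggregation, in which each monitor owns a slice of the signature key space and sums the counts reported by all sites, might look as follows. All names and thresholds are hypothetical.

        import hashlib

        class SignatureDHT:
            """Toy distributed hash table: each monitor owns a slice of the
            signature key space and aggregates global counts for its slice."""

            def __init__(self, monitors: list[str]):
                self.monitors = monitors
                self.counts = {m: {} for m in monitors}

            def _owner(self, sig: str) -> str:
                # Consistent placement: hash the signature to pick its owner.
                h = int(hashlib.sha1(sig.encode()).hexdigest(), 16)
                return self.monitors[h % len(self.monitors)]

            def report(self, sig: str, local_count: int) -> None:
                slot = self.counts[self._owner(sig)]
                slot[sig] = slot.get(sig, 0) + local_count

            def suspicious(self, threshold: int):
                for slot in self.counts.values():
                    for sig, count in slot.items():
                        if count >= threshold:
                            yield sig, count

        # Example: three monitors pooling local observations of one signature.
        dht = SignatureDHT(["monitor-a", "monitor-b", "monitor-c"])
        dht.report("deadbeef", 4)
        dht.report("deadbeef", 7)
        print(list(dht.suspicious(threshold=10)))  # [('deadbeef', 11)]

    Pooling counts this way is what lets a collaborative deployment cross a detection threshold long before any single site would, which is the intuition behind the roughly 10x speed-up claim.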

    An Innovative Signature Detection System for Polymorphic and Monomorphic Internet Worms Detection and Containment

    Most current anti-worm systems and intrusion-detection systems use signature-based technology instead of anomaly-based technology. Signature-based technology can only detect known attacks with identified signatures. Existing anti-worm systems cannot detect unknown Internet scanning worms automatically because these systems depend not on worm behaviour but on the worm's signature. Most detection algorithms used in current detection systems target only monomorphic worm payloads and offer no defence against polymorphic worms, which change their payloads dynamically. Anomaly detection systems can detect unknown worms but usually suffer from a high false-alarm rate. Detecting unknown worms is challenging, and worm defence must be automated because worms spread quickly and can flood the Internet in a short time. This research proposes an accurate, robust and fast technique to detect and contain Internet worms, both monomorphic and polymorphic. The detection technique uses specific failed-connection statuses on specific protocols, such as UDP, TCP, ICMP, TCP slow scanning and stealth scanning, as characteristics of the worms, whereas the containment uses the flags and labels of the segment header and the source and destination ports to generate the worms' traffic signature. Experiments using eight different worms (monomorphic and polymorphic) in a testbed environment were conducted to verify the performance of the proposed technique. The results showed that the proposed technique could detect stealth scanning up to 30 times faster than a previously proposed technique, with no false-positive alarms in any scanning-detection case. The experiments also showed that the technique was capable of containing the worms because of the uniqueness of the traffic signature.
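
    A minimal sketch of the failed-connection idea: count failure outcomes per source over a time window and flag likely scanning worms. The status labels and threshold below are assumptions for illustration, not the paper's exact feature set.

        from collections import defaultdict

        # Hypothetical failure statuses a sensor might record per connection attempt.
        FAILURE_STATUSES = {"TCP_RST", "TCP_SYN_TIMEOUT", "ICMP_PORT_UNREACH", "UDP_NO_REPLY"}

        def detect_scanners(events, threshold: int = 20) -> dict:
            """events: iterable of (src_ip, status) pairs from one time window.
            A benign host fails occasionally; a scanning worm probing random
            addresses accumulates failures far faster, so a simple count works."""
            failures = defaultdict(int)
            for src, status in events:
                if status in FAILURE_STATUSES:
                    failures[src] += 1
            return {src: n for src, n in failures.items() if n >= threshold}

        window = [("10.0.0.5", "TCP_SYN_TIMEOUT")] * 25 + [("10.0.0.9", "TCP_OK")]
        print(detect_scanners(window))  # {'10.0.0.5': 25}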

    Feature trade-off analysis for reconnaissance detection.

    An effective cyber early warning system (CEWS) should pick up threat activity at an early stage, with an emphasis on establishing hypotheses and predictions as well as generating alerts on (unclassified) situations based on preliminary indications. The design and implementation of such early warning systems involve numerous challenges, such as a generic set of indicators, intelligence gathering, uncertainty reasoning and information fusion. This chapter begins with an understanding of intruder behaviours, reviews the related literature, and then presents the proposed methodology using a Bayesian inference-based system. It also includes a carefully deployed empirical analysis on a data set labelled for reconnaissance activity. Finally, the chapter concludes with a discussion of the results, the research challenges, and suggestions for moving this line of research forward.
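
    A naive-Bayes style update is one simple instance of Bayesian inference over such indicators; the chapter's actual model is not specified here, and the prior and per-indicator likelihoods below are invented for illustration.

        def posterior_recon(prior: float, likelihoods: dict, observed: list) -> float:
            """P(recon | observed indicators) under a naive (conditional
            independence) assumption. likelihoods maps each indicator to
            (P(indicator | recon), P(indicator | benign))."""
            p_recon, p_benign = prior, 1.0 - prior
            for indicator in observed:
                l_recon, l_benign = likelihoods[indicator]
                p_recon *= l_recon
                p_benign *= l_benign
            return p_recon / (p_recon + p_benign)

        # Hypothetical indicators: a port sweep plus DNS zone-transfer attempts.
        likelihoods = {"port_sweep": (0.7, 0.05), "zone_transfer": (0.4, 0.01)}
        print(posterior_recon(0.01, likelihoods, ["port_sweep", "zone_transfer"]))
        # ~0.85: a 1% prior climbs steeply once two strong indicators co-occur.

    This captures the chapter's trade-off directly: each indicator's likelihood ratio controls how much evidence it contributes, so feature selection is a balance between early detection (sensitive indicators) and false alarms (weakly discriminating ones).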

    Exploratory study to explore the role of ICT in the process of knowledge management in an Indian business environment

    In the 21st century, with the emergence of a digital economy, knowledge and the knowledge-based economy are growing rapidly. The ability to understand the processes involved in creating, managing and sharing knowledge in the business environment is critical to the success of an organization. This study builds on the authors' previous research on the enablers of knowledge management by identifying the relationship between those enablers and the role played by information and communication technologies (ICT) and ICT infrastructure in a business setting. This paper presents the findings of a survey collected in four major Indian cities (Chennai, Coimbatore, Madurai and Villupuram) regarding views and opinions about the enablers of knowledge management in a business setting. A total of 80 organizations participated in the study, with 100 participants in each city. The results show that ICT and ICT infrastructure can play a critical role in the creating, managing and sharing of knowledge in an Indian business environment.

    Shadow Honeypots

    We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network or service. Traffic that is considered anomalous is processed by a "shadow honeypot" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular ("production") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified is validated by the shadow and handled correctly by the system, transparently to the end user. The outcome of processing a request with the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable instrumentation overhead in the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
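
    A toy sketch of the dispatch logic: anomalous requests are first run against a shadow copy, attacks that trip the instrumentation are dropped along with their state changes, and misclassified legitimate requests fall through to the production instance. Note that in the paper the shadow shares state with production and rolls changes back; the deep copy here is a crude, much more expensive stand-in for that mechanism, and all names are hypothetical.

        import copy

        class ShadowDispatcher:
            """Route suspicious requests through an instrumented shadow first."""

            def __init__(self, app_state, handler, detector):
                self.state = app_state    # production application state
                self.handler = handler    # handler(state, request) -> response
                self.detector = detector  # detector(request) -> True if anomalous

            def serve(self, request):
                if self.detector(request):
                    shadow_state = copy.deepcopy(self.state)  # isolate side effects
                    try:
                        # Instrumented run: an attack raises inside the shadow.
                        self.handler(shadow_state, request)
                    except Exception:
                        # Attack caught; shadow_state (and its changes) discarded.
                        return "rejected"
                    # Shadow validated the request: it was a false positive.
                # Normal (or validated) path: handle against production state.
                return self.handler(self.state, request)

    The key design consequence matches the abstract: because the shadow cheaply absorbs false positives, the anomaly detector in front of it can be tuned aggressively for recall rather than precision.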