376 research outputs found

    Deterministic meeting of sniffing agents in the plane

    Full text link
    Two mobile agents, starting at arbitrary, possibly different times from arbitrary locations in the plane, have to meet. Agents are modeled as discs of diameter 1, and meeting occurs when these discs touch. Agents have different labels which are integers from the set {0, ..., L-1}. Each agent knows L and knows its own label, but not the label of the other agent. Agents are equipped with compasses and have synchronized clocks. They make a series of moves. Each move specifies the direction and the duration of moving. This includes a null move, which consists in staying inert for some time, or forever. In a non-null move agents travel at the same constant speed, normalized to 1. We assume that agents have sensors enabling them to estimate the distance from the other agent (defined as the distance between the centers of the discs), but not the direction towards it. We consider two models of estimation. In both models an agent reads its sensor at the moment of its appearance in the plane and then at the end of each move. This reading (together with the previous ones) determines the decision concerning the next move. In both models the reading of the sensor tells the agent if the other agent is already present. Moreover, in the monotone model, each agent can find out, for any two readings at moments t1 and t2, whether the distance from the other agent at time t1 was smaller than, equal to, or larger than at time t2. In the weaker binary model, each agent can find out, at any reading, whether it is at distance less than ρ or at distance at least ρ from the other agent, for some real ρ > 1 unknown to them. Such a distance estimation mechanism can be implemented, e.g., using chemical sensors. Each agent emits some chemical substance (scent), and the sensor of the other agent detects it, i.e., sniffs. The intensity of the scent decreases with the distance. Comment: A preliminary version of this paper appeared in the Proc. 23rd International Colloquium on Structural Information and Communication Complexity (SIROCCO 2016), LNCS 998
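
    As a hedged illustration only, the following sketch simulates the binary sensing model described above: agents appear at given times, execute moves specified by a direction and a duration, and after each move read a sensor that reports only presence and whether the other agent is closer than some threshold ρ > 1. The concrete value of ρ, the labels and the coordinates are arbitrary assumptions, and this is not the paper's rendezvous algorithm.

```python
import math

RHO = 2.7          # the threshold rho > 1; unknown to the agents themselves

class Agent:
    def __init__(self, label, L, start, appear_time):
        self.label = label            # integer in {0, ..., L-1}
        self.L = L
        self.pos = start              # (x, y), centre of the unit-diameter disc
        self.appeared = appear_time   # when the agent shows up in the plane
        self.time = appear_time       # the agent's synchronized clock

    def move(self, direction, duration, speed=1.0):
        """One move: a direction (radians) and a duration.
        A null move (staying inert) corresponds to speed 0 for some duration."""
        self.pos = (self.pos[0] + speed * duration * math.cos(direction),
                    self.pos[1] + speed * duration * math.sin(direction))
        self.time += duration

def binary_reading(reader, other, rho=RHO):
    """Sensor reading in the binary model: presence, and closer-than-rho."""
    if reader.time < other.appeared:          # other agent not yet present
        return {"present": False, "close": None}
    dist = math.dist(reader.pos, other.pos)   # distance between disc centres
    return {"present": True, "close": dist < rho, "met": dist <= 1.0}

a = Agent(label=0, L=2, start=(0.0, 0.0), appear_time=0.0)
b = Agent(label=1, L=2, start=(5.0, 0.0), appear_time=0.0)
a.move(direction=0.0, duration=3.0)           # head east for 3 time units
print(binary_reading(a, b))   # {'present': True, 'close': True, 'met': False}
```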

    Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization

    Get PDF
    Network softwarization plays a significant role in the development and deployment of the latest communication systems for 5G and beyond. A more flexible and intelligent network architecture can be enabled to support agile network management and the rapid launch of innovative network services, with substantial reductions in Capital Expense (CAPEX) and Operating Expense (OPEX). Despite these benefits, the 5G system also raises unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, State of the Art (STOA) technologies and systems are not able to achieve the one-millisecond end-to-end latency requirement of the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation provides a comprehensive introduction to three innovative approaches that can be used to improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented and rigorously evaluated. According to the measurement results, these novel approaches advance the research on the design and implementation of an ultra-reliable, low-latency, energy-efficient and computing-first software data plane for 5G communication systems and beyond

    Studies into Amphetamine-Induced Unconditioned Behaviour in the Rat.

    Get PDF
    Previous work on the unconditioned effects of amphetamine in rats has examined qualitative changes in behaviours which become stereotyped and quantitative changes in locomotion. Stereotyped behaviours have been adopted as a model of raised caudate-putamen function, whilst locomotion has been adopted as a model of raised mesolimbic dopamine function. These models have been used to study drugs which are effective in the treatment of schizophrenia. Only locomotion is reliably antagonised by all classes of antipsychotic drugs, although it has been hypothesised that, under some doses of amphetamine, locomotion may also become stereotyped. The Lyon-Robbins hypothesis of the behavioural effects of amphetamine predicts competition between the outputs of the mesolimbic system and the caudate-putamen, and would predict that stereotyped locomotion represents a 'blending' of mesolimbic and caudate-putamen behavioural output. An experiment was conducted to test the Lyon-Robbins hypothesis using contrast-based image analysis to determine the spatio-temporal characteristics of open-field locomotion. A further four experiments examined the effects of a classic antipsychotic (haloperidol), the atypical antipsychotics (clozapine and sulpiride) and a putative antipsychotic (a 5-HT3 antagonist, ondansetron) on open-field locomotor routes taken by rats following treatment with 3.5 mg/kg amphetamine. Measures of stereotyped locomotion derived from image analysis were supported by a novel form of behavioural analysis based on multi-dimensional scaling, which provided an integrated analysis of behavioural change following drug treatment. Haloperidol blocked locomotion and stereotyped behaviours including stereotyped locomotion, whereas clozapine, sulpiride and ondansetron blocked locomotion but not stereotyped locomotion and in some cases increased stereotyped behaviours. This suggests that stereotyped locomotion represents synergistic functioning of both the mesolimbic and caudate-putamen systems, when the output from the caudate-putamen is insufficient to over-ride that of the mesolimbic system. Antagonism of a 5-HT3 enhancement of mesolimbic locomotor activity by ondansetron allowed latent 5-HT and dopamine mediated behaviours to be expressed. This effectively mimicked a leftwards shift of the amphetamine dose-response curve, hypothesised as amplification of the caudate-putamen output. These findings lend support to the Lyon-Robbins hypothesis of the behavioural effects of amphetamine

    Aeronautical Engineering: A continuing bibliography with indexes (supplement 207)

    Get PDF
    This bibliography lists 484 reports, articles and other documents introduced into the NASA scientific and technical information system in November 1986

    Decentralized and adaptive sensor data routing

    Get PDF
    Wireless sensor networks (WSNs) have been attracting research effort due to the rapidly increasing number of applications in military and civilian fields. An important issue in wireless sensor networks is how to send information in an efficient and adaptive way. Information can be sent directly back to the base station or through a sequence of intermediate nodes. In the latter case, it becomes the problem of routing. Current routing protocols can be categorized into two groups, namely table-driven (proactive) routing protocols and source-initiated on-demand (reactive) routing protocols. For ad hoc wireless sensor networks, routing protocols must deal with some unique constraints such as energy conservation, low bandwidth, high error rates and unpredictable topology, which wired networks might not possess. Thus, a routing protocol that is energy-efficient, self-adaptive and error-tolerant is highly demanded. A new peer-to-peer (P2P) routing notion based on the theory of cellular automata has been put forward to solve this problem. We proposed two different models, namely a Spin Glass (physics) inspired model and a Multi-fractal (chemistry) inspired model. Our new routing models are distributed in computation and self-adaptive to topological disturbance. These merits not only save a significant amount of communication and computation cost but also adapt well to the highly volatile environment of ad hoc WSNs. With the cellular automata Cantor modeling tool, we implemented two dynamic link libraries (DLLs) in C++ and the corresponding graphic display procedures in Tcl/Tk. Results of each model's routing ability are discussed, and we hope they will lead to new peer-to-peer algorithms which can combine the advantages of the current models
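
    As a purely illustrative, hedged sketch (not the Spin Glass or Multi-fractal models proposed above), the following code shows one way a cellular-automaton style rule can yield decentralized, self-adaptive routing: each sensor cell repeatedly relaxes a hop-count potential toward the sink using only its neighbours' values, and packets greedily descend that potential, so routes re-form on their own after a topological disturbance. The grid size, failure pattern and number of relaxation steps are arbitrary assumptions.

```python
import numpy as np

INF = 10**6

def relax(potential, alive, sink):
    """One synchronous CA step: every living cell takes min(neighbour) + 1."""
    rows, cols = potential.shape
    new = potential.copy()
    for r in range(rows):
        for c in range(cols):
            if not alive[r, c]:
                new[r, c] = INF
            elif (r, c) == sink:
                new[r, c] = 0
            else:
                neigh = [potential[r + dr, c + dc]
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols
                         and alive[r + dr, c + dc]]
                new[r, c] = min(neigh) + 1 if neigh else INF
    return new

def route(potential, alive, src, sink):
    """Greedy descent of the potential field from src to sink."""
    path, cur = [src], src
    while cur != sink:
        r, c = cur
        cand = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cand = [p for p in cand
                if 0 <= p[0] < alive.shape[0] and 0 <= p[1] < alive.shape[1]
                and alive[p] and potential[p] < potential[cur]]
        if not cand:
            return None                      # no route (partitioned network)
        cur = min(cand, key=lambda p: potential[p])
        path.append(cur)
    return path

# 10x10 grid of sensors, sink in a corner; then kill a few nodes and re-relax.
alive = np.ones((10, 10), dtype=bool)
potential = np.full((10, 10), INF)
sink = (0, 0)
for _ in range(40):                          # enough steps to converge
    potential = relax(potential, alive, sink)
print(route(potential, alive, (9, 9), sink))

alive[4, :8] = False                         # topological disturbance
for _ in range(40):
    potential = relax(potential, alive, sink)
print(route(potential, alive, (9, 9), sink)) # route adapts around the failure
```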

    Active Response Using Host-Based Intrusion Detection System and Software-Defined Networking

    Get PDF
    This research proposes AHNSR: Active Host-based Network Security Response, which utilizes Host-based Intrusion Detection Systems (HIDS) with Software-Defined Networking (SDN) to enhance system security by allowing dynamic active response and reconstruction from a global network topology perspective. Responses include traffic redirection, host quarantining, filtering, and more. A testable SDN-controlled network is constructed with multiple hosts, OpenFlow-enabled switches, and a Floodlight controller, all linked to a custom, novel interface for the Open-Source SECurity (OSSEC) HIDS framework. OSSEC is implemented in a server-agent architecture, allowing scalability and OS independence. System effectiveness is evaluated against the following factors: alert density and the selected Floodlight module response types. At the expected operational load of 500 events per second (EPS), results reveal a mean system response time of 0.5564 seconds from log generation to flow table update via Floodlight's Access Control List module. Load testing further assesses performance at 10 to 10,000 EPS for all tested response modules
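
    As a hedged sketch of the kind of active response AHNSR automates, the following OSSEC active-response style script quarantines an offending host by POSTing a deny rule to Floodlight's Access Control List REST endpoint. The endpoint path and JSON field names follow Floodlight's ACL module documentation (/wm/acl/rules/json with "src-ip" and "action" fields) but should be treated as assumptions for other controller versions; the controller address and the way the offending IP is parsed from the alert are likewise illustrative assumptions, not details taken from the thesis.

```python
import json
import sys
import urllib.request

FLOODLIGHT = "http://127.0.0.1:8080"        # Floodlight REST API base URL (assumed)

def quarantine(src_ip):
    """Install an ACL rule that drops all traffic from src_ip."""
    rule = {"src-ip": f"{src_ip}/32", "action": "deny"}
    req = urllib.request.Request(
        f"{FLOODLIGHT}/wm/acl/rules/json",
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # OSSEC typically invokes active-response scripts with the action, user and
    # source IP as arguments; adjust the index to your own configuration.
    offending_ip = sys.argv[3] if len(sys.argv) > 3 else "10.0.0.4"
    print(quarantine(offending_ip))
```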

    Double-edged Sword: An in-depth analysis of browser fingerprints for web tracking and bot defenses

    Get PDF
    The use of browser fingerprints is a double-edged sword. While they can protect users' security by defending against bot and fraud attacks, they can also be used for web tracking, which violates individual users' privacy. While acknowledging that the use of browser fingerprints is a double-edged sword, my work aims to achieve a balance that maximizes the benefits and minimizes the risks. In the following section, I provide a comprehensive examination of browser fingerprints and demonstrate how they can be improved to better protect users' privacy while still providing a valuable tool for defending against bot and fraud attacks

    Towards a framework for the implementation of a secure quantum teleportation infrastructure in South Africa

    Get PDF
    Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2019. The availability of high-speed, high-volume Data Link Layer (Layer 2) transmission networks, fuelled by the implementation of mission-critical and performance-intensive technologies such as cloud and data centre services transmitting sensitive data over the wide area network (WAN), has shifted the attention of hackers, eavesdroppers, cyber-criminals and other malicious attackers to the exploitation of these data transmission technologies. It is argued that security in the current classical technologies that store, transmit and manipulate information at OSI Layer 2 has historically not been adequately addressed when it comes to secure communication and exchange of information. Quantum teleportation (QT), stemming from quantum communication, a branch of quantum information science (QIS), has emerged as a technology that promises unconditional security and provides new ways to design and develop frameworks that operate based on the laws of quantum physics. It is argued that it has the potential to address the data transmission security gap for OSI Layer 2 technologies. This research study aims to propose a framework for the implementation of secure quantum teleportation infrastructures in South Africa. There is currently a lack of generic models and methods to guide the implementation of QT infrastructures that will enable secure transmission of information. A design science research (DSR) approach was undertaken in order to develop a secure quantum teleportation artefact called the SecureQT-Framework. The SecureQT-Framework is a generic model and method that guides the selection and implementation of QT infrastructures, motivated by multi-disciplinary domains such as QIS, quantum physics, computer science, and information and communication technology (ICT). The DSR process employed a primary DSR cycle with four DSR sub-cycles, involving an awareness and suggestion phase guided by a systematic literature review (SLR) and a development and evaluation phase guided by Software-Defined Networking's OpenFlow, Mininet, Mininet-WiFi and computer simulations of QT using the SQUANCH framework. We investigated, examined and collected credible QT techniques and their variant protocols to develop and simulate secure transmission of information over the WAN, and we studied their features and challenges. We concluded the study by describing the QT techniques, protocols and implementations that have the potential to bridge the security gap for OSI Layer 2 technologies over the WAN. The results gained were used in the construction of a framework for the implementation of a secure quantum teleportation infrastructure in South Africa. The framework describes the main factors that need to be taken into consideration when implementing quantum teleportation infrastructures
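
    For readers unfamiliar with the protocol this framework builds on, the following is a minimal, self-contained simulation of standard single-qubit quantum teleportation using dense state vectors in NumPy. It is a generic textbook illustration only, not the SecureQT-Framework or its SQUANCH/Mininet simulations, and the message amplitudes and qubit ordering are arbitrary assumptions.

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """Dense CNOT on an n-qubit register (qubit 0 is the most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1
    return U

# Qubit 0: Alice's message, qubit 1: Alice's Bell half, qubit 2: Bob's Bell half.
a, b = 0.6, 0.8j                               # arbitrary message amplitudes
psi = np.array([a, b], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                     # full 3-qubit state

# Alice: CNOT(0 -> 1), then H on qubit 0
state = cnot(0, 1) @ state
state = kron(H, I, I) @ state

# Alice measures qubits 0 and 1 (sample one outcome branch)
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto the measured branch and extract Bob's qubit
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
branch = state * mask
branch /= np.linalg.norm(branch)
bob = np.array([branch[(m0 << 2) | (m1 << 1) | 0],
                branch[(m0 << 2) | (m1 << 1) | 1]])

# Bob applies the classically communicated corrections: X^m1 then Z^m0
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("original:", psi, " teleported:", bob)   # matches the original state
```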

    Deep learning: enhancing the security of software-defined networks

    Get PDF
    Software-defined networking (SDN) is a communication paradigm that promotes network flexibility and programmability by separating the control plane from the data plane. SDN consolidates the logic of network devices into a single entity known as the controller. SDN raises significant security challenges related to its architecture and associated characteristics such as programmability and centralisation. Notably, security flaws pose a risk to controller integrity, confidentiality and availability. The SDN model introduces separation of the forwarding and control planes. It detaches the control logic from switching and routing devices, forming a central plane or network controller that facilitates communications between applications and devices. The architecture enhances network resilience, simplifies management procedures and supports network policy enforcement. However, it is vulnerable to new attack vectors that can target the controller. Current security solutions rely on traditional measures such as firewalls or intrusion detection systems (IDS). An IDS can use two different approaches: signature-based or anomaly-based detection. The signature-based approach is incapable of detecting zero-day attacks, while anomaly-based detection has high false-positive and false-negative alarm rates. Inaccurate detection may have significant consequences, specifically for threats that target the controller. Thus, improving the accuracy of the IDS will enhance controller security and, subsequently, SDN security. A centralised network entity that controls the entire network is a primary target for intruders. The controller is located at a central point between the applications and the data plane and has two interfaces for plane communications, known as northbound and southbound, respectively. Communications between the controller, the application plane and the data plane are prone to various types of attacks, such as eavesdropping and tampering. The controller software is vulnerable to attacks such as buffer and stack overflow, which enable remote code execution that can result in attackers taking control of the entire network. Additionally, traditional network attacks become more destructive when directed at the controller. This thesis introduces a threat detection approach aimed at improving the accuracy and efficiency of the IDS, which is essential for controller security. To evaluate the effectiveness of the proposed framework, an empirical study of SDN controller security was conducted to identify, formalise and quantify security concerns related to the SDN architecture. The study explored the threats related to the SDN architecture, specifically threats originating from the existence of the control plane. The framework comprises two stages, involving the use of deep learning (DL) algorithms and clustering algorithms, respectively. DL algorithms were used to reduce the dimensionality of inputs, which were forwarded to clustering algorithms in the second stage. Features were compressed to a single value, simplifying and improving the performance of the clustering algorithm. Rather than using the output of the neural network, the framework presented a unique technique for dimensionality reduction that used a single value, the reconstruction error, for the entire input record. The use of a DL algorithm in the pre-training stage contributed to solving the problem of dimensionality related to k-means clustering. Using unsupervised algorithms facilitated the discovery of new attacks. 
Further, this study compares generative energy-based models (restricted Boltzmann machines) with non-probabilistic models (autoencoders). The study implements TensorFlow in four scenarios. Simulation results were statistically analysed using a confusion matrix, and were evaluated and compared with similar related works. The proposed framework, which was adapted from existing similar approaches, produced promising outcomes and may provide a robust prospect for deployment in modern threat detection systems in SDN. The framework was implemented using TensorFlow and benchmarked on the KDD99 dataset. Simulation results showed that the use of the DL algorithm to reduce dimensionality significantly improved detection accuracy and reduced false-positive and false-negative alarm rates. Extensive simulation studies on benchmark tasks demonstrated that the proposed framework consistently outperforms all competing approaches. This improvement is a further step towards the development of a reliable IDS to enhance the security of SDN controllers
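
    As a hedged sketch of the two-stage idea described above, the following code trains a small Keras autoencoder on numeric, KDD99-style feature vectors, compresses each record to a single value (its reconstruction error), and then clusters those values with k-means so the high-error cluster can be flagged as anomalous. The layer sizes, number of clusters and the random stand-in data are assumptions, not the thesis's exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

def build_autoencoder(n_features):
    """A small dense autoencoder trained to reconstruct its own input."""
    inp = tf.keras.Input(shape=(n_features,))
    h = tf.keras.layers.Dense(32, activation="relu")(inp)
    code = tf.keras.layers.Dense(8, activation="relu")(h)
    h = tf.keras.layers.Dense(32, activation="relu")(code)
    out = tf.keras.layers.Dense(n_features, activation="linear")(h)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# X: rows are normalized numeric feature vectors (e.g., KDD99 records after
# one-hot encoding and scaling); random data stands in for them here.
X = np.random.rand(1000, 41).astype("float32")

ae = build_autoencoder(X.shape[1])
ae.fit(X, X, epochs=5, batch_size=64, verbose=0)

# Stage 1: reduce each record to a single value, its reconstruction error.
recon = ae.predict(X, verbose=0)
errors = np.mean((X - recon) ** 2, axis=1).reshape(-1, 1)

# Stage 2: cluster the one-dimensional errors; records in the high-error
# cluster are flagged as anomalous (possible attacks, including unseen ones).
labels = KMeans(n_clusters=2, n_init=10).fit_predict(errors)
anomalous_cluster = np.argmax([errors[labels == c].mean() for c in (0, 1)])
print("flagged:", int((labels == anomalous_cluster).sum()), "of", len(X))
```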