680 research outputs found

    Identification and Characterization of electrical patterns underlying stereotyped behaviours in the semi-intact leech

    Neuroscience aims at understanding the mechanisms underlying perception, learning, memory, consciousness and action. The present Ph.D. thesis aims to elucidate some of the principles controlling action, referred to in more technical language as motor control. This concept has been studied in a variety of preparations in vertebrate and invertebrate species. In this thesis the leech has been the subject of choice, because it is a well-known preparation, highly suitable for relating functional and behavioural properties to the underlying neuronal networks. The semi-intact leech preparation (Kristan et al., 1974) has been the main methodological strategy used in the experiments. Its importance lies in the fact that it gives access to information from the leech's central nervous system (CNS) while stereotyped behaviours can be observed simultaneously. It is therefore useful to summarize briefly the steps followed before arriving at the conclusions presented below. The main objective of this work has been the analysis, identification and characterization of the electrical patterns underlying different behaviours in Hirudo medicinalis. This main objective has been pursued through three particular sub-objectives addressed during the author's doctoral course.

    Who wrote this scientific text?

    The IEEE bibliographic database contains a number of proven duplications, with indication of the original paper(s) copied. This corpus is used to test a method for the detection of hidden intertextuality (commonly called "plagiarism"). The intertextual distance, combined with a sliding window and various classification techniques, identifies these duplications with a very low risk of error. These experiments also show that several factors blur the identity of the scientific author, including variable group authorship and the high levels of intertextuality accepted, and sometimes desired, in scientific papers on the same topic.
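The abstract does not give the exact formulas, but a Labbé-style intertextual distance over a sliding window is commonly computed along the following lines. The Python sketch below is an illustration only, not the authors' implementation; the window and step sizes and the flagging threshold are hypothetical.

```python
from collections import Counter

def intertextual_distance(tokens_a, tokens_b):
    """Labbe-style intertextual distance in [0, 1]: 0 = identical word usage, 1 = disjoint."""
    n_a, n_b = len(tokens_a), len(tokens_b)
    if n_a > n_b:                              # convention: A is the shorter text
        tokens_a, tokens_b, n_a, n_b = tokens_b, tokens_a, n_b, n_a
    freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
    scale = n_a / n_b                          # rescale B's frequencies to A's length
    diff = sum(abs(freq_a[w] - freq_b[w] * scale) for w in set(freq_a) | set(freq_b))
    return diff / (2 * n_a)

def sliding_distances(doc_tokens, ref_tokens, window=1000, step=200):
    """Distance of each window of a document against a reference text."""
    last_start = max(1, len(doc_tokens) - window + 1)
    return [(i, intertextual_distance(doc_tokens[i:i + window], ref_tokens))
            for i in range(0, last_start, step)]

# Windows with an unusually low distance to another paper are duplication candidates,
# e.g. flagged = [i for i, d in sliding_distances(doc, ref) if d < 0.2]
```

Windows whose distance to another paper falls well below the distances typical of independent texts would then be handed to the classification step mentioned in the abstract.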

    Advanced monitoring in P2P botnets

    Botnets are increasingly being held responsible for most of the cybercrime that occurs nowadays. They are used to carry out malicious activities, such as banking credential theft and Distributed Denial of Service (DDoS) attacks, to generate profit for their owner, the botmaster. Traditional botnets utilized centralized and decentralized Command-and-Control Servers (C2s). However, recent botnets have been observed to prefer P2P-based architectures to overcome some of the drawbacks of the earlier architectures. A P2P architecture makes botnets more resilient and robust against random node failures and targeted attacks. However, the distributed nature of such botnets requires the defenders, i.e., researchers and law enforcement agencies, to use specialized tools such as crawlers and sensor nodes to monitor them. In response to such monitoring, botmasters have introduced various countermeasures to impede botnet monitoring, e.g., automated blacklisting mechanisms. The presence of anti-monitoring mechanisms not only renders the gathered monitoring data inaccurate or incomplete, it may also adversely affect the success rate of botnet takedown attempts that rely on such data. Most existing monitoring mechanisms identified in related work only attempt to tolerate anti-monitoring mechanisms as much as possible, e.g., by crawling bots at a lower frequency. However, this may introduce noise into the gathered data, e.g., due to the longer delay in crawling the botnet, which in turn reduces the quality of the data. This dissertation addresses most of the major issues associated with monitoring in P2P botnets described above. Specifically, it analyzes the anti-monitoring mechanisms of three existing P2P botnets, 1) GameOver Zeus, 2) Sality, and 3) ZeroAccess, and proposes countermeasures to circumvent some of them. In addition, this dissertation proposes several advanced anti-monitoring mechanisms from the perspective of a botmaster, to anticipate future advancement of the botnets. These include a set of lightweight crawler detection mechanisms as well as several novel mechanisms to detect sensor nodes deployed in P2P botnets. To ensure that the defenders do not lose this arms race, this dissertation also includes countermeasures to circumvent the proposed anti-monitoring mechanisms. Finally, this dissertation investigates whether the presence of third-party monitoring mechanisms, e.g., sensors, in botnets influences the overall churn measurements. In addition, churn models for Sality and ZeroAccess are derived using fine-granularity churn measurements. The work proposed in this dissertation has been evaluated using either real-world botnet datasets, i.e., datasets gathered using crawlers and sensor nodes, or simulated datasets. Evaluation results indicate that most of the anti-monitoring mechanisms implemented by existing botnets can either be circumvented or tolerated to obtain monitoring data of better quality. However, many crawlers and sensor nodes in existing botnets are found to be vulnerable to the anti-monitoring mechanisms proposed from the perspective of a botmaster in this dissertation. Analysis of the fine-grained churn measurements for Sality and ZeroAccess indicates that churn in these botnets is similar to that of regular P2P file-sharing networks such as Gnutella and BitTorrent. In addition, the presence of highly responsive sensor nodes in the botnets is found not to influence the overall churn measurements, mainly because of the low number of sensor nodes currently deployed in the botnets. Existing and future botnet monitoring mechanisms should apply the findings of this dissertation to ensure high-quality monitoring data and to remain undetected by the bots or the botmasters.
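The abstract does not detail the proposed crawler-detection mechanisms. Purely as an illustration of the kind of lightweight, rate-based heuristic a botmaster could deploy against monitoring, here is a minimal Python sketch; the message names, thresholds and structure are hypothetical and not taken from the dissertation.

```python
from collections import defaultdict

# Hypothetical heuristic: a peer that sends many neighbour-list requests while
# answering almost none behaves more like a crawler than like a regular bot.
REQUEST_THRESHOLD = 100         # assumed tuning parameter
MIN_REQUEST_REPLY_RATIO = 10.0  # assumed tuning parameter

class PeerStats:
    def __init__(self):
        self.requests_sent = 0  # neighbour-list requests observed from this peer
        self.replies_sent = 0   # neighbour-list replies observed from this peer

stats = defaultdict(PeerStats)

def observe(peer_id, message_type):
    """Update per-peer counters from observed membership-maintenance traffic."""
    if message_type == "nl_request":
        stats[peer_id].requests_sent += 1
    elif message_type == "nl_reply":
        stats[peer_id].replies_sent += 1

def looks_like_crawler(peer_id):
    """Flag peers whose request volume and request/reply imbalance exceed the thresholds."""
    s = stats[peer_id]
    ratio = s.requests_sent / max(1, s.replies_sent)
    return s.requests_sent >= REQUEST_THRESHOLD and ratio >= MIN_REQUEST_REPLY_RATIO
```

Defender-side countermeasures of the kind the dissertation proposes would aim to keep a crawler's observable traffic below such thresholds, e.g. by distributing requests across identities or mimicking normal reply behaviour.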

    An Automated Methodology for Validating Web Related Cyber Threat Intelligence by Implementing a Honeyclient

    This thesis contributes to the open-source cybersecurity community by providing an alternative methodology for validating web-related cyber threat intelligence. Websites are commonly used as an attack vector to deliver malicious content crafted by a malicious party. Once a website is classified as malicious, it is added to a threat intelligence database as a malicious indicator. Eventually such databases grow large and accumulate obsolete entries, which can lead to false-positive investigations during cyber incident response. The solution is to keep the threat indicator entries valid by regularly verifying their content, and this verification can be fully automated to save time. The proposed technical solution is a low-interaction honeyclient regularly tasked to verify the content of the web-based threat indicators. Given the large number of database entries, most web-based threat indicators can in this way be validated automatically with little manual effort, kept relevant for monitoring purposes, and ultimately help avoid false positives in incident response processes.
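The abstract does not name the honeyclient implementation used. The minimal Python sketch below only illustrates what a low-interaction revalidation loop over stored web indicators can look like; the `requests` library and the hash-comparison check are assumptions, not the thesis' actual design.

```python
import hashlib

import requests  # assumed HTTP client; any equivalent library would do

def validate_indicator(url, known_bad_hashes, timeout=10):
    """Fetch a stored URL indicator and report whether it still serves known-malicious content.

    A low-interaction honeyclient only fetches and inspects the response; it never
    renders or executes the page. Returns 'still_malicious', 'content_changed' or 'unreachable'.
    """
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "unreachable"                     # candidate for retiring the indicator
    digest = hashlib.sha256(resp.content).hexdigest()
    return "still_malicious" if digest in known_bad_hashes else "content_changed"

def revalidate_feed(indicator_urls, known_bad_hashes):
    """Re-check every stored web indicator so only verified entries stay in the feed."""
    return {url: validate_indicator(url, known_bad_hashes) for url in indicator_urls}
```

A hash comparison is only one possible verification step; a real deployment could equally compare responses against detection signatures or sandbox verdicts before retiring an indicator.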

    Malware detection at runtime for resource-constrained mobile devices: data-driven approach

    The number of smart and connected mobile devices is increasing, bringing enormous possibilities to users in various domains and making much of what we interact with "smart": smart watches, smart phones, smart homes, and even smart cities. The increased smartness of mobile devices means that they contain more valuable information about their users, more decision-making capabilities, and more control over sometimes even life-critical systems. While all of this is necessary for mobile devices to fulfil their main purpose of helping and supporting people, it also opens new vulnerabilities. As the number and capabilities of smart devices grow, so does the interest of attackers in abusing them, making their security one of the main challenges. The main means attackers use to abuse mobile devices is malicious software, or malware. One way to protect against malware is static analysis, which investigates the nature of software by analyzing its static features. However, this technique detects only known malware well and is prone to obfuscation, meaning that it is relatively easy to create a new malicious sample that passes under the radar. Static analysis alone is therefore not powerful enough to protect users against increasing malicious attacks. The other way to cope with malware is dynamic analysis, where the nature of the software is decided based on its behavior during execution on a device. This is a promising approach because, while the code of the software can easily be changed to appear new, the same cannot be done with ease to its behavior during execution. However, to achieve high accuracy, dynamic analysis usually requires computational resources beyond what is suitable for battery-operated mobile devices. This is further complicated if, in addition to detecting the presence of malware, we also want to understand which type of malware it is, in order to trigger suitable countermeasures. Finally, decisions on potential infections have to happen early enough to guarantee minimal exposure to the attacks. Fulfilling these requirements in mobile, battery-operated environments is a challenging task for which, to the best of our knowledge, no suitable solution has yet been proposed. In this thesis, we pave the way towards such a solution by proposing a dynamic malware detection system that can detect, early and at runtime, malware as it appears, and that provides useful information to discriminate between diverse types of malware while taking into account the limited resources of mobile devices. On a mobile device we monitor a set of representative features for the presence of malware and trigger an alarm if a software infection is observed. When this happens, we analyze a set of previously stored information relevant for malware classification, in order to understand which type of malware is being executed. To make detection efficient and suitable for the resource-constrained environments of mobile devices, we minimize the set of observed system parameters to only the most informative ones for both detection and classification. Additionally, since the sampling period of the monitoring infrastructure is directly connected to the power consumption, we treat it as an important parameter in the development of the detection system. To make detection effective, we use dynamic features related to memory, CPU, system calls and network, as they reflect the behavior of the system well. Our experiments show that monitoring with a sampling period of eight seconds gives a good trade-off between detection accuracy, detection time and consumed power. Using this period and monitoring a set of only seven dynamic features (six related to the behavior of memory and one to the CPU), we are able to provide a detection solution that satisfies the initial requirements and to detect malware at runtime with an F-measure of 0.85, within 85.52 seconds of its execution, and with an average power consumption of 20 mW. Apart from containing enough information to discriminate between malicious and benign applications, our results show that the observed features can also be used to discriminate between diverse malware behaviors, reflected in different malware families. Using a small number of features, we are able to identify the presence of malicious records from the considered family with a precision of up to 99.8%. In addition to the standalone use of the proposed detection solution, we have also used it in a hybrid scenario in which applications were first analyzed by a static method; it correctly detected all the malware previously undetected by static analysis, with a false positive rate of 3.81% and an average detection time of 44.72 s. The method we have designed, tested and validated has been applied on a smartphone running the Android operating system. However, since efficient usage of the available computational resources was one of our main design criteria, we are confident that the method can also be applied to other battery-operated mobile devices of the Internet of Things, in order to provide an effective and efficient system able to counter the ever-increasing and ever-evolving number and variety of malicious attacks.
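The abstract does not list the exact seven features or the classifier used. The sketch below, in Python with `psutil` and scikit-learn standing in for the Android-side monitoring described in the thesis (both are assumptions), only illustrates the overall pipeline of sampling a small feature set at a fixed eight-second period and passing it to a trained model.

```python
import time

import psutil                                        # assumed stand-in for the on-device monitor
from sklearn.ensemble import RandomForestClassifier  # assumed classifier, not necessarily the thesis' model

SAMPLING_PERIOD_S = 8          # the sampling period reported as a good trade-off in the abstract

def sample_features():
    """One observation: six memory-related counters plus one CPU figure (illustrative choice)."""
    vm = psutil.virtual_memory()
    swap = psutil.swap_memory()
    return [vm.total, vm.available, vm.used, vm.free, vm.percent, swap.used,
            psutil.cpu_percent(interval=None)]

def monitor(model, period=SAMPLING_PERIOD_S):
    """Sample at a fixed period and raise an alarm when the model flags the sample as malicious."""
    while True:
        sample = sample_features()
        if model.predict([sample])[0] == 1:          # label 1 = malicious (assumed encoding)
            print("possible infection: trigger family classification and countermeasures")
        time.sleep(period)

# Offline, a model would be fitted on labelled traces, e.g.
#   model = RandomForestClassifier().fit(X_train, y_train)
# and the fitted model shipped to the device before calling monitor(model).
```

The sampling period is exposed as a parameter because, as the abstract notes, it directly drives both detection latency and power consumption.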

    Robotics 2010

    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing and creating technical systems that help humans accomplish hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit as many interdisciplinary interconnections as robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement of intelligent and autonomous operation. This book tries to give an insight into the evolutionary process taking place in robotics. It provides articles covering a wide range of this exciting area. Tracing the progress of technical challenges and concepts may illuminate the relationship between developments that at first sight seem completely different. Robotics remains an exciting scientific and engineering field. The community looks ahead optimistically and looks forward to future challenges and new developments.

    L'intertextualité dans les publications scientifiques

    The IEEE bibliographic database contains a number of proven duplications, with indication of the originals copied. This corpus is used to test a method of author attribution. Combining intertextual distance with a sliding window and various classification techniques makes it possible to identify these duplications with a very low risk of error. This experiment also shows that several factors blur the identity of the scientific author, notably research collectives of variable composition and a strong dose of intertextuality that is accepted, or even sought.