310 research outputs found

    Understanding malware autostart techniques with web data extraction

    The purpose of this study was to investigate automatic execution (autostart) methods in Windows operating systems, as used and abused by malware. Using data extracted from the Web, information on over 10,000 malware specimens was collected and analyzed, and trends were discovered and presented. Correlations were found between these records and a list of known autostart locations for various versions of Windows. All processing code was written in PHP, which proved effective for the task. A full breakdown of the popularity of each method per year was constructed. It was found that the popularity of many methods has varied greatly over the last decade, mostly following operating system releases and security improvements, but with some frightening exceptions.
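The per-year popularity breakdown described above can be sketched in a few lines (shown here in Python rather than the study's PHP, purely for illustration); the records and method names are hypothetical stand-ins for the data extracted from the Web:

```python
from collections import Counter, defaultdict

# Hypothetical (year, autostart method) records standing in for the
# data extracted from the Web for the 10,000+ malware specimens.
records = [
    (2003, "Run key"), (2003, "Startup folder"),
    (2007, "Run key"), (2007, "Service"),
    (2010, "Service"), (2010, "Scheduled task"), (2010, "Run key"),
]

# Popularity breakdown of each autostart method per year.
per_year = defaultdict(Counter)
for year, method in records:
    per_year[year][method] += 1

for year in sorted(per_year):
    total = sum(per_year[year].values())
    for method, count in per_year[year].most_common():
        print(f"{year}  {method}: {count / total:.0%}")
```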

    Distributed Processing and Analytics of IoT data in Edge Cloud

    Sensors of many kinds connect to the IoT network and generate a large number of data streams. We explore the possibility of performing stream processing at the network edge, and an architecture for doing so. This thesis work is based on a prototype solution developed by Nokia. The system operates close to the data sources and retrieves data based on requests made by applications through the system. Processing data close to where it is generated can save bandwidth and assist in decision making. This work proposes a processing component operating at the far edge. The applicability of the prototype solution with the proposed processing component is illustrated in three use cases: analysis of Key Performance Indicator values, data streams generated by air-quality sensors called Sensordrones, and recognition of car license plates through an application of deep learning.
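As a minimal sketch of the kind of lightweight aggregation a far-edge processing component could run near the data source (the window size and sensor readings below are hypothetical, not taken from the prototype):

```python
from collections import deque

def sliding_average(stream, window):
    """Windowed mean over a sensor stream; aggregating near the source
    reduces the bandwidth needed to ship raw readings upstream."""
    buf = deque(maxlen=window)
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# Hypothetical air-quality samples arriving from a sensor.
readings = [10, 12, 11, 30, 29, 31]
print(list(sliding_average(readings, 3)))
```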

    Cyber-Attack Drone Payload Development and Geolocation via Directional Antennae

    The increasing capabilities of commercial drones have led to blossoming drone usage in private-sector industries ranging from agriculture to mining to cinema. Commercial drones have made amazing improvements in flight time, flight distance, and payload weight. These same features also offer a unique and unprecedented commodity for wireless hackers -- the ability to gain ‘physical’ proximity to a target without personally having to be anywhere near it. This capability is called Remote Physical Proximity (RPP). By their nature, wireless devices are largely susceptible to sniffing and injection attacks, but only if the attacker can interact with the device via physical proximity. A properly outfitted drone can increase the attack surface with RPP (adding a range of over 7 km using off-the-shelf drones), allowing full interactivity with wireless targets while the attacker remains distant and hidden. Combined with the novel approach of using a directional antenna, these drones could also passively collect targeted geolocation information on wireless devices from long distances, which is of significant value from an offensive cyberwarfare standpoint. This research develops skypie, a software and hardware framework designed for performing remote, directional drone-based collections. The prototype is inexpensive, lightweight, and independent of drone architecture, meaning it can be strapped to most medium-to-large commercial drones. The prototype effectively simulates the type of device that could be built by a motivated threat actor, and the development process evaluates the strengths and shortcomings of these devices. This research also experimentally evaluates the ability of a drone-based attack system to track its targets by passively sniffing Wi-Fi signals from distances of 300 and 600 meters using a directional antenna. Additionally, it identifies collection techniques and processing algorithms for minimizing geolocation errors.
Results show that geolocation via 802.11 (Wi-Fi) emissions using a portable directional antenna is possible, though it is difficult to achieve the accuracy that GPS delivers (errors of less than 5 m with 95% confidence). This research shows that geolocation predictions of a target cell phone acting as a Wi-Fi access point in a field are accurate to within 70.1 m from 300 m away and within 76 m from 600 m away. Three of the four main tests exceed the hypothesized geolocation error of 15% of the sensor-to-target distance, with tests at 300 m averaging 25.5% and tests at 600 m averaging 34%. Improvements in bearing prediction are needed to reduce error to more tolerable quantities, and this thesis discusses several recommendations for doing so. This research ultimately assists in developing operational drone-borne cyber-attack and reconnaissance capabilities, identifying limitations, and enlightening the public about countermeasures to mitigate the privacy threats posed by the inevitable rise of the cyber-attack drone.
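Bearing-only geolocation of the kind evaluated here reduces, in its simplest form, to intersecting bearing lines drawn from known sensor positions. A minimal least-squares sketch, assuming flat 2-D coordinates and noise-free bearings (all positions and angles below are hypothetical, not the thesis's measurements):

```python
import math

def geolocate(observations):
    """Least-squares intersection of bearing lines.

    observations: list of ((x, y), bearing_rad) pairs, bearing measured
    from the +x axis. Returns the point minimizing the summed squared
    perpendicular distance to all bearing lines.
    """
    # Accumulate the normal equations: sum(I - d d^T) x = sum(I - d d^T) p
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), theta in observations:
        dx, dy = math.cos(theta), math.sin(theta)
        # Projection onto the line's normal space: I - d d^T
        m11, m12, m22 = 1 - dx * dx, -dx * dy, 1 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two sensors 600 m apart, both sighting a target at (300, 400).
obs = [((0.0, 0.0), math.atan2(400, 300)),
       ((600.0, 0.0), math.atan2(400, -300))]
print(geolocate(obs))  # ≈ (300.0, 400.0)
```

With noisy bearings the same normal equations still apply; the residual then reflects bearing error, which is why improving bearing prediction drives geolocation accuracy.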

    A Big Data and machine learning approach for network monitoring and security

    In the last decade the performance of 802.11 (Wi-Fi) devices has skyrocketed. Today it is possible to realize gigabit wireless links spanning kilometers at a fraction of the cost of the wired equivalent. In the same period, mesh networks evolved from experimental tools confined to university labs into systems running in several real-world scenarios. Mesh networks can now provide city-wide coverage and can compete in the market for Internet access. Yet, being wireless distributed networks, mesh networks are still hard to maintain and monitor. This paper explains how monitoring, anomaly detection and root cause analysis can be performed in mesh networks today using Big Data techniques. It first describes the architecture of a modern mesh network, then justifies the use of Big Data techniques and provides a design for the storage and analysis of the Big Data produced by a large-scale mesh network. While proposing a generic infrastructure, we focus on its application in the security domain.
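A useful baseline for the anomaly detection step, before any Big Data machinery is applied, is a simple statistical outlier test over aggregated per-node metrics (the traffic figures and threshold below are hypothetical):

```python
import statistics

def zscore_anomalies(series, threshold=2.0):
    """Flag points whose z-score against the series exceeds the
    threshold; a simple baseline for spotting misbehaving mesh nodes
    in aggregated monitoring data."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(series)
            if abs(v - mean) / stdev > threshold]

# Hypothetical per-hour traffic counts for one mesh node.
traffic = [100, 98, 102, 99, 101, 500, 100]
print(zscore_anomalies(traffic))  # [5] -- the 500-byte spike
```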

    Learning by doing on the EGEE GRID and first performance analysis of CODESA-3D multirun submission

    The project TEMA (Training on Environmental Modelling and Applications) is a CRS4 training initiative in the field of computational hydrology and grid computing (Jan-Sept, 2006). The personnel involved were Fabrizio Murgia (trainee) and Giuditta Lecca (tutor). The objectives of the project were: to acquire specialized skills in grid computing, with special emphasis on computational sub-surface hydrology; to develop and test software procedures for running Monte Carlo simulations on the EGEE production grid; and to produce a technical report and seminars about grid computing. The acquired competences and skills will be used in the ongoing projects GRIDA3, CyberSAR and DEGREE.

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    The versatile networking services now available have a huge influence on daily life, while their amount and diversity make network systems highly complex. Network scale and complexity grow with the increasing number of infrastructure apparatuses, networking functions and network slices, and with the evolution of the underlying architecture. The conventional approach, manual administration of this large and complex platform, makes effective and insightful management troublesome. A feasible and promising alternative is to extract insightful information from the large volumes of network data produced. The goal of this thesis is to use learning-based algorithms from the machine learning community to discover valuable knowledge in that data, directly supporting intelligent management and maintenance. The thesis focuses on two schemes: network anomaly detection with root cause localization, and critical traffic resource control and optimization. Firstly, abundant network data carries informative messages, but its heterogeneity and complexity make diagnosis challenging. For unstructured logs, abstract, formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed to detect the occurrence of faults and anomalies. It employs representation learning to map unstructured data into numerical features, and fuses the extracted features for network anomaly and fault detection; the representation learning uses word2vec-based embeddings for semantic expression. Fault and anomaly detection, however, only reveals that an event has occurred without identifying its cause, so fault localization is introduced to narrow down the source of systematic anomalies. The extracted features are combined into an anomaly degree, coupled with an importance-ranking method that highlights the locations of anomalies in network systems; two ranking modes, instantiated by PageRank and by operation errors, jointly highlight locations with latent issues. Beyond fault and anomaly detection, network traffic engineering manages communication and computation resources to optimize the efficiency of data transfer. Especially when traffic is constrained by communication conditions, a pro-active path planning scheme supports efficient traffic control. A learning-based traffic planning algorithm is therefore proposed, based on a sequence-to-sequence model, to discover reasonable paths hidden in abundant traffic history data over a Software Defined Network architecture. Finally, traffic engineering based merely on empirical data is likely to produce stale, sub-optimal solutions, or even worse outcomes, so a resilient mechanism is required to adapt network flows to a dynamic environment based on context. Thus, a reinforcement learning-based scheme is put forward for dynamic data forwarding that takes network resource status into account, and it shows a promising performance improvement. In summary, the proposed anomaly processing framework strengthens analysis and diagnosis for network system administrators through combined fault detection and root cause localization, while the learning-based traffic engineering improves flow management from experience data and points toward flexible traffic adjustment in ever-changing environments.
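The PageRank-style ranking mode mentioned above can be sketched as a personalized PageRank over a component dependency graph, with anomaly degrees used as the teleport distribution (the graph, scores and parameter values here are illustrative, not the thesis's actual configuration):

```python
def pagerank(graph, anomaly, damping=0.85, iters=50):
    """Personalized PageRank over a component-dependency graph.

    graph: {node: [nodes it depends on]}; anomaly: {node: score} used
    as the teleport distribution, so anomalous components and the
    components they depend on rise in the ranking.
    """
    nodes = list(graph)
    total = sum(anomaly.values())
    teleport = {n: anomaly.get(n, 0.0) / total for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:
                # Dangling node: redistribute its mass via teleport.
                for m in nodes:
                    nxt[m] += damping * rank[n] * teleport[m]
        rank = nxt
    return rank

# Illustrative services: the anomalous "api" and "cache" both depend
# on the shared "db" component.
graph = {"web": ["api"], "api": ["db"], "db": [], "cache": ["db"]}
anomaly = {"web": 0.1, "api": 1.0, "db": 0.2, "cache": 0.1}
ranks = pagerank(graph, anomaly)
print(max(ranks, key=ranks.get))  # the most implicated component
```

With these scores the shared "db" dependency ends up ranked highest, since the strongly anomalous "api" points its rank mass at it.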

    Application of a Layered Hidden Markov Model in the Detection of Network Attacks

    Network-based attacks against computer systems are a common and increasing problem. Attackers continue to increase the sophistication and complexity of their attacks with the goal of removing sensitive data or disrupting operations. Attack detection technology works very well for the detection of known attacks using a signature-based intrusion detection system. However, attackers can utilize attacks that are undetectable to those signature-based systems, whether they are truly new attacks or modified versions of known attacks. Anomaly-based intrusion detection systems approach the problem of attack detection by detecting when traffic differs from a learned baseline. In the case of this research, the focus was on a relatively new area known as payload anomaly detection. In payload anomaly detection, the system focuses exclusively on the payload of packets and learns the normal contents of those payloads. When a payload's contents differ from the norm, an anomaly is detected and may be a potential attack. A risk with anomaly-based detection mechanisms is that they suffer from high false positive rates, which reduce their effectiveness. This research built upon previous research in payload anomaly detection by combining multiple detection techniques in a layered approach. The layers of the system included a high-level navigation layer, a request payload analysis layer, and a request-response analysis layer. The system was tested using the test data provided by some earlier payload anomaly detection systems as well as new data sets. The results of the experiments showed that by combining these layers of detection into a single system, higher detection rates and lower false positive rates were achieved.
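The payload-modeling idea underlying such systems can be illustrated with a 1-gram (byte-frequency) model in the spirit of earlier payload anomaly detectors; the training payloads and distance function below are deliberately simplified stand-ins, not the layered system itself:

```python
def byte_histogram(payload):
    """Relative frequency of each byte value in a payload."""
    h = [0.0] * 256
    for b in payload:
        h[b] += 1
    n = len(payload) or 1
    return [c / n for c in h]

def train(normal_payloads):
    """Average byte histogram over known-normal payloads."""
    model = [0.0] * 256
    for p in normal_payloads:
        for i, f in enumerate(byte_histogram(p)):
            model[i] += f
    return [m / len(normal_payloads) for m in model]

def score(model, payload):
    """L1 distance from the learned baseline; higher = more anomalous."""
    return sum(abs(m - f) for m, f in zip(model, byte_histogram(payload)))

normal = [b"GET /index.html HTTP/1.1", b"GET /style.css HTTP/1.1"]
model = train(normal)
benign = score(model, b"GET /about.html HTTP/1.1")
shell = score(model, bytes([0x90] * 20) + b"\xcc\xcc\xcc\xcc")
print(benign < shell)  # the shellcode-like payload scores higher
```

A layered system would then stack further models (navigation, request payload, request-response) on top of per-payload scores like this one to drive down false positives.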

    An OSINT Approach to Automated Asset Discovery and Monitoring

    The main objective of this thesis is to improve the efficiency of security operations centers through the articulation of different publicly open sources of security-related feeds. This is challenging because of the different abstraction models of the feeds, which need to be made compatible; the range of control values each data source can take, which impacts the security events; and the scalability of the computational and networking resources required to collect security events. Following the industry standards proposed by the literature (OSCP guide, PTES and OWASP), the detection of hosts and sub-domains using an articulation of several sources is regarded as the first interaction in an engagement. This first interaction often misses some sources that could allow the disclosure of more assets. This has become important as networks have scaled up to the cloud, where the IP address range is not owned by the company and important applications are often shared within the same IP, as with Virtual Hosts serving several applications from the same server. We will focus on the first step of any engagement, the enumeration of the target network. Attackers often use several techniques to enumerate the target and discover vulnerable services. This enumeration can be improved by adding several other sources and techniques that are often left aside in the literature. Also, by creating an automated process it is possible for security operations centers to discover these assets and map the applications in use in order to keep track of vulnerabilities, using OSINT techniques and publicly available solutions, before attackers try to exploit the service. This gives a vision of the Internet-facing services as often seen by attackers, without querying the service directly, thereby evading detection.
This research is framed within the complete engagement process and should be integrated into already-built solutions, so the results should be able to connect to additional applications in order to reach further into the engagement process. By addressing these challenges we expect to greatly aid sysadmins and security teams, helping them secure their assets and ensure the security cleanliness of the enterprise, resulting in better policy compliance without ever connecting to the client hosts.
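One of the additional sources such an automated process can draw on is certificate transparency logs. A minimal sketch of extracting candidate subdomains from CT records (the JSON shape mirrors what services such as crt.sh return, but the records here are fabricated for illustration):

```python
import json

# Hypothetical certificate-transparency records, one JSON object per
# certificate, with the covered names newline-separated in "name_value".
ct_json = json.dumps([
    {"name_value": "www.example.com\napi.example.com"},
    {"name_value": "*.example.com\nmail.example.com"},
    {"name_value": "www.example.com"},
])

def extract_subdomains(raw, domain):
    """Deduplicated subdomains of `domain` found in CT records."""
    found = set()
    for record in json.loads(raw):
        for name in record["name_value"].splitlines():
            name = name.lstrip("*.").lower()  # drop wildcard prefixes
            if name == domain or name.endswith("." + domain):
                found.add(name)
    return sorted(found)

print(extract_subdomains(ct_json, "example.com"))
# ['api.example.com', 'example.com', 'mail.example.com', 'www.example.com']
```

Because CT logs are public, this enumerates assets without ever touching the target's own infrastructure, matching the passive, detection-evading approach described above.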

    Applications Development for the Computational Grid
