27,637 research outputs found
Neural networks as tool to improve the intrusion detection system
Nowadays, malicious programs affecting computers both locally and network-wide have driven the design and development of different preventive and corrective strategies to remedy computer security problems. This dynamic has been important for understanding the structure of attacks and how best to counteract them, ensuring that their impact is less than the attacker expects. For this research, a simulation was carried out using the full NSL-KDD dataset, generating an experimental environment in which pre-processing, training, classification, and evaluation of model quality metrics were performed. Likewise, a comparative analysis was made of the results obtained after implementing different feature selection techniques (INFO.GAIN, GAIN RATIO, and ONE R) and classification techniques based on neural networks that use an unsupervised learning algorithm based on self-organizing maps (SOM and GHSOM), with the purpose of classifying bi-class network traffic automatically. From the above, a 97.09% hit rate with 21 features was obtained by implementing the GHSOM classifier with 10-fold cross-validation and the ONE R feature selection technique, which would improve the efficiency and performance of Intrusion Detection Systems (IDS).
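The SOM training loop behind this kind of classifier can be sketched in a few lines of NumPy. The grid size, decay schedule, and the toy Gaussian "normal"/"attack" clusters below are illustrative assumptions, not the paper's NSL-KDD setup or its GHSOM variant:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a self-organizing map; returns node weights (rows*cols, n_features)."""
    rows, cols = grid
    W = rng.random((rows * cols, X.shape[1]))
    # Grid coordinates of each node, for the neighborhood function.
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
    n_steps = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in X:
            decay = np.exp(-t / n_steps)
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # squared grid distance
            h = np.exp(-d2 / (2 * (sigma0 * decay) ** 2))     # neighborhood weights
            W += (lr0 * decay) * h[:, None] * (x - W)         # pull nodes toward x
            t += 1
    return W

# Toy stand-ins for "normal" and "attack" feature vectors.
normal = rng.normal(0.2, 0.05, size=(100, 4))
attack = rng.normal(0.8, 0.05, size=(100, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 100 + [1] * 100)

W = train_som(X)
bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
# Label each winning node by majority vote, then score the bi-class mapping.
node_label = {n: int(round(y[bmus == n].mean())) for n in np.unique(bmus)}
pred = np.array([node_label[b] for b in bmus])
hit_rate = (pred == y).mean()
```

On trivially separable toy data like this, the node-labelling step alone yields a high hit rate; the paper's 97.09% figure comes from the much harder NSL-KDD traffic.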
Secure Mobile Crowdsensing with Deep Learning
In order to stimulate secure sensing for Internet of Things (IoT)
applications such as healthcare and traffic monitoring, mobile crowdsensing
(MCS) systems have to address security threats, such as jamming, spoofing and
faked sensing attacks, during both the sensing and the information exchange
processes in large-scale, dynamic, and heterogeneous networks. In this article, we
investigate secure mobile crowdsensing and present how to use deep learning
(DL) methods such as stacked autoencoder (SAE), deep neural network (DNN), and
convolutional neural network (CNN) to improve the MCS security approaches
including authentication, privacy protection, faked sensing countermeasures,
intrusion detection and anti-jamming transmissions in MCS. We discuss the
performance gain of these DL-based approaches compared with traditional
security schemes and identify the challenges that need to be addressed to
implement them in practical MCS systems. Comment: 7 pages, 5 figures
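One of the DL ideas mentioned, using a stacked autoencoder's reconstruction error to flag faked sensing reports, can be illustrated with linear PCA standing in for the autoencoder. The subspace dimension, toy data, and threshold rule below are assumptions made for the sketch, not the article's models:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" sensing reports live near a low-dimensional subspace;
# faked reports do not.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
normal = latent @ mixing + 0.01 * rng.normal(size=(200, 8))
faked = rng.normal(size=(20, 8))                 # off-subspace reports

# Fit the principal subspace on normal data only (the "training" phase).
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = Vt[:2]                                   # top-2 principal directions

def recon_error(X):
    """Reconstruction error after projecting onto the learned subspace."""
    centered = X - mean
    recon = centered @ basis.T @ basis
    return np.linalg.norm(centered - recon, axis=1)

# Anything reconstructing much worse than training data is flagged.
threshold = recon_error(normal).max() * 1.5
flags = recon_error(faked) > threshold
detection_rate = flags.mean()
```

A trained SAE replaces the linear projection with a nonlinear encoder/decoder, but the decision rule, thresholding reconstruction error, is the same.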
A State-of-the-art Survey on IDS for Mobile Ad-Hoc Networks and Wireless Mesh Networks
An Intrusion Detection System (IDS) detects malicious and selfish nodes in a
network. Ad hoc networks are often secured by using either intrusion detection
or by secure routing. Designing efficient IDS for wireless ad-hoc networks that
would not affect the performance of the network significantly is indeed a
challenging task. Arguably, the most common thing in a review paper in the
domain of wireless networks is to compare the performances of different
solutions using simulation results. However, variance in multiple configuration
aspects including that due to different underlying routing protocols, makes the
task of simulation based comparative evaluation of IDS solutions somewhat
unrealistic. Instead, the authors have followed an analytic approach to
identify the gaps in the existing IDS solutions for MANETs and wireless mesh
networks. The paper aims to ease the job of a new researcher by exposing them to
the state-of-the-art research issues on IDS. Nearly 80% of the works cited in
this paper were published within the last 3 to 4 years. Comment: Accepted for publication in PDCTA 2011 to be held in Chennai during
September 25-27, 201
Fast Enhanced CT Metal Artifact Reduction using Data Domain Deep Learning
Filtered back projection (FBP) is the most widely used method for image
reconstruction in X-ray computed tomography (CT) scanners. The presence of
hyper-dense materials in a scene, such as metals, can strongly attenuate
X-rays, producing severe streaking artifacts in the reconstruction. These metal
artifacts can greatly limit subsequent object delineation and information
extraction from the images, restricting their diagnostic value. This problem is
particularly acute in the security domain, where there is great heterogeneity
in the objects that can appear in a scene, and highly accurate decisions must be
made quickly. The standard practical approaches to reducing metal artifacts in
CT imagery are either simplistic non-adaptive interpolation-based projection
data completion methods or direct image post-processing methods. These standard
approaches have had limited success. Motivated primarily by security
applications, we present a new deep-learning-based metal artifact reduction
(MAR) approach that tackles the problem in the projection data domain. We treat
the projection data corresponding to metal objects as missing data and train an
adversarial deep network to complete the missing data in the projection domain.
The completed projection data are then used with FBP to reconstruct an
image intended to be free of artifacts. This new approach results in an
end-to-end MAR algorithm that is computationally efficient, and therefore
practical, and fits well into existing CT workflows, allowing easy adoption in
existing scanners. Training deep networks can be challenging, and another contribution
of our work is to demonstrate that training data generated using an accurate
X-ray simulation can be used to successfully train the deep network when
combined with transfer learning using limited real data sets. We demonstrate
the effectiveness and potential of our algorithm on simulated and real
examples. Comment: Accepted for publication in IEEE Transactions on Computational
Imaging.
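The "simplistic non-adaptive interpolation-based projection data completion" baseline that this paper improves upon can be shown concretely: in each sinogram row, samples flagged as metal-corrupted are filled by 1-D linear interpolation from the clean neighbours. The toy row and sentinel values below are hypothetical; the adversarial completion network itself is not reproduced here:

```python
import numpy as np

def inpaint_row(row, metal_mask):
    """Fill metal-corrupted projection samples by linear interpolation
    from the uncorrupted samples in the same sinogram row."""
    x = np.arange(len(row))
    filled = np.interp(x, x[~metal_mask], row[~metal_mask])
    return np.where(metal_mask, filled, row)

# Toy sinogram row: 9.9 marks samples attenuated through metal.
row = np.array([1.0, 2.0, 9.9, 9.9, 5.0, 6.0])
mask = np.array([False, False, True, True, False, False])
completed = inpaint_row(row, mask)
```

The paper's approach replaces this per-row interpolation with a learned completion over the whole projection-data domain before the FBP step.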
Deep Convolutional Neural Network-Based Autonomous Drone Navigation
This paper presents a novel approach for aerial drone autonomous navigation
along predetermined paths using only visual input from an onboard camera and
without reliance on a Global Positioning System (GPS). It is based on using a
deep Convolutional Neural Network (CNN) combined with a regressor to output the
drone steering commands. Furthermore, multiple auxiliary navigation paths that
form a navigation envelope are used for data augmentation to make the system
adaptable to real-life deployment scenarios. The approach is suitable for
automating drone navigation in applications that exhibit regular trips or
visits to the same locations, such as environmental and desertification monitoring,
parcel/aid delivery and drone-based wireless internet delivery. In this case,
the proposed algorithm replaces human operators, enhances accuracy of GPS-based
map navigation, alleviates problems related to GPS-spoofing and enables
navigation in GPS-denied environments. Our system is tested in two scenarios
using the Unreal Engine-based AirSim plugin for drone simulation with promising
results: an average cross-track distance of less than 1.4 meters and a mean
waypoint minimum distance of less than 1 meter.
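The average cross-track distance metric quoted above can be computed as the mean distance from each flown position to the nearest segment of the predetermined path; the path and trajectory coordinates below are invented for illustration:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the line segment a-b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def avg_cross_track(trajectory, path):
    """Mean distance from each flown position to its nearest path segment."""
    return float(np.mean([
        min(point_to_segment(p, a, b) for a, b in zip(path, path[1:]))
        for p in trajectory
    ]))

# Hypothetical L-shaped reference path and a slightly offset flown trajectory.
path = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
flown = [(2.0, 1.0), (5.0, -1.0), (11.0, 5.0)]
err = avg_cross_track(flown, path)
```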
A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering
The growing demand for industrial, automotive and service robots presents a
challenge to the centralized Cloud Robotics model in terms of privacy,
security, latency, bandwidth, and reliability. In this paper, we present a `Fog
Robotics' approach to deep robot learning that distributes compute, storage and
networking resources between the Cloud and the Edge in a federated manner. Deep
models are trained on non-private (public) synthetic images in the Cloud; the
models are adapted to the private real images of the environment at the Edge
within a trusted network and subsequently, deployed as a service for
low-latency and secure inference/prediction for other robots in the network. We
apply this approach to surface decluttering, where a mobile robot picks and
sorts objects from a cluttered floor by learning a deep object recognition and
a grasp planning model. Experiments suggest that Fog Robotics can improve
performance by sim-to-real domain adaptation in comparison to exclusively using
Cloud or Edge resources, while reducing the inference cycle time by 4× to
successfully declutter 86% of objects over 213 attempts. Comment: IEEE International Conference on Robotics and Automation, ICRA, 201
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, that covers inter alia
rule based systems, model-based systems, case based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple-data-stream environment is explored, and a range of techniques
is surveyed, covering model-based approaches, `programmed' AI and machine-learning
paradigms. It is found that, in general, correlation is best achieved via rule-based approaches,
but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
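The two approaches and their hybridisation can be caricatured in a few lines: a signature set encodes known misuses (prone to false negatives on new attacks), while a learned frequency profile of normal events flags deviations (prone to false positives). The event names and threshold below are invented for the sketch:

```python
from collections import Counter

class HybridDetector:
    """Rule-based signatures for known misuses combined with a learned
    frequency profile of normal events for unknown ones."""

    def __init__(self, signatures, normal_events, rare_threshold=0.01):
        self.signatures = set(signatures)        # programmed knowledge base
        counts = Counter(normal_events)          # learned normal behaviour
        total = sum(counts.values())
        self.freq = {e: c / total for e, c in counts.items()}
        self.rare = rare_threshold

    def check(self, event):
        if event in self.signatures:
            return "misuse"       # matches an encoded known misuse
        if self.freq.get(event, 0.0) < self.rare:
            return "anomaly"      # deviates from learned normal behaviour
        return "normal"

# Hypothetical signature plus a normal-traffic training log.
det = HybridDetector({"root_shell"}, ["http"] * 90 + ["ssh"] * 10)
```

The report's more ambitious hybrids also let the two components update each other, e.g. promoting confirmed anomalies into new signatures.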
Reconstruction of C&C Channel for P2P Botnet
Breaking down botnets has always been a big challenge. In P2P botnets, the
robustness of C&C channels is increased and the detection of the botmaster is harder.
In this paper, we propose a probabilistic method to reconstruct the topologies
of the C&C channel for P2P botnets. Due to the geographic dispersion of P2P
botnet members, it is not possible to supervise all members, and not all the data
necessary for applying other graph reconstruction methods exist. So
far, no general method has been introduced to reconstruct the C&C channel topology
for all types of P2P botnets. In our method, the probability of connections
between bots is estimated by using the inaccurate receiving times of several
cascades, network model parameters of C&C channel, and end-to-end delay
distribution of the Internet. The receiving times can be collected by observing
the external reaction of bots to commands. The results of our simulations show
that more than 90% of the edges in a 1000-member network with a mean node degree
of 50 have been accurately estimated by collecting the inaccurate receiving times
of 22 cascades. When the receiving times of only half of the bots are
collected, the same estimation accuracy is obtained using 95 cascades. Comment: This paper is a preprint of a paper accepted by IET Communications
and is subject to Institution of Engineering and Technology Copyright. When
the final version is published, the copy of record will be available at the
IET Digital Library.
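The core intuition, that a receiving-time gap consistent with a single overlay hop hints at an edge between two bots, can be sketched as a simple pairwise scoring rule. This is a toy illustration only: the paper's actual probabilistic estimator uses the C&C network model parameters and the Internet's end-to-end delay distribution, neither of which is modelled here, and the delay window and cascade data below are assumptions:

```python
import itertools

def edge_scores(cascades, d_min=0.05, d_max=0.15):
    """Score each node pair by how often its receiving-time gap is
    consistent with a single overlay hop, across command cascades."""
    nodes = sorted(cascades[0])
    scores = {}
    for i, j in itertools.combinations(nodes, 2):
        hits = sum(1 for t in cascades if d_min <= abs(t[i] - t[j]) <= d_max)
        scores[(i, j)] = hits / len(cascades)
    return scores

# Two toy cascades over a 3-bot chain 0-1-2 with ~0.1 s per hop,
# observed (via external bot reactions) from opposite injection points.
cascades = [
    {0: 0.0, 1: 0.1, 2: 0.2},   # command injected near bot 0
    {2: 0.0, 1: 0.1, 0: 0.2},   # command injected near bot 2
]
scores = edge_scores(cascades)
edges = {pair for pair, s in scores.items() if s > 0.5}
```

With cascades arriving from different directions, two-hop pairs stop looking like one-hop pairs, which is why collecting more cascades sharpens the reconstruction.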
A Design Blueprint for Virtual Organizations in a Service Oriented Landscape
"United we stand, divided we fall" is a well known saying. We are living in
the era of virtual collaborations. Advancement on conceptual and technological
level has enhanced the way people communicate. Everything-as-a-Service, once a
dream, is now becoming a reality.
The nature of problems has also changed over time. Today, e-Collaborations
are applied to all possible domains. Extensive data and computing resources
are needed, and assistance from human experts is also becoming essential. This
puts a great responsibility on Information Technology (IT) researchers and
developers to provide generic platforms where users can easily communicate and
solve their problems. To realize this concept, distributed computing has
offered many paradigms, e.g. cluster, grid, cloud computing. Virtual
Organization (VO) is a logical orchestration of globally dispersed resources to
achieve common goals.
Existing paradigms and technologies are used to form Virtual Organizations, but
a lack of standards has remained a critical issue for the last two decades. Our research
endeavor focuses on developing a design blueprint for the Virtual Organization
building process. The proposed standardization process is a two-phase activity.
The first phase provides a requirements analysis and the second phase presents a
Reference Architecture for Virtual Organization (RAVO). This form of
standardization is chosen to accommodate both technological and paradigm shift.
We categorize our efforts into two parts. The first part consists of a pattern to
identify the requirements and components of a Virtual Organization. The second part
details a generic framework based on the concept of Everything-as-a-Service.
The ISTI Rapid Response on Exploring Cloud Computing 2018
This report describes eighteen projects that explored how commercial cloud
computing services can be utilized for scientific computation at national
laboratories. These demonstrations ranged from deploying proprietary software
in a cloud environment to leveraging established cloud-based analytics
workflows for processing scientific datasets. By and large, the projects were
successful and collectively they suggest that cloud computing can be a valuable
computational resource for scientific computation at national laboratories.