38 research outputs found
An adaptive model generation technique for model-based video coding
Center for Digital Signal Processing for Multimedia Applications, Department of Electronic and Information Engineering
Dendritic Cells for Anomaly Detection
Artificial immune systems, more specifically the negative selection
algorithm, have previously been applied to intrusion detection. The aim of this
research is to develop an intrusion detection system based on a novel concept
in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen presenting
cells and key to the activation of the human immune system; they collect signals
from the host tissue and correlate these signals with proteins known as antigens. In algorithmic terms,
individual DCs perform multi-sensor data fusion based on time-windows. The
whole population of DCs asynchronously correlates the fused signals with a
secondary data stream. The behaviour of human DCs is abstracted to form the DC
Algorithm (DCA), which is implemented using an immune inspired framework,
libtissue. This system is used to detect context switching for a basic machine
learning dataset and to detect outgoing portscans in real-time. Experimental
results show a significant difference between an outgoing portscan and normal
traffic.
Comment: 8 pages, 10 tables, 4 figures, IEEE Congress on Evolutionary Computation (CEC2006), Vancouver, Canada
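The per-cell signal fusion and population-level antigen scoring described above can be sketched roughly as follows; the signal weights, the migration threshold, and the three input-signal names are illustrative assumptions, not the values used in the paper.

```python
from collections import defaultdict

class DendriticCell:
    """One DC fusing three input signals over its lifetime (a time window)."""
    def __init__(self, migration_threshold=10.0):
        self.threshold = migration_threshold
        self.csm = 0.0      # costimulation: decides when the window closes
        self.mature = 0.0   # accumulated evidence of anomalous context
        self.semi = 0.0     # accumulated evidence of normal context
        self.antigens = []

    def sample(self, antigen, pamp, danger, safe):
        """Fuse one time step of signals; return True once the cell migrates."""
        self.antigens.append(antigen)
        self.csm    += 2.0 * pamp + 1.0 * danger + 2.0 * safe
        self.mature += 2.0 * pamp + 1.0 * danger - 2.0 * safe
        self.semi   += 1.0 * safe
        return self.csm >= self.threshold

    def context(self):
        """Mature (anomalous) context if the mature signal dominates."""
        return self.mature > self.semi

def mcav(presentations):
    """Mature Context Antigen Value: per antigen, the fraction of
    presentations made by cells that migrated in the mature context."""
    counts = defaultdict(lambda: [0, 0])  # antigen -> [total, mature]
    for antigen, is_mature in presentations:
        counts[antigen][0] += 1
        counts[antigen][1] += int(is_mature)
    return {a: m / n for a, (n, m) in counts.items()}
```

An antigen (e.g. a process ID) with an MCAV near 1.0 was mostly presented in anomalous contexts, which is how a portscan would stand out from normal traffic.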
Robust Inversion Methods for Aerosol Spectroscopy
The Fast Aerosol Spectrometer (FASP) is a device for spectral aerosol
measurements. Its purpose is to safely monitor the atmosphere inside a reactor
containment. First we describe the FASP and explain its basic physical laws.
Then we introduce our reconstruction methods for aerosol particle size
distributions designed for the FASP. We extend known existence results for
constrained Tikhonov regularization by uniqueness criteria and use those to
generate reasonable models for the size distributions. We apply a Bayesian
model-selection framework on these pre-generated models. We compare our
algorithm with classical inversion methods using simulated measurements. We
then extend our reconstruction algorithm for two-component aerosols, so that we
can simultaneously retrieve their particle-size distributions and unknown
volume fractions of their two components. Finally we present the results of a
numerical study for the extended algorithm.
Comment: 37 pages, 3 figures
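The classical inversion baseline the abstract compares against can be sketched as plain (unconstrained) Tikhonov regularization; the matrix, data, and regularization parameter below are synthetic assumptions, not FASP quantities.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations:
    (A^T A + lam * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Synthetic forward model: rows map a size distribution x to measurements b.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])   # noise-free data from a known x
```

With lam = 0 this reduces to ordinary least squares; increasing lam trades data fidelity for stability, which is what makes the inversion robust to measurement noise. The paper's constrained variant additionally restricts x (e.g. to nonnegative size distributions), which this sketch omits.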
Testability of Information Leak in the Source Code for Independent Test Organization by Using Back Propagation Algorithm
A strategy for software testing integrates the design of software test cases into a well-planned series of steps that results in successful development of software security. The strategy provides secure source-code testing by an Independent Test Organization (ITO) and describes the steps to be taken, when to take them, and how much effort, time, and resources will be required. It incorporates test planning, test-case design, test execution, collection and evaluation of test results, and detection of leaked information. In this work we discuss the testability of leaked information in source code and how to detect and guard against it inside the ITO. We present a privacy-preserving neural-network learning algorithm to detect and protect against information leaks in source code between two parties: the programmer (source code) and the Independent Test Organization (sensor). We show that our algorithm is secure and that the sensor inside the Independent Test Organization is able to detect and guard against all information leaks in the source code, and we demonstrate the efficiency of our algorithm through experiments on real-world data. We present a new technology for software security using the back-propagation algorithm: a sensor embedded inside the ITO that analyzes the source code. Using this embedded sensor, we can detect and counter, in real time, all attacks or information leaks in the source code. The connection between artificial neural networks and source-code analysis inside an Independent Test Organization provides great help for software security.
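A back-propagation-based leak detector of the kind the abstract describes can be sketched as a one-hidden-layer network trained on code features; the two input features (e.g. counts of outbound network calls and sensitive-file reads) and the toy labels are hypothetical, not the paper's feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_leak_detector(X, y, hidden=4, lr=0.5, epochs=5000):
    """Train a one-hidden-layer network with backpropagation.
    Returns a function mapping feature rows to leak probabilities."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    Y = y[:, None]
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)           # forward pass, hidden layer
        P = sigmoid(H @ W2 + b2)           # predicted leak probability
        G2 = P - Y                         # cross-entropy gradient at output
        G1 = (G2 @ W2.T) * H * (1 - H)     # backpropagate to hidden layer
        W2 -= lr * H.T @ G2 / len(X); b2 -= lr * G2.mean(axis=0)
        W1 -= lr * X.T @ G1 / len(X); b1 -= lr * G1.mean(axis=0)
    return lambda Xn: sigmoid(sigmoid(Xn @ W1 + b1) @ W2 + b2).ravel()

# Toy training data: each row counts the two hypothetical leak indicators;
# label 1 marks a "leaky" code sample.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [3, 1]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)
```

The privacy-preserving protocol between programmer and ITO is out of scope here; this only illustrates the back-propagation learning step the sensor would run.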
An Automatically Tuning Intrusion Detection System
An intrusion detection system (IDS) is a security layer used to detect ongoing intrusive activities in information systems. Traditionally, intrusion detection relies on the extensive knowledge of security experts, in particular on their familiarity with the computer system to be protected. To reduce this dependence, various data-mining and machine-learning techniques have been deployed for intrusion detection. An IDS usually operates in a dynamically changing environment, which forces continuous tuning of the intrusion detection model to maintain sufficient performance. The manual tuning process required by current systems depends on the system operators to work out the tuning solution and to integrate it into the detection model. In this paper, an automatically tuning IDS (ATIDS) is presented. The proposed system automatically tunes the detection model on-the-fly according to the feedback provided by the system operator when false predictions are encountered. The system is evaluated using the KDDCup'99 intrusion detection dataset. Experimental results show that the system achieves up to 35% improvement in terms of misclassification cost compared with a system lacking the tuning feature. If only 10% of false predictions are used to tune the model, the system still achieves about 30% improvement. Moreover, when tuning is not delayed too long, the system can achieve about 20% improvement with only 1.3% of the false predictions used to tune the model. The results of the experiments show that a practical system can be built based on ATIDS: system operators can focus on verifying predictions with low confidence, as only those predictions determined to be false will be used to tune the detection model.
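The operator-feedback loop at the heart of ATIDS can be sketched with a toy linear detector that is nudged whenever a prediction is flagged as false; the perceptron-style update below is an illustrative stand-in for the paper's actual detection model.

```python
import numpy as np

class TunableIDS:
    """Toy stand-in for on-the-fly tuning: a linear detector adjusted
    only when the operator reports one of its predictions as false."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)
        self.b = 0.0

    def predict(self, x):
        """Return 1 (intrusion) or 0 (normal) for a feature vector x."""
        return 1 if self.w @ np.asarray(x, float) + self.b > 0 else 0

    def report_false_prediction(self, x, predicted):
        """Operator feedback: `predicted` was wrong, so shift the decision
        boundary toward the opposite (true) label."""
        step = 1.0 if predicted == 0 else -1.0
        self.w += step * np.asarray(x, float)
        self.b += step
```

Because only operator-verified false predictions trigger updates, the tuning cost scales with the number of mistakes the operator confirms, matching the abstract's point about tuning on as little as 1.3% of false predictions.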
Behavior Profiling of Email
This paper describes the forensic and intelligence analysis capabilities of the Email Mining Toolkit (EMT) under development at the Columbia Intrusion Detection (IDS) Lab. EMT provides the means of loading, parsing and analyzing email logs, including content, in a wide range of formats. Many tools and techniques have been available from the fields of Information Retrieval (IR) and Natural Language Processing (NLP) for analyzing documents of various sorts, including emails. EMT, however, extends these kinds of analyses with an entirely new set of analyses that model "user behavior." EMT thus models the behavior of individual user email accounts, or groups of accounts, including the "social cliques" revealed by a user's email behavior
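One simple form of the user-behavior modeling described above can be sketched as a per-account recipient-frequency profile; the functions and scoring rule below are a hypothetical illustration in the spirit of EMT, not its actual API.

```python
from collections import Counter

def build_profiles(log):
    """Build per-account behavior profiles from a parsed email log,
    given as an iterable of (sender, recipient) pairs."""
    profiles = {}
    for sender, recipient in log:
        profiles.setdefault(sender, Counter())[recipient] += 1
    return profiles

def anomaly_score(profiles, sender, recipient):
    """Score how unusual this recipient is for the sender's account:
    1.0 means never seen before; values near 0 mean a typical recipient."""
    history = profiles.get(sender)
    if not history:
        return 1.0
    return 1.0 - history[recipient] / sum(history.values())
```

A real profile would also cover sending times, attachment behavior, and the "social cliques" the abstract mentions; recipient frequency is just the smallest useful slice of that idea.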
Edge-Based Health Care Monitoring System: Ensemble of Classifier Based Model
A Health Monitoring System (HMS) is an excellent tool that can save lives. It uses transmitters to gather information and transmit it wirelessly to a receiver, making it far more practical than the large equipment most hospitals currently employ, and it checks a patient's health data continuously, 24/7. The primary goal of this research is to develop a three-layered Ensemble of Classifier model on an Edge-based Healthcare Monitoring System (ECEHMS) with a Gauss Iterated Pelican Optimization Algorithm (GIPOA), comprising a data-collection layer, a data-analytics layer, and a presentation layer. In our ECEHMS-GIPOA, the healthcare dataset is collected from the UCI repository. The data-analytics layer performs preprocessing, feature extraction, dimensionality reduction, and classification. Data normalization is done in the preprocessing step. Statistical features (min/max, standard deviation, mean, median), improved higher-order statistical features (skewness, kurtosis, entropy), and technical-indicator-based features are extracted in the feature-extraction step. Improved Fuzzy C-means clustering (FCM) handles dimensionality reduction by clustering the appropriate feature set from the extracted features. An ensemble model comprising a Deep Maxout Network (DMN), an Improved Deep Belief Network (IDBN), and a Recurrent Neural Network (RNN) is introduced to predict the disease stage. The enhancement in prediction/classification accuracy is ensured via optimal training, for which the GIPOA is introduced. Finally, ECEHMS-GIPOA performance is compared with conventional approaches such as ASO, BWO, SLO, SSO, FPA, and POA.
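The final combination step of such an ensemble can be sketched as probability averaging over the base classifiers; the three stub models below merely stand in for the DMN, IDBN, and RNN, whose internals the abstract does not give.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the class-probability vectors produced by each base model
    and return (predicted class index, averaged probabilities)."""
    probs = np.mean([model(x) for model in models], axis=0)
    return int(np.argmax(probs)), probs

# Hypothetical stubs returning fixed [P(healthy), P(disease)] vectors,
# standing in for trained DMN, IDBN, and RNN classifiers.
models = [
    lambda x: np.array([0.6, 0.4]),  # stand-in for DMN
    lambda x: np.array([0.2, 0.8]),  # stand-in for IDBN
    lambda x: np.array([0.3, 0.7]),  # stand-in for RNN
]
```

Averaging probabilities (soft voting) rather than taking a majority of hard labels lets a confident minority model influence the result, which is one common design choice for heterogeneous ensembles like this one.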