15 research outputs found

    A Survey on Brain Tumor Classification & Detection Techniques

    A brain tumor is a cancerous or non-cancerous mass or growth of abnormal cells in the brain. Research shows that in developed countries, the main cause of death among people with brain tumors is incorrect detection. X-ray, CT, and MRI are used for initial diagnosis of the cancer. Today, Magnetic Resonance Imaging (MRI) is the most widely used technique for brain tumor detection because it provides more detail than CT. Classifying a tumor as cancerous (malignant) or non-cancerous (benign) is a very difficult task due to the complexity of brain tissue. This paper reviews various techniques for the classification and detection of brain tumors using Magnetic Resonance Imaging (MRI).

    IoT Malware Network Traffic Classification using Visual Representation and Deep Learning

    With the increase of IoT devices and technologies coming into service, malware has risen as a challenging threat, with increased infection rates and levels of sophistication. Without strong security mechanisms, a huge amount of sensitive data is exposed to vulnerabilities and can therefore easily be abused by cybercriminals to perform illegal activities. Thus, advanced network security mechanisms capable of performing real-time traffic analysis and mitigation of malicious traffic are required. To address this challenge, we propose a novel IoT malware traffic analysis approach using deep learning and visual representation for faster detection and classification of new malware (zero-day malware). Detection of malicious network traffic in the proposed approach works at the packet level, significantly reducing detection time, with promising results due to the deep learning technologies used. To evaluate the performance of the proposed method, a dataset of 1000 pcap files of normal and malware traffic was constructed from different network traffic sources. The experimental results of the Residual Neural Network (ResNet50) are very promising, providing a 94.50% accuracy rate for the detection of malware traffic. Comment: 10 pages, 5 figures, 2 tables
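As a rough illustration of the visual-representation idea, the sketch below maps raw packet bytes onto a fixed-size grayscale image that a CNN such as ResNet50 could consume. The 28x28 side length, truncation, and zero-padding are assumptions for illustration; the paper's exact preprocessing is not specified here.

```python
import numpy as np

def bytes_to_image(payload: bytes, side: int = 28) -> np.ndarray:
    """Map raw packet bytes to a fixed-size grayscale image.

    Bytes beyond side*side are truncated; shorter payloads are
    zero-padded, so every packet yields the same input shape.
    """
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    img = np.zeros(side * side, dtype=np.uint8)
    img[: buf.size] = buf
    return img.reshape(side, side)

# A toy "packet": 100 bytes become a 28x28 image, zero-padded at the end.
img = bytes_to_image(bytes(range(100)))
```

Because every packet becomes the same input shape regardless of length, a stream of packets can be classified one image at a time, which is what makes packet-level detection fast.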

    Reducing Catastrophic Forgetting in Self-Organizing Maps

    An agent capable of continual or lifelong learning is able to learn continuously from potentially infinite streams of sensory pattern data. One major historic difficulty in building agents capable of such learning is that neural systems struggle to retain previously acquired knowledge when learning from new data samples. This problem is known as catastrophic forgetting and remains unsolved in machine learning to this day. To overcome catastrophic forgetting, different approaches have been proposed. One major line of thought advocates memory buffers that store past data, which is then used to randomly retrain the model and improve memory retention. However, storing and giving access to previous physical data points creates a variety of practical difficulties, particularly growing memory storage costs. In this work, we propose an alternative way to tackle catastrophic forgetting, inspired by and building on a classical neural model, the self-organizing map (SOM), a form of unsupervised clustering. Although the SOM has the potential to combat forgetting through its pattern-specializing units, we uncover that it suffers from the same problem, and the forgetting becomes worse when the SOM is trained in a task-incremental fashion. To mitigate this, we propose a generalization of the SOM, the continual SOM (c-SOM), which introduces several novel mechanisms to improve memory retention -- new decay functions and generative resampling schemes that facilitate generative replay in the model. We perform extensive experiments on split-MNIST, demonstrating that the c-SOM significantly improves over the classical SOM. Additionally, we introduce a new performance metric, alpha_mem, to measure the efficacy of SOMs trained in a task-incremental fashion, providing a benchmark for other competitive learning models.
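For context, a minimal sketch of the classical SOM update that the c-SOM generalizes: each sample pulls every unit toward it, weighted by a Gaussian neighborhood around the best-matching unit, while the learning rate and neighborhood width decay over time (the grid size, decay constants, and exponential schedules here are illustrative choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 5                                  # a 5x5 map of units
W = rng.random((grid * grid, 2))          # unit weights in input space
coords = np.array([(i, j) for i in range(grid) for j in range(grid)])

def som_step(W, x, t, lr0=0.5, sigma0=2.0, tau=100.0):
    """One classical SOM update: find the best-matching unit (BMU),
    then pull every unit toward x, weighted by a Gaussian neighborhood
    around the BMU; learning rate and width decay with time t."""
    lr = lr0 * np.exp(-t / tau)
    sigma = sigma0 * np.exp(-t / tau)
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)   # grid distances
    h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood weights
    return W + lr * h[:, None] * (x - W)

for t, x in enumerate(rng.random((200, 2))):
    W = som_step(W, x, t)
```

The time-based decay is exactly what hurts retention under task-incremental training: by the time a later task arrives, the learning rate has shrunk globally, so the c-SOM's per-unit decay functions and generative resampling are aimed at this weakness.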

    A novel dynamic maximum demand reduction controller of battery energy storage system for educational buildings in Malaysia

    Maximum Demand (MD) management is essential to help businesses and electricity companies save on electricity bills and operating costs. Among the different MD reduction techniques, demand response with battery energy storage systems (BESS) provides the most flexible peak reduction solution for various markets. One of the major challenges is the optimization of the demand threshold that controls the charging and discharging powers of the BESS. To increase their tolerance to day-ahead prediction errors, state-of-the-art controllers utilize complex prediction models and rigid parameters that are determined from long-term historical data. However, long-term historical data may be unavailable at implementation time, and rigid parameters leave such controllers unable to adapt to evolving load patterns. Hence, this research work proposes a novel incremental DB-SOINN-R prediction model and a novel dynamic two-stage MD reduction controller. The incremental learning capability of DB-SOINN-R allows the model to be deployed as soon as possible and improves its prediction accuracy as time progresses. The proposed DB-SOINN-R is compared with five models: a feedforward neural network, a deep neural network with long short-term memory, support vector regression, ESOINN, and k-nearest-neighbour (kNN) regression. They are tested on day-ahead and one-hour-ahead load predictions using two different datasets. The proposed DB-SOINN-R has the highest prediction accuracy among all models with incremental learning on both datasets. The novel dynamic two-stage MD reduction controller of the BESS incorporates one-hour-ahead load profiles to refine, where necessary, the threshold found from day-ahead load profiles, preventing peak reduction failure with no rigid parameters required.
Compared to conventional fixed-threshold, single-stage, and fuzzy controllers, the proposed two-stage controller achieves up to 6.82% higher average maximum demand reduction and up to 306.23% higher total maximum demand charge savings on two different datasets. The proposed controller also achieves a 0% peak demand reduction failure rate on both datasets. The real-world performance of the proposed two-stage MD reduction controller, including the proposed DB-SOINN-R models, is validated in a scaled-down experimental setup. Results show negligible differences of 0.5% in daily PDRP and MAPE between experimental and simulation results. The controller therefore fulfils the aim of this research work: a controller that is easy to implement, requires minimal historical data to begin operation, and delivers reliable MD reduction performance.
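To make the threshold mechanism concrete, here is a minimal sketch of the basic peak-shaving rule that threshold-based BESS controllers build on: discharge to clip load above the threshold and recharge in the valleys, within state-of-charge and power limits. This is the generic single-threshold idea, not the paper's two-stage DB-SOINN-R design, and all ratings below are made-up numbers.

```python
def bess_dispatch(load_kw, threshold_kw, soc_kwh, cap_kwh, p_max_kw, dt_h=0.5):
    """One dispatch interval of a simple threshold controller.

    Returns (net grid demand in kW, new state of charge in kWh)."""
    if load_kw > threshold_kw:
        # Discharge to shave the peak, limited by power rating and stored energy.
        p = min(load_kw - threshold_kw, p_max_kw, soc_kwh / dt_h)
        return load_kw - p, soc_kwh - p * dt_h
    # Recharge in the valley, limited by power rating and remaining capacity.
    p = min(threshold_kw - load_kw, p_max_kw, (cap_kwh - soc_kwh) / dt_h)
    return load_kw + p, soc_kwh + p * dt_h

# Half-hourly profile with a 120 kW peak, clipped toward a 100 kW threshold.
soc, shaved = 50.0, []
for load in [80, 95, 120, 110, 90, 70]:
    net, soc = bess_dispatch(load, 100.0, soc, cap_kwh=100.0, p_max_kw=25.0)
    shaved.append(net)
```

The fragility the abstract targets is visible here: if the threshold is set too low for the day's actual peak, the battery empties mid-peak and the shave fails, which is why refining the threshold with one-hour-ahead predictions matters.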


    Prototype Regularized Manifold Regularization Technique for Semi-Supervised Online Extreme Learning Machine

    Data streaming applications such as the Internet of Things (IoT) require processing or predicting from sequential data from various sensors. However, most of the data are unlabeled, making fully supervised learning algorithms impossible to apply. The online manifold regularization approach allows sequential learning from partially labeled data, which is useful for sequential learning in environments with scarcely labeled data. Unfortunately, the manifold regularization technique does not work out of the box, as it requires determining the radial basis function (RBF) kernel width parameter. The RBF kernel width directly impacts performance, since it is used to inform the model of the class to which each piece of data most likely belongs. The width parameter is often determined offline via hyperparameter search, which requires a vast amount of labeled data. This limits its utility in applications where collecting a great deal of labeled data is difficult, such as data stream mining. To address this issue, we propose eliminating the RBF kernel from the manifold regularization technique altogether by combining it with a prototype learning method, which uses a finite set of prototypes to approximate the entire data set. Instead of relying on the RBF kernel, this approach queries the prototype-based learner for the samples most similar to each sample. It thus no longer necessitates the RBF kernel, which improves its practicality. In experiments on benchmark data sets, the proposed approach learns faster and achieves higher classification performance than other manifold regularization techniques. The results show that the proposed approach performs well even without the RBF kernel, improving the practicality of manifold regularization techniques for semi-supervised learning.
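The contrast the abstract draws can be sketched in a few lines: RBF-kernel similarity depends on a width parameter that must be tuned, whereas a prototype-based query only ranks a finite prototype set by distance. This is an illustrative toy, not the paper's algorithm; the data points and the choice of k are made up.

```python
import numpy as np

def rbf_weights(x, X, width):
    """Similarity of x to each row of X under an RBF kernel: the result
    depends strongly on `width`, which normally must be tuned offline
    with labeled data."""
    return np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))

def prototype_neighbors(x, prototypes, k=2):
    """Prototype-based alternative: rank a finite prototype set by
    distance and keep the k closest -- no kernel width needed."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return np.argsort(d)[:k]

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
w_narrow = rbf_weights(np.zeros(2), X, width=0.05)   # nearby point almost ignored
w_wide = rbf_weights(np.zeros(2), X, width=5.0)      # far point weighted heavily
nn = prototype_neighbors(np.zeros(2), X, k=2)        # ranking needs no width
```

The two kernel calls give very different neighbor weightings for the same data, which is exactly the sensitivity the prototype-based query sidesteps.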

    Incremental learning algorithms and applications

    Incremental learning refers to learning from streaming data, which arrive over time, with limited memory resources and, ideally, without sacrificing model accuracy. This setting fits different application scenarios where lifelong learning is relevant, e.g. due to changing environments, and it offers an elegant scheme for big data processing by means of its sequential treatment. In this contribution, we formalise the concept of incremental learning, discuss particular challenges which arise in this setting, and give an overview of popular approaches, their theoretical foundations, and applications which have emerged in recent years.
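The defining constraint formalised above, each sample seen once under constant memory, can be shown with a minimal online stochastic gradient descent sketch (the model, learning rate, and synthetic stream are illustrative, not from the survey):

```python
import random

# Online SGD for a 1-D linear model y = a*x + b: each sample is seen
# once and then discarded, so memory use is constant no matter how
# long the stream runs.
random.seed(0)
a, b, lr = 0.0, 0.0, 0.05

def stream(n):
    """Synthetic data stream with ground truth a=3, b=1."""
    for _ in range(n):
        x = random.uniform(-1, 1)
        yield x, 3.0 * x + 1.0

for x, y in stream(5000):
    err = (a * x + b) - y    # prediction error on this single sample
    a -= lr * err * x        # incremental gradient step on a
    b -= lr * err            # incremental gradient step on b
```

Nothing is stored between iterations except the two parameters, which is the memory profile that distinguishes incremental learning from batch training.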

    Class-incremental lifelong object learning for domestic robots

    Traditionally, robots have been confined to settings where they operate in isolation, in highly controlled and structured environments, to execute well-defined, non-varying tasks. As a result, they usually operate without the need to perceive their surroundings or to adapt to changing stimuli. However, as robots start to move towards human-centred environments and share the physical space with people, there is an urgent need to endow them with the flexibility to learn and adapt, given the changing nature of the stimuli they receive and the evolving requirements of their users. Standard machine learning is not suitable for these types of applications because it operates under the assumption that data samples are independent and identically distributed, and it requires access to all the data in advance. If any of these assumptions is broken, the model fails catastrophically, i.e., either it does not learn or it forgets all that was previously learned. Therefore, different strategies are required to address this problem. The focus of this thesis is on lifelong object learning, whereby a model is able to learn from data that becomes available over time. In particular, we address the problem of class-incremental learning, with an emphasis on algorithms that can enable interactive learning with a user. In class-incremental learning, models learn from sequential data batches where each batch ideally contains samples from a single class. The emphasis on interactive learning capabilities poses additional requirements in terms of the speed with which model updates are performed, as well as how the interaction is handled. The work presented in this thesis can be divided into two main lines of work. First, we propose two versions of a lifelong learning algorithm composed of a feature extractor based on pre-trained residual networks, an array of growing self-organising networks, and a classifier.
Self-organising networks are able to adapt their structure based on the input data distribution and learn representative prototypes of the data. These prototypes can then be used to train a classifier. The proposed approaches are evaluated on various benchmarks under several conditions, and the results show that they outperform competing approaches in each case. Second, we propose a robot architecture to address lifelong object learning through interactions with a human partner using natural language. The architecture consists of an object segmentation, tracking and preprocessing pipeline, a dialogue system, and a learning module based on the algorithm developed in the first part of the thesis. Finally, the thesis also includes an exploration of the contributions that different preprocessing operations make to performance when learning from both RGB and depth images.
James Watt Scholarship
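A minimal sketch of how prototypes learned by such networks can serve class-incremental classification: a sample takes the class of its nearest prototype, and a new class is added by appending prototypes only, without retraining existing ones. The prototypes, labels, and feature vectors below are invented for illustration; the thesis's actual features come from pre-trained residual networks.

```python
import numpy as np

def predict(x, prototypes, labels):
    """Nearest-prototype classification: label a sample with the class
    of its closest prototype (in the pipeline above, prototypes would
    come from the growing self-organising networks)."""
    return labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]

# Hypothetical prototypes for two known object classes.
prototypes = np.array([[0.0, 0.0], [0.2, 0.1], [4.0, 4.0]])
labels = np.array(["mug", "mug", "bottle"])
pred_old = predict(np.array([0.1, 0.1]), prototypes, labels)

# Class-incremental step: a new class is introduced by appending its
# prototypes only -- nothing already learned is modified.
prototypes = np.vstack([prototypes, [[-3.0, 2.0]]])
labels = np.append(labels, "plate")
pred_new = predict(np.array([-2.9, 2.1]), prototypes, labels)
```

Because old prototypes are untouched when a class is added, earlier classes are not overwritten, which is the property that makes prototype-based models attractive for interactive, per-class teaching by a user.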

    Novel neural approaches to data topology analysis and telemedicine

    The abstract is in the attachment. Subject: Ingegneria Elettrica (Electrical Engineering). Randazzo, Vincenzo

    Statistical feature ordering for neural-based incremental attribute learning

    In pattern recognition, better classification or regression results usually depend on highly discriminative features (also known as attributes) of datasets. Machine learning plays a significant role in performance improvement for classification and regression. Unlike conventional machine learning approaches, which train all features in one batch using predictive algorithms such as neural networks and genetic algorithms, Incremental Attribute Learning (IAL) is a novel supervised machine learning approach that gradually trains one or more features step by step. Such a strategy enables features with greater discrimination ability to be trained in an earlier step and avoids interference among relevant features. Previous studies have confirmed that IAL is able to generate accurate results with lower error rates. Because the final results can be strongly influenced by the order in which features of differing discrimination ability are trained, finding feature orderings that reduce pattern recognition error rates becomes an important issue in this study. Compared with the applicable yet time-consuming contribution-based feature ordering methods derived in previous studies, more efficient feature ordering approaches for IAL are presented here to tackle classification problems. In the first approach, feature orderings are calculated from statistical correlations between input and output. The second approach is based on mutual information, employing the minimal-redundancy-maximal-relevance criterion (mRMR), a well-known feature selection method, for feature ordering. The third approach builds on Fisher's Linear Discriminant (FLD). Firstly, the Single Discriminability (SD) of features is presented based on FLD, which can cope with both univariate and multivariate output classification problems.
Secondly, a new feature ordering metric called Accumulative Discriminability (AD) is developed based on SD. This metric is designed for IAL classification with dynamic feature dimensions. It computes the multidimensional feature discrimination ability at each step for all imported features, including those imported in previous steps of IAL training. AD can be treated as a metric of accumulative effect, whereas SD only measures the one-dimensional feature discrimination ability at each step. Experimental results show that all three approaches outperform the conventional one-batch training method. Furthermore, AD gives the best results of the three, because it is better suited to the properties of IAL, where the number of features increases over time. Moreover, the combined use of feature ordering and feature selection in IAL is also studied in this thesis. As a pre-process of machine learning for pattern recognition, feature orderings are sometimes inevitably employed together with feature selection. Experimental results show that these integrated approaches sometimes, but not always, outperform non-integrated approaches. Additionally, feature ordering approaches for solving regression problems are also demonstrated in this study. Experimental results show that a proper feature ordering is also one of the key elements in enhancing the accuracy of the results obtained.
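The first approach described above, ordering features by statistical correlation with the output, can be sketched as follows: compute the absolute Pearson correlation of each input feature with the target and sort in descending order, so that more discriminative features enter IAL training earlier. The synthetic data below is invented for illustration and is not from the thesis.

```python
import numpy as np

def correlation_ordering(X, y):
    """Rank features by absolute Pearson correlation with the output;
    the returned indices give the IAL training order, most
    discriminative feature first."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))

# Toy data: feature 1 drives the output, features 0 and 2 are noise.
rng = np.random.default_rng(1)
n = 200
informative = rng.normal(size=n)
noise = rng.normal(size=(n, 2))
X = np.column_stack([noise[:, 0], informative, noise[:, 1]])
y = 2 * informative + 0.1 * rng.normal(size=n)
order = correlation_ordering(X, y)
```

A correlation pass over the data is a single vectorised computation, which is what makes this ordering far cheaper than the contribution-based methods that require repeated model training.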