
    GRASE: Granulometry Analysis with Semi Eager Classifier to Detect Malware

    Technological advancement in communication, leading to 5G, motivates connecting everything to the internet, including 'devices', a technology named the Web of Things (WoT). The community benefits from this large-scale network, which allows monitoring and controlling of physical devices. However, this often comes at the cost of security, as MALicious softWARE (malware) developers try to invade the network; for them, these devices are like a 'backdoor' providing easy 'entry'. To stop invaders from entering the network, identifying malware and its variants is of great significance for cyberspace. Traditional methods of malware detection, such as static and dynamic analysis, can detect malware but fall short against newer techniques used by malware developers, such as obfuscation, polymorphism and encryption. A machine learning approach in which a classifier is trained on handcrafted features is not potent against these techniques and demands considerable feature-engineering effort. The paper proposes malware classification using a visualization methodology in which disassembled malware code is transformed into greyscale images. It presents the efficacy of the granulometry texture-analysis technique for improving malware classification. Furthermore, a Semi Eager (SemiE) classifier, a combination of eager and lazy learning, is used to obtain robust classification of malware families. The outcome of the experiment is promising, since the proposed technique requires less training time to learn the semantics of higher-level malicious behaviours, and identifying malware (the testing phase) is also faster. Benchmark databases, Malimg and the Microsoft Malware Classification Challenge (BIG-2015), were used to analyse the performance of the system. Overall average classification accuracies of 99.03% and 99.11% were achieved, respectively.
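The granulometry idea above can be sketched with standard morphological tools: a pattern spectrum built from greyscale openings of increasing size yields a small texture-feature vector per image. This is a minimal illustration, not the paper's exact feature set:

```python
import numpy as np
from scipy.ndimage import grey_opening

def granulometry(img, max_size=8):
    """Pattern spectrum: fraction of image 'mass' surviving morphological
    openings with flat square structuring elements of growing size."""
    total = img.sum()
    curve = []
    for s in range(1, max_size + 1):
        opened = grey_opening(img, size=(2 * s + 1, 2 * s + 1))
        curve.append(opened.sum() / total)  # non-increasing in s
    return np.array(curve)

# toy "malware image": random bytes reshaped into a greyscale grid
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
features = granulometry(img)
print(features.shape)  # (8,)
```

The resulting curve describes the size distribution of bright texture elements; in a pipeline like the one described, it would be fed (possibly alongside other texture descriptors) to the classifier.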

    Advanced framework for epilepsy detection through image-based EEG signal analysis

    Background: Recurrent and unpredictable seizures characterize epilepsy, a neurological disorder affecting millions worldwide. Epilepsy diagnosis is crucial for timely treatment and better outcomes. Electroencephalography (EEG) time-series data analysis is essential for epilepsy diagnosis and surveillance. Complex signal-processing methods used in traditional EEG analysis are computationally demanding and difficult to generalize across patients. Researchers are therefore using machine learning to improve epilepsy detection, particularly visual feature extraction from EEG time-series data. Objective: This study examines the application of a Gramian Angular Summation Field (GASF) approach for the analysis of EEG signals. It also explores the utilization of image features, specifically the Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) techniques, for epilepsy detection in EEG data. Methods: The proposed methodology encompasses the transformation of EEG signals into images based on GASF, followed by the extraction of features using SIFT and ORB and the selection of relevant features. A state-of-the-art machine learning classifier is employed to classify GASF images into two categories: normal EEG patterns and focal EEG patterns. Bern-Barcelona EEG recordings were used to test the proposed method. Results: The method classifies EEG signals with 96% accuracy using SIFT features and 94% using ORB features. The Random Forest (RF) classifier surpasses state-of-the-art approaches in precision, recall, F1-score, specificity, and Area Under Curve (AUC). The Receiver Operating Characteristic (ROC) curve shows that Random Forest outperforms Support Vector Machine (SVM) and k-Nearest Neighbors (k-NN) classifiers. Significance: The suggested method has many advantages over the time-series EEG analyses and machine learning classifiers used in previous epilepsy detection studies. A novel image-based preprocessing pipeline, using GASF for robust image synthesis and SIFT and ORB for feature extraction, is presented here. The study found that the suggested method can accurately discriminate between normal and focal EEG signals, improving patient outcomes through early and accurate epilepsy diagnosis.
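The GASF encoding described in the Methods can be written in a few lines of NumPy. This is a minimal sketch of the standard transform; the study's windowing and image size may differ:

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field: encode a 1-D signal as a 2-D image.
    Steps: rescale to [-1, 1], map samples to angles, take cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                # polar (angular) encoding
    return np.cos(phi[:, None] + phi[None, :])        # symmetric image

# toy EEG-like signal: slow rhythm plus a fast component
t = np.linspace(0, 1, 128)
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
img = gasf(signal)
print(img.shape)  # (128, 128)
```

The resulting image preserves temporal dependencies along its diagonal, which is what allows general-purpose image descriptors such as SIFT and ORB to be applied to the signal.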

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence are enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. 
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
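The core of the image velocimetry used throughout (UAV velocimetry and LSPIV alike) is cross-correlating an image patch between consecutive frames to find its displacement, which divided by the frame interval gives a surface velocity. A minimal single-window sketch follows; real LSPIV tools add interrogation-window grids, sub-pixel peak fitting and outlier filtering:

```python
import numpy as np
from scipy.signal import correlate2d

def piv_displacement(frame_a, frame_b):
    """Estimate the integer-pixel displacement of a tracer pattern between
    two frames via 2-D cross-correlation of mean-subtracted intensities."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = correlate2d(b, a, mode="full")
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    return peak - (np.array(a.shape) - 1)   # (dy, dx) at the correlation peak

# synthetic tracer: a Gaussian blob advected 3 px down and 5 px right
yy, xx = np.mgrid[0:48, 0:48]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
frame_a = blob(20, 20)
frame_b = blob(23, 25)
dy, dx = piv_displacement(frame_a, frame_b)
print(dy, dx)  # → 3 5
```

Dividing `(dy, dx)` by the inter-frame time and multiplying by the ground sampling distance converts the pixel shift to a surface velocity vector.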

    Lip2Speech: lightweight multi-speaker speech reconstruction with Gabor features

    In environments characterised by noise or the absence of audio signals, visual cues, notably facial and lip movements, serve as valuable substitutes for missing or corrupted speech signals. In these scenarios, speech reconstruction can potentially generate speech from visual data. Recent advancements in this domain have predominantly relied on end-to-end deep learning models, like Convolutional Neural Networks (CNN) or Generative Adversarial Networks (GAN). However, these models are encumbered by their intricate and opaque architectures, coupled with their lack of speaker independence. Consequently, achieving multi-speaker speech reconstruction without supplementary information is challenging. This research introduces an innovative Gabor-based speech reconstruction system tailored for lightweight and efficient multi-speaker speech restoration. Using our Gabor feature extraction technique, we propose two novel models: GaborCNN2Speech and GaborFea2Speech. These models employ a rapid Gabor feature extraction method to derive low-dimensional mouth-region features, encompassing filtered Gabor mouth images and low-dimensional Gabor features as visual inputs. An encoded spectrogram serves as the audio target, and a Long Short-Term Memory (LSTM)-based model is harnessed to generate coherent speech output. Through comprehensive experiments conducted on the GRID corpus, our proposed Gabor-based models have showcased superior performance in sentence and vocabulary reconstruction when compared to traditional end-to-end CNN models. These models stand out for their lightweight design and rapid processing capabilities. Notably, the GaborFea2Speech model presented in this study achieves robust multi-speaker speech reconstruction without necessitating supplementary information, thereby marking a significant milestone in the field of speech reconstruction.
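Gabor mouth features of the kind described can be sketched as a small filter bank whose pooled responses form a low-dimensional visual input. The kernel parameters and pooling below are illustrative, not the paper's configuration:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real-valued Gabor kernel: a sinusoid windowed by a Gaussian,
    tuned to orientation `theta` and wavelength `lam`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam + psi))

# filter a toy "mouth region" at 4 orientations, pool to a short feature vector
rng = np.random.default_rng(1)
mouth = rng.random((32, 32))
features = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    resp = convolve2d(mouth, gabor_kernel(theta=theta), mode="same")
    features.extend([resp.mean(), resp.std()])  # simple pooled statistics
features = np.array(features)
print(features.shape)  # (8,)
```

In a Lip2Speech-style pipeline, a per-frame vector like this (or the filtered mouth images themselves) would be the visual input sequence fed to the LSTM.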

    Land use classification in mine-agriculture compound area based on multi-feature random forest: a case study of Peixian

    Introduction: Land use classification plays a critical role in analyzing land use/cover change (LUCC). Remote sensing land use classification based on machine learning algorithms is one of the hot spots in current remote sensing technology research. The diversity of surface objects and the complexity of their distribution in mixed mining and agricultural areas have brought challenges to the classification of traditional remote sensing images, and the rich information contained in remote sensing images has not been fully utilized. Methods: A quantitative difference index was proposed to quantify and select the texture features of easily confused land types, a random forest (RF) classification method with multi-feature combination classification schemes for remote sensing images was developed, and land use information for the mine-agriculture compound area of Peixian in Xuzhou, China was extracted. Results: The quantitative difference index proved effective in reducing the dimensionality of the feature parameters, reducing the optimal feature scheme from 57 to 22 dimensions. Among the four classification methods based on the optimal feature classification scheme, the RF algorithm emerged as the most efficient, with a classification accuracy of 92.38% and a Kappa coefficient of 0.90, outperforming the support vector machine (SVM), classification and regression tree (CART), and neural network (NN) algorithms. Conclusion: The findings indicate that the quantitative difference index is a novel and effective approach for discerning distinct texture features among various land types. It plays a crucial role in the selection and optimization of texture features in multispectral remote sensing imagery. The RF classification method, leveraging a multi-feature combination, provides fresh methodological support for the precise classification of intricate ground objects within the mine-agriculture compound area.
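As an illustration of the feature-selection step, one plausible per-feature "difference index" is the class-mean gap normalized by the pooled spread. The paper's exact index definition may differ, but the workflow (score 57 candidate features for two easily confused classes, keep the top 22) looks like this:

```python
import numpy as np

def difference_index(feat_a, feat_b):
    """Hypothetical per-feature separability score between two easily
    confused land-cover classes: class-mean gap over pooled spread.
    (Illustrative only; the paper's index is defined differently in detail.)"""
    gap = np.abs(feat_a.mean(axis=0) - feat_b.mean(axis=0))
    spread = feat_a.std(axis=0) + feat_b.std(axis=0) + 1e-12
    return gap / spread

rng = np.random.default_rng(2)
cropland = rng.normal(0.0, 1.0, size=(200, 57))   # 57 candidate features
mine_area = rng.normal(0.0, 1.0, size=(200, 57))
mine_area[:, :22] += 3.0                          # only 22 features separate classes
scores = difference_index(cropland, mine_area)
keep = np.argsort(scores)[::-1][:22]              # reduce 57 -> 22 dimensions
print(len(keep))  # 22
```

The retained feature subset would then be stacked with the spectral bands and passed to the RF classifier.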

    Fast Learning Radiance Fields by Shooting Much Fewer Rays

    Learning radiance fields has shown remarkable results for novel view synthesis. The learning procedure usually costs lots of time, which has motivated the latest methods to speed it up by learning without neural networks or by using more efficient data structures. However, these specially designed approaches do not work for most radiance-field-based methods. To resolve this issue, we introduce a general strategy to speed up the learning procedure for almost all radiance-field-based methods. Our key idea is to reduce redundancy by shooting much fewer rays in the multi-view volume rendering procedure, which is the base of almost all radiance-field-based methods. We find that shooting rays at pixels with dramatic color change not only significantly reduces the training burden but also barely affects the accuracy of the learned radiance fields. In addition, we adaptively subdivide each view into a quadtree according to the average rendering error in each node of the tree, which lets us dynamically shoot more rays in more complex regions with larger rendering error. We evaluate our method with different radiance-field-based methods under widely used benchmarks. Experimental results show that our method achieves comparable accuracy to the state of the art with much faster training. Comment: Accepted by IEEE Transactions on Image Processing 2023. Project Page: https://zparquet.github.io/Fast-Learning . Code: https://github.com/zParquet/Fast-Learnin
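The gradient-guided ray selection can be sketched directly: rank pixels by local color change and shoot rays only at the top fraction. This is a simplified stand-in for the paper's scheme, which additionally allocates rays via a per-view rendering-error quadtree:

```python
import numpy as np

def select_ray_pixels(image, keep_frac=0.3):
    """Rank pixels by local gradient magnitude and keep the top fraction:
    rays are shot only where the image changes sharply."""
    grad = np.gradient(image.astype(float), axis=(0, 1))
    mag = np.sqrt(grad[0] ** 2 + grad[1] ** 2)
    if mag.ndim == 3:                        # RGB: pool channel gradients
        mag = mag.sum(axis=-1)
    n_keep = int(keep_frac * mag.size)
    flat = np.argsort(mag.ravel())[::-1][:n_keep]
    return np.unravel_index(flat, mag.shape)  # (rows, cols) of selected pixels

# toy view with a sharp vertical edge: the selected pixels hug the edge
img = np.zeros((32, 32))
img[:, 16:] = 1.0
rows, cols = select_ray_pixels(img, keep_frac=0.05)
print(np.unique(cols))  # [15 16]
```

In training, only these pixel locations would be turned into camera rays for volume rendering, cutting the per-iteration cost roughly in proportion to `keep_frac`.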

    DreamEditor: Text-Driven 3D Scene Editing with Neural Fields

    Neural fields have achieved impressive advancements in view synthesis and scene reconstruction. However, editing these neural fields remains challenging due to the implicit encoding of geometry and texture information. In this paper, we propose DreamEditor, a novel framework that enables users to perform controlled editing of neural fields using text prompts. By representing scenes as mesh-based neural fields, DreamEditor allows localized editing within specific regions. DreamEditor utilizes the text encoder of a pretrained text-to-image diffusion model to automatically identify the regions to be edited based on the semantics of the text prompts. Subsequently, DreamEditor optimizes the editing region and aligns its geometry and texture with the text prompts through score distillation sampling [29]. Extensive experiments have demonstrated that DreamEditor can accurately edit neural fields of real-world scenes according to the given text prompts while ensuring consistency in irrelevant areas. DreamEditor generates highly realistic textures and geometry, significantly surpassing previous works in both quantitative and qualitative evaluations.

    Recognition of approximate motions of human based on micro-Doppler features

    With an aging society approaching, the daily safety of the elderly has attracted increasing attention, especially for elderly people who live alone with no one to take care of them. How to judge, from the daily behaviour of those living alone, whether danger is present, and how to provide timely assistance, is a research topic of health protection in the field of smart home care. Modern radar sensing and intelligent perception technologies adopted in the health-care domain have started to gain significant interest, since they can replace wearable sensors or cameras without causing discomfort or raising privacy issues. In this paper, preliminary results are presented toward an indoor monitoring system for human activity recognition using micro-Doppler features of radar signals. The experimental campaign collected five different, partly similar activities from 10 subjects using a Doppler radar. Texture features and numerical features were extracted from the spectrogram and time-frequency domain of the measurements, and three machine learning (ML) algorithms, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Decision Tree (D-Tree), were applied for precise classification of all five motions. The results illustrate that the SVM classifier with a Radial Basis Function (RBF) kernel achieves the best results for all three feature sets: classification accuracy reaches 93.2% for mixed features, and 88.4% and 92.4% for numerical and texture features, respectively. The performance of SVM is 1.6% better than KNN and 3.2% better than D-Tree.
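A minimal version of the spectrogram-based feature pipeline can be run on a simulated micro-Doppler return: a sinusoidally frequency-modulated tone standing in for the radar echo of a swinging limb, with illustrative (not the paper's) numerical features pooled from the time-frequency map:

```python
import numpy as np
from scipy.signal import spectrogram

# simulate a micro-Doppler-like return: oscillating instantaneous frequency
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
inst_freq = 100 + 50 * np.sin(2 * np.pi * 1.5 * t)   # Hz, swings 50..150
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
sig = np.cos(phase) + 0.05 * np.random.default_rng(3).standard_normal(t.size)

# time-frequency map and simple pooled numerical features
f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)
centroid = (f[:, None] * Sxx).sum(axis=0) / Sxx.sum(axis=0)  # per-frame Doppler centroid
features = np.array([centroid.mean(), centroid.std(), Sxx.max()])
print(features.shape)  # (3,)
```

Feature vectors like this, stacked with texture descriptors computed on the spectrogram image, are what the SVM/KNN/D-Tree classifiers would consume.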

    Impact of Wavelet Kernels on Predictive Capability of Radiomic Features: A Case Study on COVID-19 Chest X-ray Images

    Radiomic analysis allows for the detection of imaging biomarkers supporting decision-making processes in clinical environments, from diagnosis to prognosis. Frequently, the original set of radiomic features is augmented with high-level features, such as wavelet transforms. However, several wavelet families (so-called kernels) can generate different multi-resolution representations of the original image, and which of them produces more salient images is not yet clear. In this study, an in-depth analysis is performed by comparing different wavelet kernels and evaluating their impact on the predictive capabilities of radiomic models. A dataset composed of 1589 chest X-ray images was used for COVID-19 prognosis prediction as a case study. Random forest, support vector machine, and XGBoost were trained (on a subset of 1103 images) after a rigorous feature selection strategy to build up the predictive models. Next, to evaluate the models' generalization capability on unseen data, a test phase was performed (on a subset of 486 images). The experimental findings showed that the Bior1.5, Coif1, Haar, and Sym2 kernels guarantee better and similar performance for all three machine learning models considered. Support vector machine and random forest showed comparable performance, and both were better than XGBoost. Additionally, random forest proved to be the most stable model, ensuring an appropriate balance between sensitivity and specificity.
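For intuition, the simplest of the compared kernels, Haar, can be implemented in a few lines; in practice one would use PyWavelets (e.g. `pywt.dwt2(img, 'haar')`, or `'bior1.5'`, `'coif1'`, `'sym2'`) to generate the wavelet sub-bands on which radiomic features are recomputed. A NumPy-only single-level sketch:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform: returns the approximation
    band and three detail bands (horizontal, vertical, diagonal)."""
    a = img.astype(float)
    # transform rows: orthonormal sum/difference of adjacent pixel pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # transform columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

rng = np.random.default_rng(4)
xray = rng.random((64, 64))          # stand-in for a chest X-ray tile
bands = haar_dwt2(xray)
print([b.shape for b in bands])      # four 32x32 sub-bands
```

Because the transform is orthonormal, the sub-bands conserve the image energy; each kernel family distributes that energy differently across bands, which is why radiomic features extracted from them can differ in predictive power.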

    Using hydrological models and digital soil mapping for the assessment and management of catchments: A case study of the Nyangores and Ruiru catchments in Kenya (East Africa)

    Human activities on land have a direct and cumulative impact on water and other natural resources within a catchment. This land-use change can have hydrological consequences on the local and regional scales. Sound catchment assessment is not only critical to understanding processes and functions but also important in identifying priority management areas. The overarching goal of this doctoral thesis was to design a methodological framework for catchment assessment (dependent upon data availability) and propose practical catchment management strategies for sustainable water resources management. The Nyangores and Ruiru reservoir catchments, located in Kenya, East Africa, were used as case studies. A properly calibrated Soil and Water Assessment Tool (SWAT) hydrologic model coupled with a generic land-use optimization tool (Constrained Multi-Objective Optimization of Land-use Allocation, CoMOLA) was applied to identify and quantify functional trade-offs between environmental sustainability and food production in the 'data-available' Nyangores catchment. This was determined using a four-dimensional objective function defined as (i) minimizing sediment load, (ii) maximizing stream low flow, and (iii, iv) maximizing the crop yields of maize and soybeans, respectively. Additionally, three different optimization scenarios, (i) agroforestry (Scenario 1), (ii) agroforestry + conservation agriculture (Scenario 2) and (iii) conservation agriculture (Scenario 3), were compared. For the data-scarce Ruiru reservoir catchment, alternative methods using digital soil mapping of soil erosion proxies (aggregate stability using Mean Weight Diameter) and spatial-temporal soil loss analysis using empirical models (the Revised Universal Soil Loss Equation, RUSLE) were used. The lack of adequate data necessitated a data-collection phase which implemented conditional Latin Hypercube Sampling.
This sampling technique reduced the need for intensive soil sampling while still capturing spatial variability. The results revealed that for the Nyangores catchment, adoption of both agroforestry and conservation agriculture (Scenario 2) led to the smallest trade-off amongst the different objectives, i.e. a 3.6% change to forest combined with a 35% change to conservation agriculture resulted in the largest reduction in sediment loads (78%), increased low flow (+14%) and only slightly decreased crop yields (3.8% for both maize and soybeans). Therefore, the advanced use of hydrologic models with optimization tools allows for the simultaneous assessment of different outputs/objectives and is ideal for areas with adequate data to properly calibrate the model. For the Ruiru reservoir catchment, digital soil mapping (DSM) of aggregate stability revealed that susceptibility to erosion exists for cropland (food crops), tea and roadsides, which are mainly located in the eastern part of the catchment, as well as for deforested areas on the western side. This validated that, with limited soil samples and the use of computing power, machine learning and freely available covariates, DSM can effectively be applied in data-scarce areas. Moreover, uncertainty in the predictions can be incorporated using prediction intervals. The spatial-temporal analysis showed that bare land (which has the lowest areal proportion) was the largest contributor to erosion. Two peak soil loss periods, corresponding to the two rainy periods of March–May and October–December, were identified. Thus, yearly soil erosion risk maps misrepresent the true dimensions of soil loss, with averages disguising areas of low and high potential. Also, a small portion of the catchment can be responsible for a large proportion of the total erosion.
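The RUSLE calculation behind the spatial-temporal soil loss analysis is a per-cell product of five factors, A = R · K · LS · C · P. A toy grid with illustrative (not Ruiru-specific) factor values:

```python
import numpy as np

def rusle(R, K, LS, C, P):
    """Mean annual soil loss A (t ha^-1 yr^-1) per grid cell:
    rainfall erosivity x soil erodibility x slope length-steepness
    x cover management x support practice."""
    return R * K * LS * C * P

shape = (4, 4)                        # tiny raster stand-in
R  = np.full(shape, 4500.0)           # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
K  = np.full(shape, 0.02)             # soil erodibility (illustrative)
LS = np.full(shape, 1.3)              # slope length-steepness factor
C  = np.full(shape, 0.15)             # cover management (cropland-like)
P  = np.full(shape, 1.0)              # support practice (none)

A = rusle(R, K, LS, C, P)
print(A[0, 0])   # ≈ 17.55 t ha^-1 yr^-1
```

In the actual workflow, R varies with the rainy-season rainfall record (hence the March–May and October–December peaks), and K, LS, C and P are raster layers, so A becomes a soil-loss map per time step rather than a single number.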
For both catchments, agroforestry (combining the use of trees and conservation farming) is the most feasible catchment management strategy (CMS) for solving the major water quantity and quality problems. Finally, thriving catchments that aim at both sustainability and resilience require urgent collaborative action by all stakeholders. The necessary stakeholders in both the Nyangores and Ruiru reservoir catchments must be involved in catchment assessment in order to identify the catchment problems, mitigation strategies, roles and responsibilities, while keeping in mind that some risks need to be shared and negotiated, as will the benefits.