
    GRASE: Granulometry Analysis with Semi Eager Classifier to Detect Malware

    Get PDF
    Technological advancement in communication leading to 5G motivates everyone to get connected to the internet, including ‘Devices’, a technology named Web of Things (WoT). The community benefits from this large-scale network, which allows monitoring and controlling of physical devices. However, this often comes at the cost of security, as MALicious softWARE (MalWare) developers try to invade the network; for them, these devices are like a ‘backdoor’ providing easy ‘entry’. To stop invaders from entering the network, identifying malware and its variants is of great significance for cyberspace. Traditional methods of malware detection, static and dynamic analysis, detect malware but fall short against newer techniques used by malware developers such as obfuscation, polymorphism and encryption. A machine learning approach in which the classifier is trained with handcrafted features is not potent against these techniques and demands considerable feature-engineering effort. The paper proposes malware classification using a visualization methodology wherein the disassembled malware code is transformed into greyscale images. It presents the efficacy of the granulometry texture analysis technique for improving malware classification. Furthermore, a Semi Eager (SemiE) classifier, a combination of eager and lazy learning techniques, is used to obtain robust classification of malware families. The outcome of the experiment is promising, since the proposed technique requires less training time to learn the semantics of higher-level malicious behaviours; identifying the malware (testing phase) is also faster. Benchmark databases, Malimg and the Microsoft Malware Classification Challenge (BIG-2015), have been utilized to analyse the performance of the system. Overall average classification accuracies of 99.03% and 99.11%, respectively, are achieved
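The granulometry step can be pictured as a pattern spectrum computed from morphological openings of increasing size over the byte-image. Below is a minimal sketch of that idea in Python; the image width, opening sizes, and file name are illustrative assumptions, and the Semi Eager classifier stage is not shown.

```python
# Minimal sketch: byte-to-image conversion plus a granulometry (pattern-spectrum) feature
# vector. Width, opening sizes, and the input path are illustrative assumptions only.
import numpy as np
from scipy import ndimage

def bytes_to_grey_image(path, width=256):
    """Read a (disassembled) malware binary and reshape its bytes into a 2-D grey image."""
    data = np.fromfile(path, dtype=np.uint8)
    height = len(data) // width
    return data[: height * width].reshape(height, width)

def granulometry_features(img, sizes=(1, 3, 5, 7, 9, 11)):
    """Total image 'volume' remaining after grey openings of increasing size.
    The normalized differences form a texture signature usable as classifier features."""
    volumes = np.array(
        [ndimage.grey_opening(img, size=(s, s)).sum() for s in sizes], dtype=float
    )
    return -np.diff(volumes) / max(volumes[0], 1.0)  # pattern-spectrum style features

features = granulometry_features(bytes_to_grey_image("sample.bin"))  # hypothetical file
```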

    Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces

    Get PDF
    Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of individuals, encompassing both medical and non-medical applications. Historically, the challenge of neurological injury remaining static after an initial recovery phase has driven researchers to explore innovative avenues. Since the 1970s, BCIs have been at the forefront of these efforts. As research has progressed, BCI applications have expanded, showing potential in a wide range of applications, including those for less severely disabled (e.g. in the context of hearing aids) and completely healthy individuals (e.g. the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware that ensures real-world applicability. The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, can operate autonomously, requires no external interaction, and weighs approximately 56 g including the 16-channel EEG sensors. The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and enabling rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours on a 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies. Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability
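As a rough illustration of one stage of such an EEG processing chain, the sketch below applies a zero-phase band-pass filter per channel; the sampling rate, band edges, and filter order are illustrative assumptions, not the CereBridge configuration (which runs on the FPGA rather than in Python).

```python
# Minimal sketch of an EEG band-pass stage of the kind a mobile BCI pipeline would run.
# Sampling rate, band edges, and filter order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(eeg, fs_hz=250.0, low_hz=8.0, high_hz=30.0, order=4):
    """Zero-phase band-pass over each channel (rows) of an EEG array."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

eeg = np.random.randn(16, 2500)   # placeholder: 16 channels, 10 s at 250 Hz
filtered = bandpass(eeg)
```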

    Flood dynamics derived from video remote sensing

    Get PDF
    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. 
The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science
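As a rough illustration of the LSPIV and discharge-estimation ideas, the sketch below estimates a bulk surface velocity from the cross-correlation peak between two frames and converts it to discharge with a velocity-index coefficient; the frame interval, ground sampling distance, cross-sectional area, and coefficient are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: single-window image velocimetry via FFT cross-correlation, followed by
# a velocity-index discharge estimate. All parameter values are illustrative assumptions.
import numpy as np

def surface_velocity(frame_a, frame_b, dt_s, gsd_m):
    """Bulk surface velocity (m/s) from the peak of the cross-correlation between two
    co-registered greyscale frames, treated as one large interrogation window."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap displacements larger than half the window back to negative shifts
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return np.hypot(dx, dy) * gsd_m / dt_s

def discharge(surface_vel_ms, cross_section_m2, alpha=0.85):
    """Velocity-index method: Q = alpha * surface velocity * wetted cross-sectional area."""
    return alpha * surface_vel_ms * cross_section_m2
```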

    Deep learning model based on multi-scale feature fusion for precipitation nowcasting

    Get PDF
    Forecasting heavy precipitation accurately is a challenging task for most deep learning (DL)-based models. To address this, we present a novel DL architecture called “multi-scale feature fusion” (MFF) that can forecast precipitation with a lead time of up to 3 h. The MFF model uses convolution kernels with varying sizes to create multi-scale receptive fields. This helps to capture the movement features of precipitation systems, such as their shape, movement direction, and speed. Additionally, the architecture utilizes the mechanism of discrete probability to reduce uncertainties and forecast errors, enabling it to predict heavy precipitation even at longer lead times. For model training, we use 4 years of radar echo data from 2018 to 2021, and 1 year of data from 2022 for model testing. We compare the MFF model with three existing extrapolative models: time series residual convolution (TSRC), optical flow (OF), and UNet. The results show that MFF achieves superior forecast skill, with high probability of detection (POD), low false alarm rate (FAR), small mean absolute error (MAE), and high structural similarity index (SSIM). Notably, MFF can predict high-intensity precipitation fields at 3 h lead time, while the other three models cannot. Furthermore, MFF shows improvement with regard to the smoothing effect of the forecast field, as observed from the radially averaged power spectra (RAPS). Our future work will focus on incorporating multi-source meteorological variables, making structural adjustments to the network, and combining them with numerical models to further improve the forecast skill for heavy precipitation at longer lead times.
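A minimal sketch of the multi-scale idea: parallel convolutions with different kernel sizes create receptive fields at several scales, and their outputs are concatenated and fused. The channel counts, kernel sizes, and input shape below are illustrative assumptions, not the actual MFF configuration.

```python
# Minimal sketch of a multi-scale feature-fusion block in PyTorch. Channel counts,
# kernel sizes, and input shape are illustrative assumptions only.
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        branch_ch = out_ch // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, k, padding=k // 2),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(branch_ch * len(kernel_sizes), out_ch, 1)

    def forward(self, x):
        # run each scale in parallel, then fuse the concatenated feature maps
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(2, 4, 128, 128)              # placeholder: stacked radar-echo frames
y = MultiScaleFusionBlock(4, 64)(x)          # -> (2, 64, 128, 128)
```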

    Southern Adventist University Undergraduate Catalog 2023-2024

    Get PDF
    Southern Adventist University's undergraduate catalog for the academic year 2023-2024.
    https://knowledge.e.southern.edu/undergrad_catalog/1123/thumbnail.jp

    Evaluation of Data Processing and Artifact Removal Approaches Used for Physiological Signals Captured Using Wearable Sensing Devices during Construction Tasks

    Get PDF
    Wearable sensing devices (WSDs) have enormous promise for monitoring construction worker safety. They can track workers and send safety-related information in real time, allowing for more effective and preventative decision making. WSDs are particularly useful on construction sites since they can track workers’ health, safety, and activity levels, among other metrics that could help optimize their daily tasks. WSDs may also assist workers in recognizing health-related safety risks (such as physical fatigue) and taking appropriate action to mitigate them. The data produced by these WSDs, however, are highly noisy and contaminated with artifacts that could have been introduced by the surroundings, the experimental apparatus, or the subject’s physiological state. These artifacts are often strong and frequently encountered during field experiments, and when they are abundant the signal quality drops substantially. Recently, artifact removal has been greatly improved by developments in signal processing. Thus, the proposed review aimed to provide an in-depth analysis of the approaches currently used to analyze data and remove artifacts from physiological signals obtained via WSDs during construction-related tasks. First, this study provides an overview of the physiological signals that are likely to be recorded from construction workers to monitor their health and safety. Second, this review identifies the most prevalent artifacts that have the most detrimental effect on the utility of the signals. Third, a comprehensive review of existing artifact-removal approaches is presented. Fourth, each identified artifact detection and removal approach is analyzed for its strengths and weaknesses. Finally, this review provides a few suggestions for future research on improving the quality of captured physiological signals for monitoring the health and safety of construction workers using artifact removal approaches
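As a rough illustration of two common pre-processing steps covered by such reviews, the sketch below applies a power-line notch filter and median-filter baseline-wander removal to a raw physiological signal; the sampling rate, notch frequency, and window length are illustrative assumptions.

```python
# Minimal sketch of two standard artifact-removal steps for wearable physiological signals:
# power-line notch filtering and baseline-wander removal by median filtering.
# Sampling rate, mains frequency, and window length are illustrative assumptions.
import numpy as np
from scipy.signal import iirnotch, filtfilt, medfilt

def remove_powerline(sig, fs_hz=500.0, mains_hz=50.0, quality=30.0):
    """Notch out power-line interference at the mains frequency."""
    b, a = iirnotch(mains_hz, quality, fs=fs_hz)
    return filtfilt(b, a, sig)

def remove_baseline_wander(sig, fs_hz=500.0, window_s=0.6):
    """Subtract a running median to suppress slow baseline drift (e.g. motion, respiration)."""
    win = int(window_s * fs_hz) | 1          # median filter needs an odd window length
    return sig - medfilt(sig, kernel_size=win)

raw = np.random.randn(5000)                  # placeholder raw signal
clean = remove_baseline_wander(remove_powerline(raw))
```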

    Energy storage design and integration in power systems by system-value optimization

    Get PDF
    Energy storage can play a crucial role in decarbonising power systems by balancing power and energy in time. Wider power system benefits that arise from these balancing technologies include lower grid expansion, renewable curtailment, and average electricity costs. However, with the proliferation of new energy storage technologies, it becomes increasingly difficult to identify which technologies are economically viable and how to design and integrate them effectively. Using large-scale energy system models in Europe, the dissertation shows that solely relying on Levelized Cost of Storage (LCOS) metrics for technology assessments can mislead, and that traditional system-value methods raise important questions about how to assess multiple energy storage technologies. Further, the work introduces a new complementary system-value assessment method called the market-potential method, which provides a systematic deployment analysis for assessing multiple storage technologies under competition. However, integrating energy storage in system models can lead to the unintended storage cycling effect, which occurs in approximately two-thirds of models and significantly distorts results. The thesis finds that traditional approaches to deal with the issue, such as multi-stage optimization or mixed integer linear programming approaches, are either ineffective or computationally inefficient. A new approach is suggested that only requires appropriate model parameterization with variable costs while keeping the model convex to reduce the risk of misleading results. In addition, to enable energy storage assessments and energy system research around the world, the thesis extended the geographical scope of an existing European open-source model to global coverage. The newly built energy system model ‘PyPSA-Earth’ is thereby demonstrated and validated in Africa. Using PyPSA-Earth, the thesis assesses for the first time the system value of 20 energy storage technologies across multiple scenarios in a representative future power system in Africa. The results offer insights into approaches for assessing multiple energy storage technologies under competition in large-scale energy system models. In particular, the dissertation addresses extreme cost uncertainty through a comprehensive scenario tree and finds that, apart from lithium and hydrogen, only seven energy storage technologies are optimization-relevant. The work also discovers that a heterogeneous storage design can increase power system benefits and that some energy storage technologies are more important than others. Finally, in contrast to traditional methods that consider only a single energy storage technology, the thesis finds that optimizing multiple energy storage options tends to reduce total system costs significantly, by up to 29%. The presented research findings have the potential to inform decision-making processes for the sizing, integration, and deployment of energy storage systems in decarbonized power systems, contributing to a paradigm shift in scientific methodology and advancing efforts towards a sustainable future
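For reference, a minimal sketch of the LCOS metric discussed above: discounted lifetime costs divided by discounted lifetime discharged energy (charging electricity costs and degradation omitted for brevity). All input values below are illustrative assumptions.

```python
# Minimal sketch of a simplified Levelized Cost of Storage (LCOS) calculation.
# Charging electricity costs and degradation are omitted; all numbers are illustrative.
def lcos(capex, annual_opex, annual_discharge_mwh, lifetime_years, discount_rate):
    """LCOS in cost per MWh discharged."""
    disc_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1)
    )
    disc_energy = sum(
        annual_discharge_mwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return disc_costs / disc_energy

print(lcos(capex=1.2e6, annual_opex=2.0e4, annual_discharge_mwh=4.0e3,
           lifetime_years=15, discount_rate=0.07))
```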

    Exploiting the Quantum Advantage for Satellite Image Processing: Review and Assessment

    Get PDF
    This article examines the current status of quantum computing in Earth observation (EO) and satellite imagery. We analyze the potential limitations and applications of quantum learning models when dealing with satellite data, considering the persistent challenges of profiting from quantum advantage and of finding the optimal workload sharing between high-performance computing (HPC) and quantum computing (QC). We then assess some parameterized quantum circuit models transpiled into a Clifford+T universal gate set. The T-gates shed light on the quantum resources required to deploy quantum models, either on an HPC system or on several QC systems. In particular, if the T-gates cannot be simulated efficiently on an HPC system, a quantum computer can be applied, offering computational power beyond conventional techniques. Our quantum resource estimation showed that quantum machine learning (QML) models, with a sufficient number of T-gates, provide the quantum advantage if and only if they generalize on unseen data points better than their classical counterparts deployed on the HPC system and they break the symmetry in their weights at each learning iteration, as in conventional deep neural networks. As an initial contribution, we also estimated the quantum resources required for some QML models. Lastly, we defined the optimal workload sharing within an HPC+QC system for executing QML models on hyperspectral satellite images. These form a unique dataset compared to other satellite images, since they require only a limited number of input qubits and offer a small number of labeled benchmark images, making them less challenging to deploy on quantum computers
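As a rough illustration of such a T-gate resource estimate, the sketch below discretizes a tiny parameterized circuit into a Clifford+T gate set and counts the T/Tdg gates; it assumes a recent Qiskit version that exposes the SolovayKitaev transpiler pass, and the rotation angles and recursion depth are illustrative.

```python
# Minimal sketch of a T-count estimate: approximate arbitrary single-qubit rotations with
# Clifford+T sequences (Solovay-Kitaev) and count the resulting T/Tdg gates.
# Assumes a recent Qiskit version; angles and recursion depth are illustrative.
from qiskit import QuantumCircuit
from qiskit.transpiler.passes import SolovayKitaev

qc = QuantumCircuit(1)
qc.ry(0.42, 0)            # stand-in variational / data-encoding rotations
qc.rz(1.30, 0)

discretized = SolovayKitaev(recursion_degree=2)(qc)   # H/T/Tdg approximation
ops = discretized.count_ops()
t_count = ops.get("t", 0) + ops.get("tdg", 0)         # CX gates are Clifford: no T cost
print("approximate T-count:", t_count)
```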

    Learning Expressive Prompting With Residuals for Vision Transformers

    Full text link
    Prompt learning is an efficient approach to adapt transformers by inserting a learnable set of parameters into the input and intermediate representations of a pre-trained model. In this work, we present Expressive Prompts with Residuals (EXPRES), which modifies the prompt learning paradigm specifically for effective adaptation of vision transformers (ViT). Our method constructs downstream representations via learnable ``output'' tokens that are akin to the learned class tokens of the ViT. Further, for better steering of the downstream representation processed by the frozen transformer, we introduce residual learnable tokens that are added to the output of various computations. We apply EXPRES to image classification, few-shot learning, and semantic segmentation, and show our method is capable of achieving state-of-the-art prompt tuning on 3/3 categories of the VTAB benchmark. In addition to strong performance, we observe that our approach is an order of magnitude more prompt-efficient than existing visual prompting baselines. We analytically show the computational benefits of our approach over weight-space adaptation techniques like finetuning. Lastly, we systematically corroborate the architectural design of our method via a series of ablation experiments. Comment: Accepted at CVPR 2023
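A minimal sketch of the two ingredients described in the abstract, learnable prompt/``output'' tokens prepended to the sequence and residual tokens added to the block output, wrapped around a generic frozen transformer block; the dimensions, prompt count, and the stand-in encoder layer are illustrative assumptions, not the EXPRES implementation.

```python
# Minimal sketch: prompt tokens prepended to the token sequence of a frozen block, with
# learnable residual tokens added to the prompt outputs for extra steering.
# Dimensions and the stand-in encoder layer are illustrative assumptions.
import torch
import torch.nn as nn

class PromptedBlock(nn.Module):
    def __init__(self, block, dim, num_prompts=8):
        super().__init__()
        self.block = block                      # frozen pre-trained transformer block
        for p in self.block.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, dim))
        self.residual = nn.Parameter(torch.zeros(1, num_prompts, dim))

    def forward(self, x):
        # x: (batch, tokens, dim) patch + class tokens from the previous layer
        p = self.prompts.expand(x.shape[0], -1, -1)
        out = self.block(torch.cat([p, x], dim=1))
        n = p.shape[1]
        prompt_out = out[:, :n] + self.residual  # residual steering of prompt outputs
        return torch.cat([prompt_out, out[:, n:]], dim=1)

block = nn.TransformerEncoderLayer(d_model=192, nhead=3, batch_first=True)
y = PromptedBlock(block, dim=192)(torch.randn(2, 197, 192))
```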
