Adaptive heterogeneous parallelism for semi-empirical lattice dynamics in computational materials science.
With the variability in performance of the multitude of parallel environments available today, the conceptual overhead created by the need to anticipate runtime information when making design-time decisions has become overwhelming. Performance-critical applications and libraries carry implicit assumptions, based on incidental metrics, that are not portable to emerging computational platforms or even to alternative contemporary architectures. Furthermore, the significance of runtime concerns such as makespan, energy efficiency and fault tolerance depends on the situational context. This thesis presents a case study applying both Mattson's prescriptive pattern-oriented approach and the more principled structured parallelism formalism to the computational simulation of inelastic neutron scattering spectra on hybrid CPU/GPU platforms. The original ad hoc implementation, as well as new pattern-based and structured implementations, are evaluated for relative performance and scalability. Two new structural abstractions are introduced to facilitate adaptation through lazy optimisation and runtime feedback. A deferred-choice abstraction represents a unified space of alternative structural program variants, allowing static adaptation through model-specific exhaustive calibration with regard to the extrafunctional concerns of runtime, average instantaneous power and total energy usage. Instrumented queues serve as a mechanism for structural composition and provide a representation of extrafunctional state that allows realisation of a market-based decentralised coordination heuristic for competitive resource allocation and of the Lyapunov drift algorithm for cooperative scheduling.
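The Lyapunov-drift style of cooperative scheduling mentioned in this abstract is commonly realised as a drift-plus-penalty rule: at each step, pick the option minimising a weighted power penalty minus the backlog-weighted service gain. The following Python toy is a minimal sketch of that rule only; the device names, power figures, service rates and the function name are illustrative assumptions, not taken from the thesis.

```python
def drift_plus_penalty_choice(queues, power, rates, V=1.0):
    """Pick the device with the lowest drift-plus-penalty score.

    queues : current backlog Q_i per device (e.g. instrumented queue lengths)
    power  : instantaneous power cost of running a task on device i
    rates  : expected service rate of device i (tasks per step)
    V      : trade-off knob between the energy penalty and queue drift
    """
    # Score = V * penalty - Q_i * service; smaller is better.
    scores = {d: V * power[d] - queues[d] * rates[d] for d in queues}
    return min(scores, key=scores.get)

# Hypothetical CPU/GPU backlogs and per-task costs.
queues = {"cpu": 12, "gpu": 3}
power = {"cpu": 35.0, "gpu": 150.0}
rates = {"cpu": 1.0, "gpu": 6.0}
print(drift_plus_penalty_choice(queues, power, rates, V=0.1))
```

Raising V biases the choice toward low-power devices; lowering it prioritises draining the longest backlogs.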
Ultrasound Brain Tomography: Comparison of Deep Learning and Deterministic Methods
The general purpose of this document is to develop a lightweight, portable ultrasound computer tomography (USCT) system that enables noninvasive imaging of the inside of the human head with high resolution. The goal is to analyze the benefits of using a deep neural network containing convolutional neural network (CNN) and long short-term memory (LSTM) layers compared to deterministic methods. In addition to the CNN + LSTM and LSTM networks, the following methods were used to create tomographic images of the inside of the human head: truncated singular value decomposition (TSVD), linear backprojection (LB), Gauss–Newton (GN) with a regularization matrix, Tikhonov regularization (TR), and Levenberg–Marquardt (LM). A physical model of the human head was made, and images of the inside of the brain were reconstructed from both synthetic and real measurements. On this basis, the CNN + LSTM and LSTM methods were compared with the deterministic methods. The comparison of images and quantitative indicators showed that the proposed neural network is much more tolerant of noisy and nonideal synthetic measurements, which manifests in no need to apply filters to the obtained images. An important finding, supported by the evidence, is the greater usefulness of neural models in medical ultrasound tomography, which results from the generalization abilities of the deep hybrid neural network; the research also showed a deficit of these abilities in the deterministic methods. Considering the specificity of the human head, hybrid neural networks containing both CNN and LSTM layers are a better choice for clinical trials than deterministic methods.
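Of the deterministic baselines this abstract lists, Tikhonov regularization has the simplest closed form: x = (AᵀA + λI)⁻¹Aᵀb. The sketch below uses a random toy forward model in place of the actual USCT operator, so the matrix A, the noise level and the weight lam are illustrative assumptions only.

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam=1e-2):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 in closed form:
    x = (A^T A + lam * I)^(-1) A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-posed setup standing in for the USCT forward operator.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))          # measurements x unknowns
x_true = rng.normal(size=10)           # "true" image vector
b = A @ x_true + 0.01 * rng.normal(size=20)  # noisy measurements
x_rec = tikhonov_reconstruct(A, b, lam=1e-2)
print(np.linalg.norm(x_rec - x_true))  # small reconstruction error
```

Larger lam suppresses noise amplification at the cost of a smoother, more biased reconstruction, which is the trade-off the abstract's deterministic methods all navigate.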
Optimising a defence-aware threat modelling diagram incorporating a defence-in-depth approach for the internet-of-things
Modern technology has proliferated into just about every aspect of life, improving its quality along the way. IoT technology, for instance, has significantly improved on traditional systems, offering convenience, time savings, cost savings, and security benefits. However, security weaknesses associated with IoT technology can pose a significant threat to the people who use it. A smart doorbell, for example, can make household life easier, save time and money, and provide surveillance security; yet its security weaknesses could be exploited by a criminal and endanger the household's safety and finances. Moreover, IoT technology is constantly advancing and expanding and is rapidly becoming ubiquitous in modern society. Increased usage and technological advancement thus create security weaknesses that attract cybercriminals looking to satisfy their own agendas.
Perfect security solutions do not exist in the real world: modern systems are continuously improving, and intruders frequently attempt various techniques to discover security flaws and bypass existing security controls. Threat modelling is therefore a great starting point for understanding a system's threat landscape and its weaknesses, and the threat modelling field in computer science has been significantly advanced by various frameworks for identifying and mitigating threats. However, most mature threat modelling frameworks were designed for traditional IT systems; they consider only software-related weaknesses and do not address physical attributes. This approach may not be practical for IoT technology, which inherits both software and physical security weaknesses. Scholars have nonetheless employed mature threat modelling frameworks such as STRIDE on IoT technology, because those frameworks still embody security concepts that remain significant for modern technology. Mature frameworks therefore cannot be ignored, yet they are not sufficient for addressing the threats associated with modern systems.
As a solution, this research study aims to extract the significant security concepts of mature threat modelling frameworks and utilise them to implement a robust IoT threat modelling framework. The study selected fifteen threat modelling frameworks from the research literature, together with the defence-in-depth security concept, as sources of threat modelling techniques. It then conducted three independent reviews to discover valuable threat modelling concepts and their usefulness for IoT technology. The first review deduced that integrating the software-centric, asset-centric, attacker-centric and data-centric threat modelling approaches with defence-in-depth is valuable and delivers distinct benefits; under this classification scheme, PASTA and TRIKE demonstrated all four approaches. The second review deduced the features of a threat modelling framework that achieve a high level of satisfaction of the defence-in-depth security architecture; under the evaluation criteria, the PASTA framework scored the highest satisfaction value. Finally, the third review deduced systematic IoT threat modelling techniques from recent research studies; STRIDE was identified as the most popular framework, while other frameworks demonstrated effective capabilities valuable to IoT technology.
Accordingly, this study introduces Defence-aware Threat Modelling (DATM), an IoT threat modelling framework based on the findings on threat modelling and the defence-in-depth security concept. The steps involved in the DATM framework are described with figures for better understanding. A smart doorbell case study is then threat-modelled using the DATM framework for validation, and the outcome of the case study is assessed against the findings of the three reviews to validate the framework. The outcome of this thesis is helpful for researchers who want to conduct threat modelling in IoT environments or design novel threat modelling frameworks suitable for IoT technology.
Knowledge Discovery and Data Mining for Shared Mobility and Connected and Automated Vehicle Applications
The rapid development of shared mobility and connected and automated vehicles (CAVs) has not only brought new intelligent transportation system (ITS) challenges with the new types of mobility, but also a huge opportunity to accelerate the connectivity and informatization of transportation systems, particularly when we consider all the new forms of data that are becoming available. The primary challenge is how to take advantage of this enormous amount of data to discover knowledge, build effective models, and develop impactful applications. With the theoretical and experimental progress made over the last two decades, data mining and machine learning have become key approaches for parsing data, understanding information, and making informed decisions, especially as the rise of deep learning algorithms brings new levels of performance to the analysis of large datasets. The combination of data mining and ITS can greatly benefit research and advances in shared mobility and CAVs. This dissertation focuses on knowledge discovery and data mining for shared mobility and CAV applications. When considering big data associated with shared mobility operations and CAV research, data mining techniques can be customized with transportation knowledge to initially parse the data. Machine learning methods can then model the parsed data to elicit hidden knowledge. Finally, the discovered knowledge and extracted information can support the development of effective shared mobility and CAV applications that achieve the goals of a safer, faster, and more eco-friendly transportation system. Four main topics are addressed in this dissertation. First, new methodologies are introduced for extracting lane-level road features from rough crowdsourced GPS trajectories via data mining, which are subsequently used as fundamental information for CAV applications.
The proposed method achieves decimeter-level accuracy, which satisfies the positioning needs of many macroscopic and microscopic shared mobility and CAV applications. Second, macroscopic ride-hailing service big data is analyzed for demand prediction, vehicle operation, and system efficiency monitoring. The proposed deep learning algorithms increase ride-hailing demand prediction accuracy to 80% and can help the fleet dispatching system reduce vacant travel distance by 30%. Third, microscopic automated vehicle perception data is analyzed for a real-time computer vision system that detects lane-change behavior. The proposed deep learning design combines residual neural network image input with time-series control data and reaches 95% lane-change behavior prediction accuracy. Last but not least, new ride-sharing and CAV applications are simulated in a behavior modeling framework to analyze their impact on mobility and energy consumption, addressing key barriers by quantifying system-wide mobility, energy, and behavior impacts of new mobility technologies using real-world data.
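One common data-mining step behind lane-level feature extraction of the kind this abstract describes is clustering the lateral offsets of GPS points to recover lane centers. The sketch below is a simplified illustration, not the dissertation's method: it assumes points have already been projected to signed offsets (metres) from a road centerline, and the function name, lane geometry and noise level are hypothetical.

```python
import numpy as np

def lane_centers(lateral_offsets, n_lanes, iters=20):
    """1-D k-means on lateral offsets (metres from the road centerline)
    to recover approximate lane-center positions."""
    x = np.asarray(lateral_offsets, dtype=float)
    # Spread initial centers across the observed offset range.
    centers = np.linspace(x.min(), x.max(), n_lanes)
    for _ in range(iters):
        # Assign each point to its nearest center, then re-average.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(n_lanes):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return np.sort(centers)

# Synthetic noisy traces from two lanes centered at -1.75 m and +1.75 m.
rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(-1.75, 0.4, 500),
                      rng.normal(1.75, 0.4, 500)])
print(np.round(lane_centers(pts, n_lanes=2), 2))
```

Averaging many noisy crowdsourced traces per lane is what lets such pipelines beat the accuracy of any single GPS fix.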
Massively parallel split-step Fourier techniques for simulating quantum systems on graphics processing units
The split-step Fourier method is a powerful technique for solving partial differential equations and simulating ultracold atomic systems of various forms. In this body of work, we focus on several variations of this method to allow for simulations of one-, two-, and three-dimensional quantum systems, along with several notable methods for controlling these systems. In particular, we use quantum optimal control and shortcuts to adiabaticity to study the non-adiabatic generation of superposition states in strongly correlated one-dimensional systems, analyze chaotic vortex trajectories in two dimensions by using rotation and phase-imprinting methods, and create stable, three-dimensional vortex structures in Bose–Einstein condensates through artificial magnetic fields generated by the evanescent field of an optical nanofiber. We also discuss algorithmic optimizations for implementing the split-step Fourier method on graphics processing units. All computational methods present in this work are demonstrated on physical systems and have been incorporated into a state-of-the-art and open-source software suite known as GPUE, which is currently the fastest quantum simulator of its kind.
Okinawa Institute of Science and Technology Graduate University
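The core split-step Fourier update alternates potential-energy half-steps in position space with a full kinetic step in momentum space (Strang splitting). A minimal 1-D sketch of that scheme follows; it is not the GPUE implementation, and the grid size, potential and time step are illustrative choices.

```python
import numpy as np

def split_step(psi, V, dx, dt, steps):
    """Second-order (Strang) split-step Fourier evolution of the 1-D
    Schrodinger equation, with hbar = m = 1."""
    n = psi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # momentum grid
    half_V = np.exp(-0.5j * V * dt)           # half-step in position space
    kinetic = np.exp(-0.5j * k**2 * dt)       # full step in momentum space
    for _ in range(steps):
        psi = half_V * psi
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = half_V * psi
    return psi

# The harmonic-oscillator ground state should be stationary under evolution.
x = np.linspace(-10, 10, 512, endpoint=False)
dx = x[1] - x[0]
V = 0.5 * x**2
psi0 = np.exp(-x**2 / 2) / np.pi**0.25
psi = split_step(psi0.astype(complex), V, dx, dt=1e-3, steps=1000)
print(np.sum(np.abs(psi)**2) * dx)  # norm is conserved by the unitary steps
```

Because each factor is a pure phase multiplication, the scheme is unitary and norm-preserving, which is one reason it maps so well onto GPU FFT libraries.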
Studies on machine learning-based aid for residency training and time difficulty in ophthalmology
Doctoral thesis, Graduate School of Engineering, University of Hyogo, 2023.
Computational Intelligence in Electromyography Analysis
Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. The book is organized into 18 chapters, covering current theoretical and practical approaches in EMG research.