
    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods rely disproportionately on deterministic algorithms, which lack a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainty being modelled: (i) predictive, (ii) structural and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for predictive accuracy in the absence of ground truth. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e., aleatoric and parameter uncertainty, and show the potential utility of such decoupling in providing quantitative “explanations” of model performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares preliminary results on an MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees can be extended to grow the architecture of a neural network to adapt to the given availability of data and the complexity of the task. Lastly, we develop methods to model the “measurement noise” (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
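    A standard way to realise this kind of decomposition is Monte Carlo sampling over stochastic forward passes (e.g. MC dropout): by the law of total variance, the predictive variance splits into an aleatoric term (the mean of the per-sample predicted variances) and a parameter term (the variance of the per-sample means). The sketch below illustrates the general technique, not the thesis's exact implementation; it assumes a heteroscedastic PyTorch model that returns a (mean, variance) pair and keeps dropout active at test time.

```python
import torch

def decompose_uncertainty(model, x, n_samples=50):
    """Split predictive variance into aleatoric and parameter terms
    via Monte Carlo sampling over stochastic forward passes."""
    model.train()  # keep dropout layers stochastic at test time
    means, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, var = model(x)         # assumed heteroscedastic head
            means.append(mu)
            variances.append(var)
    means = torch.stack(means)         # [n_samples, batch, ...]
    variances = torch.stack(variances)
    aleatoric = variances.mean(dim=0)  # noise inherent in the data
    parameter = means.var(dim=0)       # spread due to parameter uncertainty
    return aleatoric + parameter, aleatoric, parameter
```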

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
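    To give a concrete flavour of the kind of application such a survey covers, consider quality-of-transmission estimation: a classifier predicts whether a candidate lightpath will meet a BER threshold from its configuration parameters. The sketch below uses entirely synthetic data and an off-the-shelf random forest purely for illustration; the feature set and the threshold rule are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical feature set, one row per candidate lightpath:
# path length (km), number of spans, modulation order,
# symbol rate (GBd), launch power (dBm).
rng = np.random.default_rng(0)
X = rng.uniform(low=[100, 2, 2, 16, -2],
                high=[3000, 40, 6, 64, 3], size=(1000, 5))
# Synthetic label: 1 if the lightpath would meet the BER threshold
# (longer paths and denser modulation make this harder).
y = (X[:, 0] / 3000 + X[:, 2] / 6 < rng.uniform(0.5, 1.5, 1000)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```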

    Deep Learning Approach for Efficient Energy Consumption and High Throughput in Mobile Wireless Sensor Networks

    Scalability, small size, simplicity, low-cost operation, self-organization, and easy, fast deployment are the defining features of a Wireless Sensor Network (WSN). Research interest in WSNs is growing, and application areas such as agriculture, industry, healthcare, manufacturing, security, surveillance, transport, and air- and water-quality monitoring increasingly rely on them. The primary goal of the sensor nodes (SNs) is to collect data from the area of interest and communicate it to a sink or base station (BS) for further processing via single- or multi-hop transmission. Sometimes the BS acts as a gateway to the Internet of Things (IoT), through which the data can be communicated to the Cloud over the Internet. Battery-equipped micro-sensors consume considerable energy when transmitting large volumes of high-quality data. The Mobile Wireless Sensor Network (MWSN) is a fast-evolving technology with a wide range of uses. While fixed-infrastructure networks constrain sensor nodes to one specific location, MWSNs allow some or all nodes to move freely and communicate among themselves, making the whole system more flexible. Compared with GPS, Bluetooth Low Energy (BLE), and conventional wireless sensor networks, MWSNs offer extended network lifespan, energy savings, multiband functionality, and high targeting accuracy. Nevertheless, pathfinding in MWSNs is very challenging, since the sensor nodes are mobile, low-cost, time-constrained devices with limited resources, and node mobility adds extra difficulty to routing. In most monitoring applications only some of the nodes need to move within the network; such nodes are called mobile agent sink nodes or sensor nodes. In the present work, the movement of only a few nodes is considered. Energy consumption and network connectivity are two major issues in MWSNs, and several studies have been conducted to develop suitable solutions for these problems. To investigate network connectivity, this study introduces a new, efficient technique that considers parameters such as network stability, detection area, and low energy consumption, aiming to guarantee network connectivity, communication sustainability, and a high degree of energy-consumption optimization. The research proposes two routing algorithms for MWSNs: Self-Organizing Map based Optimized Link State Routing (SOM-OLSR) and Deep Reinforcement Learning based Optimized Link State Routing (DRL-OLSR). Both algorithms model the relationship between sensor-node deployment, communication radius, and detection area, and suggest a new way to maintain communication while optimizing energy usage. Both were evaluated through simulations of the proposed routing and data-aggregation methods, using performance metrics such as connection probability, end-to-end delay, overhead, network throughput, and energy consumption, under three scenarios: no optimization, SOM-OLSR, and DRL-OLSR. The simulation results indicate that SOM-OLSR performs better than the no-optimization case, and that DRL-OLSR outperforms SOM-OLSR in terms of low latency and long network lifetime: specifically, DRL-OLSR achieves 47% higher throughput and 67% lower energy consumption than SOM-OLSR, and, compared to the no-optimization condition, 69.7% higher throughput and almost 89% lower energy consumption. These findings highlight the effectiveness of the DRL-OLSR approach in optimizing network performance and energy efficiency. Similarly, data aggregation consistently reduces energy consumption across all scenarios, by up to 50% compared to operation without it.
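    To make the routing idea concrete, the sketch below shows a toy, tabular Q-learning stand-in for reinforcement-learning-based next-hop selection: the agent learns, per (node, neighbor) pair, a value that trades residual relay energy against a per-hop cost, with a bonus for reaching the sink. The topology, reward shaping, and hyper-parameters are illustrative assumptions; the actual DRL-OLSR replaces the table with a neural network and operates on OLSR state as described in the dissertation.

```python
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2                # learning rate, discount, exploration
neighbors = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}  # node 3 is the sink
residual_energy = {0: 1.0, 1: 0.8, 2: 0.5, 3: 1.0}
Q = {(n, m): 0.0 for n in neighbors for m in neighbors[n]}

def reward(nxt):
    # Favor energy-rich relays; large bonus for reaching the sink,
    # small constant penalty per hop.
    return (10.0 if nxt == 3 else 0.0) + residual_energy[nxt] - 0.1

for _ in range(2000):                             # training episodes
    node = 0
    while node != 3:
        opts = neighbors[node]
        if random.random() < EPS:                 # explore
            nxt = random.choice(opts)
        else:                                     # exploit current estimate
            nxt = max(opts, key=lambda m: Q[(node, m)])
        future = max((Q[(nxt, m)] for m in neighbors[nxt]), default=0.0)
        Q[(node, nxt)] += ALPHA * (reward(nxt) + GAMMA * future - Q[(node, nxt)])
        node = nxt

print({k: round(v, 2) for k, v in Q.items()})     # learned next-hop values
```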

    IoT-Based Environmental Control System for Fish Farms with Sensor Integration and Machine Learning Decision Support

    In response to the burgeoning global demand for seafood and the challenges of managing fish farms, we introduce an innovative IoT-based environmental control system that integrates sensor technology and advanced machine learning decision support. Deploying a network of wireless sensors within the fish farm, we continuously collect real-time data on crucial environmental parameters, including water temperature, pH levels, humidity, and fish behavior. This data undergoes meticulous preprocessing to ensure its reliability, including imputation, outlier detection, feature engineering, and synchronization. At the heart of our system are four distinct machine learning algorithms: Random Forests predict and optimize water temperature and pH levels for the fish, fostering their health and growth; Support Vector Machines (SVMs) function as an early warning system, promptly detecting diseases and parasites in fish; Gradient Boosting Machines (GBMs) dynamically fine-tune the feeding schedule based on real-time environmental conditions, promoting resource efficiency and fish productivity; and Neural Networks manage the operation of critical equipment such as water pumps and heaters to maintain the desired environmental conditions within the farm. These machine learning algorithms collaboratively make real-time decisions to ensure that the fish farm's environmental conditions align with predefined specifications, leading to improved fish health and productivity while reducing resource wastage, thereby contributing to increased profitability and sustainability. This research article showcases the power of data-driven decision support in fish farming, promising to meet the growing demand for seafood while emphasizing environmental responsibility and economic viability.
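    As a small illustration of the first of these components, the sketch below trains a random forest to predict next-hour water temperature from recent sensor readings, the kind of prediction that could drive a temperature set-point. The feature layout and the synthetic data-generating rule are assumptions for illustration, not the farm's real telemetry or the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# One row per time step; columns are illustrative sensor features:
# [water_temp_C, pH, humidity_pct, feeding_rate, hour_of_day]
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(24, 2, 500),     # water temperature (deg C)
    rng.normal(7.2, 0.3, 500),  # pH
    rng.uniform(40, 90, 500),   # humidity (%)
    rng.uniform(0, 1, 500),     # normalized feeding rate
    rng.integers(0, 24, 500),   # hour of day
])
# Synthetic target: next-hour water temperature with a mild daily drift.
y = X[:, 0] + 0.05 * (X[:, 4] - 12) + rng.normal(0, 0.2, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("predicted next-hour temperature:", model.predict(X[:1])[0])
```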

    Quadri-dimensional approach for data analytics in mobile networks

    The telecommunication market is growing at a very fast pace with the evolution of new technologies to support high-speed throughput and the availability of a wide range of services and applications in mobile networks. This has led communication service providers (CSPs) to shift their focus from monitoring network elements towards monitoring services and subscribers' satisfaction, introducing service quality management (SQM) and customer experience management (CEM). These require fast responses to reduce the time to find and solve network problems, to ensure efficiency and proactive maintenance, and to improve the quality of service (QoS) and the quality of experience (QoE) of the subscribers. While both SQM and CEM demand information from multiple interfaces, managing multiple data sources adds an extra layer of complexity to data collection. Although several studies have been conducted on data analytics in mobile networks, most did not consider analytics based on the four dimensions involved in the mobile network environment, namely the subscriber, the handset, the service, and the network element, with multi-interface correlation. The main objective of this research was to develop mobile network analytics models applied to the 3G packet-switched domain by analysing data from the radio network via the Iub interface and from the core network via the Gn interface, providing a fast root cause analysis (RCA) approach that considers all four dimensions. This was achieved using recent computer engineering advancements, namely Big Data platforms and data mining through machine learning algorithms.
    Electrical and Mining Engineering
    M. Tech. (Electrical Engineering)
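    The four-dimensional correlation can be pictured as a join of per-interface records on shared keys. The sketch below, with entirely illustrative column names, merges radio-side (Iub) and core-side (Gn) records per subscriber and flags users degraded on both interfaces, the kind of cross-interface evidence a root-cause-analysis rule could act on.

```python
import pandas as pd

# Illustrative per-interface records keyed by subscriber (IMSI).
iub = pd.DataFrame({
    "imsi": ["001", "002"], "handset": ["A", "B"],
    "cell_id": ["C1", "C2"], "rrc_failures": [5, 0],
})
gn = pd.DataFrame({
    "imsi": ["001", "002"], "service": ["video", "web"],
    "sgsn": ["S1", "S1"], "tcp_retrans_pct": [12.0, 0.4],
})

merged = iub.merge(gn, on="imsi")  # correlate per subscriber
# Degradation on both interfaces suggests a radio-side root cause.
suspects = merged[(merged.rrc_failures > 0) & (merged.tcp_retrans_pct > 5)]
print(suspects[["imsi", "handset", "cell_id", "service"]])
```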

    Towards Autonomous Computer Networks in Support of Critical Systems

    The abstract is provided in the attachment.

    Performance Analysis of Data-Driven Algorithms in Detecting Intrusions on Smart Grid

    The traditional power grid is no longer a practical solution for power delivery due to several shortcomings, including chronic blackouts, energy-storage issues, high asset costs, and high carbon emissions. There is therefore a serious need for better, cheaper, and cleaner power grid technology that addresses these limitations. A smart grid is a holistic solution to these issues, consisting of a variety of operational and energy measures. This technology can deliver energy to end users through a two-way flow of communication. It is expected to generate reliable, efficient, and clean power by integrating multiple technologies, and it promises reliability, improved functionality, and economical power transmission and distribution. It also decreases greenhouse emissions by delivering clean, affordable, and efficient energy to users. The smart grid provides several benefits, such as increased grid resilience, self-healing, and improved system performance. Despite these benefits, the network has been the target of a number of cyber-attacks that violate the availability, integrity, confidentiality, and accountability of the network. For instance, in 2021 a cyber-attack on a U.S. power system shut down the power grid, leaving approximately 100,000 people without power. Another attack, in March 2018, targeted multiple nuclear power plants and water equipment. These incidents make clear why strong security approaches are needed in smart grids to detect and mitigate sophisticated cyber-attacks. For this purpose, the US National Electric Sector Cybersecurity Organization and the Department of Energy have joined efforts with other federal bodies, including the Cybersecurity for Energy Delivery Systems program and the Federal Energy Regulatory Commission, to investigate the security risks of smart grid networks. Their investigation shows that the smart grid requires reliable solutions to defend against and prevent cyber-attacks and vulnerabilities, and that with emerging technologies such as 5G and 6G, the smart grid may become more vulnerable to multistage cyber-attacks. A number of studies have been done to identify, detect, and investigate the vulnerabilities of smart grid networks. However, the existing techniques have fundamental limitations, such as low detection rates, high false-positive rates, high misdetection rates, data poisoning, data-quality and processing issues, lack of scalability, and difficulties handling huge volumes of data, and therefore cannot ensure safe, efficient, and dependable communication for smart grid networks. The goal of this dissertation is therefore to investigate the efficiency of machine learning in detecting cyber-attacks on smart grids. The proposed methods are based on supervised and unsupervised machine and deep learning, reinforcement learning, and online learning models. These models have to be trained, tested, and validated using a reliable dataset; in this dissertation, CICDDoS 2019 was used for that purpose. The results show that, among supervised machine learning models, the ensemble models outperform the traditional models. Among the deep learning models, the densely connected neural network family provides satisfactory results for detecting and classifying intrusions on the smart grid. Among unsupervised models, the variational auto-encoder provides the highest performance compared to the other unsupervised models. In reinforcement learning, the proposed Capsule Q-learning provides higher detection and lower misdetection rates compared to the other models in the literature. In online learning, the Online Sequential Euclidean Distance Routing Capsule Network model provides significantly better results in detecting intrusion attacks on the smart grid, compared to the other deep online models.
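    As a minimal sketch of the supervised-ensemble setup, the snippet below trains a gradient-boosting classifier and reports per-class precision and recall, the kind of metrics behind the detection and false-positive rates discussed above. The random arrays merely stand in for CICDDoS 2019 flow features (packet counts, durations, flag rates, and so on); loading and cleaning the real CSVs is omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder data standing in for CICDDoS 2019 flow features;
# label 1 marks an attack flow, 0 a benign flow.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```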