3,404 research outputs found
Personality Dysfunction Manifest in Words: Understanding Personality Pathology Using Computational Language Analysis
Personality disorders (PDs) are some of the most prevalent and high-risk mental health conditions, and yet remain poorly understood. Today, the development of new technologies means that there are advanced tools that can be used to improve our understanding and treatment of PD. One promising tool – indeed, the focus of this thesis – is computational language analysis. By looking at patterns in how people with personality pathology use words, it is possible to gain insight into their constellation of thoughts, feelings, and behaviours. To date, however, there has been little research at the intersection of verbal behaviour and personality pathology. Accordingly, the central goal of this thesis is to demonstrate how PD can be better understood through the analysis of natural language. This thesis presents three research articles, comprising four empirical studies, that each leverage computational language analysis to better understand personality pathology. Each paper focuses on a distinct core feature of PD, while incorporating language analysis methods: Paper 1 (Study 1) focuses on interpersonal dysfunction; Paper 2 (Studies 2 and 3) focuses on emotion dysregulation; and Paper 3 (Study 4) focuses on behavioural dysregulation (i.e., engagement in suicidality and deliberate self-harm). Findings from this research have generated a better understanding of fundamental features of PD, including insight into characterising dimensions of social dysfunction (Paper 1), maladaptive emotion processes that may contribute to emotion dysregulation (Paper 2), and psychosocial dynamics relating to suicidality and deliberate self-harm (Paper 3) in PD. Such theoretical knowledge subsequently has important implications for clinical practice, particularly regarding the potential to inform psychological therapy.
More broadly, this research highlights how language can provide implicit and unobtrusive insight into the personality and psychological processes that underlie personality pathology at large scale, using an individualised, naturalistic approach.
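The word-pattern analysis the abstract describes is typically operationalised as dictionary-based category counting (LIWC-style). The sketch below illustrates that general idea only; the categories and word lists are hypothetical examples, not the thesis's actual lexicon or method.

```python
# Illustrative sketch of dictionary-based word counting, the kind of
# computational language analysis (e.g. LIWC-style category rates)
# this line of research builds on. Categories and vocabularies here
# are hypothetical, for demonstration only.
import re

CATEGORIES = {
    "negative_emotion": {"sad", "angry", "hurt", "alone"},
    "social": {"friend", "family", "people", "talk"},
}

def category_rates(text):
    """Return each category's share of total words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

rates = category_rates("I feel sad and alone; no friend to talk to")
```

In practice such rates are computed per speaker or per document and then correlated with clinical measures; the normalisation by total word count makes texts of different lengths comparable.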
Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model.
Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
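The discharge-estimation step described above is commonly realised with the velocity-area method: surface velocities (such as LSPIV output) are scaled to depth-averaged velocities and summed over cross-section segments. The following is a minimal sketch of that standard method under assumed inputs; the segment values and the 0.85 surface-to-mean coefficient are illustrative, not figures from the thesis.

```python
# Hypothetical sketch of the velocity-area method for estimating river
# discharge from surface velocities and channel cross-section geometry.
# alpha converts surface velocity to depth-averaged velocity; 0.85 is a
# commonly assumed default for natural channels, not a thesis value.

def estimate_discharge(surface_velocities, depths, widths, alpha=0.85):
    """Sum per-segment discharge: Q = sum(alpha * v_surface * depth * width)."""
    if not (len(surface_velocities) == len(depths) == len(widths)):
        raise ValueError("segment arrays must have equal length")
    return sum(alpha * v * d * w
               for v, d, w in zip(surface_velocities, depths, widths))

# Example: a cross-section split into three 2 m wide segments
q = estimate_discharge(
    surface_velocities=[1.2, 1.5, 1.1],  # m/s, e.g. from LSPIV
    depths=[0.8, 1.4, 0.9],              # m, from topographic/bathymetric data
    widths=[2.0, 2.0, 2.0],              # m
)
```

This is where the high-resolution topographic data enters: it supplies the depths and widths that convert image-derived velocities into a volumetric discharge in m³/s.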
A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives
Unleashing the power of artificial intelligence for climate action in industrial markets
Artificial Intelligence (AI) is a game-changing capability in industrial markets that can accelerate humanity's race against climate change. Focusing on a resource-hungry and pollution-intensive industry, this study explores AI-powered climate service innovation capabilities and their overall effects. The study develops and validates an AI model, identifying three primary dimensions and nine subdimensions. Based on a dataset from the fast fashion industry, the findings show that AI-powered climate service innovation capabilities significantly influence both environmental and market performance, with environmental performance acting as a partial mediator. Specifically, the results identify the key elements of an AI-informed framework for climate action and show how this can be used to develop a range of mitigation, adaptation and resilience initiatives in response to climate change.
Security at the Edge for Resource-Limited IoT Devices
The Internet of Things (IoT) is rapidly growing, with an estimated 14.4 billion active endpoints in 2022 and a forecast of approximately 30 billion connected devices by 2027. This proliferation of IoT devices has come with significant security challenges, including intrinsic security vulnerabilities, limited computing power, and the absence of timely security updates. Attacks leveraging such shortcomings could lead to severe consequences, including data breaches and potential disruptions to critical infrastructures.
In response to these challenges, this research paper presents the IoT Proxy, a modular component designed to create a more resilient and secure IoT environment, especially in resource-limited scenarios.
The core idea behind the IoT Proxy is to externalize security-related aspects of IoT devices by channeling their traffic through a secure network gateway equipped with different Virtual Network Security Functions (VNSFs). Our solution includes a Virtual Private Network (VPN) terminator and an Intrusion Prevention System (IPS) that uses a machine learning-based technique called oblivious authentication to identify connected devices. This modular, scalable, and externalized security approach strengthens protection for resource-limited IoT devices in particular. Promising experimental results from laboratory testing demonstrate the suitability of the IoT Proxy for securing real-world IoT ecosystems.
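To make the gateway-side identification concrete, the sketch below shows one generic way an IPS might classify connected devices from traffic features. This is not the paper's oblivious authentication technique; the feature set, profile values, and nearest-centroid classifier are all illustrative assumptions.

```python
# Generic sketch of traffic-based device identification at a gateway.
# NOT the IoT Proxy's actual ML technique: the device profiles
# (mean packet size in bytes, packets per second) and the
# nearest-centroid rule are hypothetical, for illustration only.
import math

PROFILES = {
    "camera": (900.0, 25.0),       # large, frequent packets
    "thermostat": (120.0, 0.5),    # small, sparse telemetry
    "smart_plug": (80.0, 0.2),     # tiny, rare status updates
}

def identify_device(features):
    """Return the device class whose traffic profile is closest (Euclidean)."""
    return min(PROFILES,
               key=lambda name: math.dist(PROFILES[name], features))

label = identify_device((850.0, 20.0))  # traffic resembling a camera
```

In a deployment, profiles would be learned from labelled traffic rather than hard-coded, and the IPS would use the predicted class to select per-device policies at the gateway.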
Modern computing: Vision and challenges
Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with transformative developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, edge computing, and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI on edge devices. Trends emerge when one traces technological trajectory, which includes the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.
Privacy-preserving artificial intelligence in healthcare: Techniques and applications
There has been an increasing interest in translating artificial intelligence (AI) research into clinically-validated applications to improve the performance, capacity, and efficacy of healthcare services. Despite substantial research worldwide, very few AI-based applications have successfully made it to clinics. Key barriers to the widespread adoption of clinically validated AI applications include non-standardized medical records, limited availability of curated datasets, and stringent legal/ethical requirements to preserve patients' privacy. Therefore, there is a pressing need to devise new data-sharing methods in the age of AI that preserve patient privacy while developing AI-based healthcare applications. In the literature, significant attention has been devoted to developing privacy-preserving techniques and overcoming the issues hampering AI adoption in an actual clinical environment. To this end, this study summarizes the state-of-the-art approaches for preserving privacy in AI-based healthcare applications. Prominent privacy-preserving techniques such as Federated Learning and Hybrid Techniques are elaborated along with potential privacy attacks, security challenges, and future directions. [Abstract copyright: Copyright © 2023 The Author(s). Published by Elsevier Ltd. All rights reserved.]
Federated learning framework and energy disaggregation techniques for residential energy management
Residential energy use is a significant part of total power usage in developed countries. To reduce overall energy use and save funds, these countries need solutions that help them keep track of how different appliances are used at residences. Non-Intrusive Load Monitoring (NILM), or energy disaggregation, is a method for calculating individual appliance power consumption from a single meter tracking the aggregated power of several appliances. To implement any NILM approach in the real world, it is necessary to collect massive amounts of data from individual residences and transfer them to centralized servers, where they undergo extensive analysis. The centralized nature of this procedure makes it time-consuming and costly, since transferring the data from thousands of residences to the central server requires considerable time and storage. This thesis proposes utilizing the Federated Learning (FL) framework for NILM in order to make the entire system cost-effective and efficient. Rather than collecting data from all clients (residences) and sending it back to the central server, in FL local models are generated on each client's end and trained on local data. This allows FL to respond more quickly to changes in the environment and to handle data locally within a single household, increasing the system's speed. Moreover, because no data are transferred, FL prevents data leakage and preserves the clients' privacy, leading to a safe and trustworthy system. For the first time, this work investigates the performance of deploying FL in NILM with two different energy disaggregation models: Short Sequence-to-Point (Seq2Point) and Variational Auto-Encoder (VAE). Short Seq2Point, which uses fewer samples as the input window for each appliance, aims to approximate real-time energy disaggregation for the different appliances. Despite being a lightweight model, Short Seq2Point lacks generalizability and may face challenges when disaggregating multi-state appliances.
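The federated training loop described above typically aggregates client models with federated averaging (FedAvg): each residence trains locally and only model weights, never raw meter data, travel to the server. The sketch below shows the aggregation step in its simplest form; the flat-list weight representation is an illustrative simplification of real model parameters.

```python
# Minimal sketch of federated averaging (FedAvg) as it might apply to
# NILM: the server averages locally trained weights element-wise and
# redistributes the result. Weight vectors here are toy values, not
# parameters of the Seq2Point or VAE models from the thesis.

def federated_average(client_weights):
    """Average model weights from several clients element-wise."""
    n_clients = len(client_weights)
    return [sum(ws) / n_clients for ws in zip(*client_weights)]

# Example: three residences each contribute a locally trained weight vector
global_weights = federated_average([
    [0.2, 0.4, 0.6],   # residence A
    [0.4, 0.6, 0.8],   # residence B
    [0.6, 0.8, 1.0],   # residence C
])
# Raw consumption data never leaves any household; only weights are shared.
```

Each round, the server sends `global_weights` back to the clients, which resume local training from the averaged model; this is what lets the system stay responsive to local changes while preserving privacy.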