5,488 research outputs found
Radio frequency fingerprint identification for Internet of Things: A survey
Radio frequency fingerprint (RFF) identification is a promising technique for identifying Internet of Things (IoT) devices. This paper presents a comprehensive survey on RFF identification, which covers various aspects ranging from related definitions to details of each stage in the identification process, namely signal preprocessing, RFF feature extraction, further processing, and RFF identification. Specifically, three main steps of preprocessing are summarized, including carrier frequency offset estimation, noise elimination, and channel cancellation. Besides, three kinds of RFFs are categorized, comprising I/Q signal-based, parameter-based, and transformation-based features. Meanwhile, feature fusion and feature dimension reduction are elaborated as two main further processing methods. Furthermore, a novel framework is established from the perspective of closed set and open set problems, and the related state-of-the-art methodologies are investigated, including approaches based on traditional machine learning, deep learning, and generative models. Additionally, we highlight the challenges faced by RFF identification and point out future research trends in this field
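The first preprocessing step the survey lists, carrier frequency offset estimation, can be illustrated with a minimal sketch. This assumes (hypothetically) a preamble whose second half repeats the first, the classic Moose-style correlation estimator; it is an illustration of the general technique, not a method taken from the survey.

```python
import cmath
import math

def estimate_cfo(samples, half_len, sample_rate):
    """Moose-style CFO estimate (Hz) from a preamble whose second
    half repeats the first after half_len samples."""
    # A residual CFO rotates the second half by a constant phase
    # relative to the first; the correlation angle recovers it.
    acc = sum(samples[n].conjugate() * samples[n + half_len]
              for n in range(half_len))
    return cmath.phase(acc) * sample_rate / (2 * math.pi * half_len)

# Synthetic check (all numbers illustrative): a repeating preamble
# rotated by a known 5 kHz offset at 1 MS/s.
fs, cfo, N = 1_000_000.0, 5_000.0, 64
preamble = [cmath.exp(2j * math.pi * 0.125 * n) for n in range(2 * N)]
rx = [p * cmath.exp(2j * math.pi * cfo * n / fs)
      for n, p in enumerate(preamble)]
print(round(estimate_cfo(rx, N, fs), 1))  # 5000.0
```

The estimate is unambiguous only while the phase rotation over one half-preamble stays below pi, which bounds the largest offset this correlator can resolve.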
Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model.
Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science
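The LSPIV principle used in this thesis reduces, in miniature, to cross-correlating intensity patterns between two frames to find the tracer displacement, then scaling by pixel size and frame interval. The 1-D toy below (all values illustrative, not from the thesis) shows only that matching step; real LSPIV correlates 2-D interrogation windows and applies orthorectification and velocity indexing.

```python
def best_shift(frame_a, frame_b, max_shift):
    """Integer displacement maximising cross-correlation between two
    1-D intensity profiles (the matching step at the heart of LSPIV)."""
    def corr(shift):
        lo, hi = max(0, -shift), len(frame_a) - max(0, shift)
        return sum(frame_a[i] * frame_b[i + shift] for i in range(lo, hi))
    return max(range(-max_shift, max_shift + 1), key=corr)

def surface_velocity(frame_a, frame_b, dt, metres_per_pixel, max_shift=10):
    """Convert the inter-frame pixel displacement into a velocity (m/s)."""
    return best_shift(frame_a, frame_b, max_shift) * metres_per_pixel / dt

# Illustrative tracer pattern advected 3 pixels between frames.
a = [0.0] * 30
b = [0.0] * 30
for i in (5, 9, 14):
    a[i] = 1.0
    b[i + 3] = 1.0
print(round(surface_velocity(a, b, dt=0.5, metres_per_pixel=0.2), 3))  # 1.2
```

Surface velocities obtained this way are then converted to depth-averaged values before they can constrain a hydraulic model or a discharge estimate.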
Securing NextG networks with physical-layer key generation: A survey
As the development of next-generation (NextG) communication networks continues, a tremendous number of devices are accessing the network and the amount of information is exploding. However, with the increase of sensitive data that requires confidentiality to be transmitted and stored in the network, wireless network security risks are further amplified. Physical-layer key generation (PKG) has received extensive attention in security research due to its solid information-theoretic security proof, ease of implementation, and low cost. Nevertheless, the applications of PKG in NextG networks are still in the preliminary exploration stage. Therefore, we survey existing research and discuss (1) the performance advantages of PKG compared to cryptography schemes, (2) the principles and processes of PKG, as well as research progress in previous network environments, and (3) new application scenarios and development potential for PKG in NextG communication networks, particularly analyzing the effect and prospects of PKG in massive multiple-input multiple-output (MIMO), reconfigurable intelligent surfaces (RISs), artificial intelligence (AI) enabled networks, integrated space-air-ground networks, and quantum communication. Moreover, we summarize open issues and provide new insights into the development trends of PKG in NextG networks
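The basic PKG process the survey covers (probe the reciprocal channel, quantise the measurements into bits, reconcile mismatches) can be sketched as follows. The guard-band quantiser and the RSSI traces are illustrative assumptions, not a protocol from the survey.

```python
def quantise_rssi(rssi, guard=1.0):
    """Guard-band quantiser: measurements well above the mean become 1,
    well below become 0, and samples inside the guard band are dropped
    to reduce bit mismatches between the two sides."""
    mean = sum(rssi) / len(rssi)
    bits, kept = [], []
    for i, r in enumerate(rssi):
        if r > mean + guard:
            bits.append(1)
            kept.append(i)
        elif r < mean - guard:
            bits.append(0)
            kept.append(i)
    return bits, kept  # kept indices are exchanged during reconciliation

# Illustrative reciprocal-but-noisy RSSI traces for the two sides.
alice = [-40, -55, -42, -60, -41, -58, -39, -57]
bob = [-41, -54, -43, -61, -40, -59, -40, -56]
key_a, _ = quantise_rssi(alice)
key_b, _ = quantise_rssi(bob)
print(key_a == key_b, key_a)  # True [1, 0, 1, 0, 1, 0, 1, 0]
```

In practice an information reconciliation step corrects the residual bit disagreements and privacy amplification removes any information leaked in doing so.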
How do patients and providers navigate the 'corruption complex' in mixed health systems? The case of Abuja, Nigeria.
INTRODUCTION:
Over the last decades, scholars have sought to investigate the causes, manifestations, and impacts of corruption in healthcare. Most of this scholarship has focused on corruption as it occurs in public health facilities. However, in Nigeria, where most residents attend private health facilities for at least some of their care needs, this focus is incomplete. In such contexts, it is important to understand corruption as it occurs across both public and private settings, and in the interactions between them. This study seeks to address this gap. It aims to examine how corruption is experienced by, and impacts upon, patients and providers as they navigate the 'corruption complex' in the mixed health system of Abuja, Nigeria.
OBJECTIVES:
This over-arching aim is addressed via three interrelated objectives, as follows:
1. To investigate the experiences of patients and providers concerning the causes, manifestations, and impacts of corruption in public health facilities in Abuja, Nigeria.
2. To investigate patient and provider experiences of corruption as they relate to private health facilities in Abuja, Nigeria.
3. To investigate how, and the extent to which, corruption is enabled by the co-existence of and interactions between public and private health facilities in the context of the mixed health system of Nigeria, and of Abuja in particular.
METHODS:
All three objectives are addressed via a qualitative exploratory study. Data were collected in Abuja, Nigeria's Federal Capital Territory (between October 2021 and May 2022) through: (i) in-depth interviews with 53 key informants, representing a range of patient and provider types, and policymakers; and (ii) participant observation over eight months of fieldwork. The research took place in three secondary-level public health facilities (Gwarinpa, Kubwa, and Wuse General Hospital) and three equivalent-sized private health facilities (Nissa, Garki, and King's Care Hospital) in Abuja. The empirical data were analysed using Braun and Clarke's (2006) reflexive thematic analysis approach and presented in narrative form. Abuja was selected as the research setting because the city is representative of the mixed health system structures that exist in Nigeria, especially in the country's larger urban areas.
RESULTS:
Objective 1: Corruption in public health facilities is driven by a shortage of resources, low salaries, the commercialisation of health, relationships between patients and providers, and weak accountability structures. Corruption takes various forms, including bribery, informal payments, theft, influence activities associated with nepotism, and pressure from informal rules. Impacts include erosion of the right to health care and patient dignity, alongside increased barriers to access, including financial barriers, especially for poorer patients.
Objective 2: Corruption in private health facilities is driven by incentives aimed at profit maximisation, poor regulation, and lack of oversight. Corruption takes various forms, including inappropriate or unnecessary prescriptions (often driven by the potential for kickbacks), forging of medical reports, over-invoicing and other related types of fraud, and under- or over-treatment of patients. Impacts include reductions to the quality of care provided and exacerbation of financial risks to patients.
Objective 3: The nature of public-private sector interactions creates scope for several forms of corruption. For example, these interactions contribute to the causes of corruption in the public sector - especially the problem of scarcity of resources. Related manifestations include dual practice, absenteeism, and theft (e.g., diversion of patients, medical supplies, and equipment from public to private facilities). The impacts of such practices include inequities of access, for example, due to delays in and denials of needed services and additional financial barriers encountered in public facilities, alongside reductions to quality of care, pricing transparency and financial protection in private facilities.
CONCLUSION:
Patients experience corruption in both public and private health facilities in Abuja, Nigeria. The causes, manifestations and impacts of corruption differ across these settings. In the public sector, corruption creates financial and non-financial barriers to care, aggravating inequities of access. In the private health sector, corruption undermines quality of care and exacerbates financial risks. The public-private mix is itself implicated in the problem, giving rise to new opportunities for corruption, to the detriment of patients' health and welfare. For policymakers in Nigeria to address the problem of corruption, a cross-sectoral approach, inclusive of the full range of providers within the mixed health system, will be required
Innermost Echoes: Integrating Real-Time Physiology into Live Music Performances
In this paper, we propose a method for utilizing musical artifacts and physiological data as a means for creating a new form of live music experience that is rooted in the physiology of the performers and audience members. By utilizing physiological data (namely Electrodermal Activity (EDA) and Heart Rate Variability (HRV)) and applying this data to musical artifacts including a robotic koto (a traditional 13-string Japanese instrument fitted with solenoids and linear actuators), a Eurorack synthesizer, and Max/MSP software, we aim to develop a new form of semi-improvisational and significantly indeterminate performance practice. The method has since evolved into a multi-modal methodology which honors improvisational performance practices and utilizes physiological data, offering both performers and audiences an ever-changing and intimate experience.
In our first exploratory phase, we focused on the development of a means for controlling a bespoke robotic koto in conjunction with a Eurorack synthesizer system and Max/MSP software for controlling the incoming data. We integrated a reliance on physiological data to infuse a more directly human element into this artifact system. This allows a significant portion of the decision-making to be directly controlled by the incoming physiological data in real time, thereby affording a sense of performativity within this non-living system. Our aim is to continue the development of this method to strike a novel balance between intentionality and impromptu performative results
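A mapping stage like the one described, where physiological readings drive musical parameters, often reduces to scaling a sensor value into a control range before it reaches Max/MSP or a control voltage. The sketch below is a hypothetical illustration; the ranges and parameter names are invented, not taken from the paper.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into a control range, clamping
    outliers before it drives an instrument parameter."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(1.0, max(0.0, t))
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mappings (ranges invented for illustration):
# EDA (microsiemens) -> solenoid strike rate (Hz),
# HRV RMSSD (ms) -> synthesizer filter cutoff (Hz).
eda_us, rmssd_ms = 7.5, 42.0
strike_rate = scale(eda_us, 2.0, 12.0, 0.5, 8.0)
cutoff = scale(rmssd_ms, 20.0, 120.0, 200.0, 2000.0)
print(round(strike_rate, 3), round(cutoff, 1))
```

The clamp keeps motion-artifact spikes in the sensor signal from throwing an actuator outside its safe range, which matters when the output drives physical solenoids.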
A review of differentiable digital signal processing for music and speech synthesis
The term 'differentiable digital signal processing' describes a family of techniques in which loss function gradients are backpropagated through digital signal processors, facilitating their integration into neural networks. This article surveys the literature on differentiable audio signal processing, focusing on its use in music and speech synthesis. We catalogue applications to tasks including music performance rendering, sound matching, and voice transformation, discussing the motivations for and implications of the use of this methodology. This is accompanied by an overview of digital signal processing operations that have been implemented differentiably, which is further supported by a web book containing practical advice on differentiable synthesiser programming (https://intro2ddsp.github.io/). Finally, we highlight open challenges, including optimisation pathologies, robustness to real-world conditions, and design trade-offs, and discuss directions for future research
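The core idea, backpropagating a loss gradient through a signal processor, can be shown with a toy example: a sinusoidal oscillator with a learnable gain, fitted by gradient descent with a hand-derived gradient. This is a didactic sketch only; real DDSP systems use automatic differentiation over far richer processors.

```python
import math

def oscillator(freq, n_samples, sr=16000):
    """A plain sinusoidal oscillator, the simplest DSP building block."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(n_samples)]

# Toy "differentiable DSP": fit the oscillator's gain to a target
# signal by descending the L2 loss, with the gradient derived by hand.
s = oscillator(440.0, 256)
target = [0.7 * x for x in s]  # "recording" with an unknown gain of 0.7
gain, lr = 0.0, 0.003
for _ in range(200):
    # d/d(gain) of sum((gain*x - t)^2) = sum(2*x*(gain*x - t))
    grad = sum(2 * x * (gain * x - t) for x, t in zip(s, target))
    gain -= lr * grad
print(round(gain, 3))  # 0.7
```

Because the loss here is quadratic in the gain, descent converges cleanly; the optimisation pathologies the article highlights arise for parameters such as oscillator frequency, where the loss surface is highly non-convex.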
Online semi-supervised learning in non-stationary environments
Existing Data Stream Mining (DSM) algorithms assume the availability of labelled and
balanced data, immediately or after some delay, to extract worthwhile knowledge from the
continuous and rapid data streams. However, in many real-world applications such as
Robotics, Weather Monitoring, Fraud Detection Systems, Cyber Security, and Computer
Network Traffic Flow, an enormous amount of high-speed data is generated by Internet of
Things sensors and real-time data on the Internet. Manual labelling of these data streams
is not practical due to time consumption and the need for domain expertise. Another
challenge is learning under Non-Stationary Environments (NSEs), which occurs due to
changes in the data distributions in a set of input variables and/or class labels. The problem
of Extreme Verification Latency (EVL) under NSEs is referred to as Initially Labelled Non-Stationary Environment (ILNSE). This is a challenging task because the learning algorithms
have no access to the true class labels directly when the concept evolves. Several approaches
exist that deal with NSE and EVL in isolation. However, few algorithms address both issues
simultaneously. This research directly responds to the ILNSE challenge by proposing two
novel algorithms: the 'Predictor for Streaming Data with Scarce Labels' (PSDSL) and the
Heterogeneous Dynamic Weighted Majority (HDWM) classifier. PSDSL is an Online Semi-Supervised Learning (OSSL) method for real-time DSM and is closely related to label
scarcity issues in online machine learning.
The key capabilities of PSDSL include learning from a small amount of labelled data in an
incremental or online manner and being able to predict at any time. To achieve this,
PSDSL utilises both labelled and unlabelled data to train the prediction models, meaning it
continuously learns from incoming data and updates the model as new labelled or
unlabelled data becomes available over time. Furthermore, it can predict under NSE
conditions under the scarcity of class labels. PSDSL is built on top of the HDWM classifier,
which preserves the diversity of the classifiers. PSDSL and HDWM can intelligently switch
and adapt to the conditions. PSDSL switches between self-learning, micro-clustering and
CGC learning states, whichever approach is beneficial, based on the characteristics of
the data stream. HDWM makes use of 'seed' learners of different types in an ensemble to
maintain its diversity. The ensembles are simply the combination of predictive models
grouped to improve the predictive performance of a single classifier.
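The weighted-majority ensembles described above can be sketched minimally: each base learner votes with its current weight, and learners that vote wrongly are penalised multiplicatively. This is a generic WMA-style sketch under assumed parameters (beta = 0.5), not the HDWM implementation from the thesis.

```python
def weighted_majority(predictions, weights):
    """Combine base-learner votes, each counted in proportion to the
    learner's current weight, as in WMA/DWM-style ensembles."""
    tally = {}
    for pred, w in zip(predictions, weights):
        tally[pred] = tally.get(pred, 0.0) + w
    return max(tally, key=tally.get)

def update_weights(predictions, weights, truth, beta=0.5):
    """Multiplicatively penalise learners that predicted wrongly
    (the classic Weighted Majority update), then renormalise."""
    new = [w * (beta if p != truth else 1.0)
           for p, w in zip(predictions, weights)]
    total = sum(new)
    return [w / total for w in new]

preds, weights = ["spam", "ham", "spam"], [0.5, 0.3, 0.2]
print(weighted_majority(preds, weights))  # spam (weight 0.7 vs 0.3)
weights = update_weights(preds, weights, truth="ham")
print([round(w, 2) for w in weights])  # wrong learners lose weight
```

Dynamic variants such as DWM extend this update by adding and removing base learners as weights decay, which is the mechanism HDWM makes heterogeneous.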
PSDSL is empirically evaluated against COMPOSE, LEVELIW, SCARGC and MClassification
on benchmarks, NSE datasets as well as Massive Online Analysis (MOA) data streams and real-world datasets. The results showed that PSDSL performed significantly better than
existing approaches on most real-time data streams including randomised data instances.
PSDSL performed significantly better than 'Static', i.e. a classifier that is not updated after it is
trained with the first examples in the data streams. When applied to MOA-generated data
streams, PSDSL achieved the best average rank (1.5) and thus performed significantly
better than SCARGC, while SCARGC performed no better than the Static baseline. PSDSL
also achieved better average prediction accuracies in a shorter time than SCARGC.
The HDWM algorithm is evaluated on artificial and real-world data streams against existing
well-known approaches such as the heterogeneous WMA and the homogeneous Dynamic
Weighted Majority (DWM) algorithm. The results showed that HDWM performed significantly better than WMA
and DWM. Also, when recurring concept drifts were present, the predictive performance of
HDWM showed an improvement over DWM. In both drift and real-world streams,
significance tests and post hoc comparisons found significant differences between
algorithms; HDWM performed significantly better than DWM and WMA when applied to
MOA data streams and four real-world datasets (Electric, Spam, Sensor and Forest Cover). The
seeding mechanism and the dynamic inclusion of new base learners in the HDWM algorithm
benefit from both forgetting and retaining models. The algorithm also provides the
flexibility to select the optimal base classifier in its ensemble depending on the problem.
A new approach, Envelope-Clustering is introduced to resolve the cluster overlap conflicts
during the cluster labelling process. In this process, PSDSL transforms the centroidsâ
information of micro-clusters into micro-instances and generates new clusters called
Envelopes. The nearest envelope clusters assist the conflicted micro-clusters and
successfully guide the cluster labelling process after the concept drifts in the absence of true
class labels. PSDSL has been evaluated on the real-world 'keystroke dynamics' problem,
and the results show that PSDSL achieved higher prediction accuracy (85.3%) than SCARGC
(81.6%), while the Static approach (49.0%) degrades performance significantly due to
changes in users' typing patterns. Furthermore, the predictive accuracies of SCARGC were
found to fluctuate widely (between 41.1% and 81.6%) depending on the value of the
parameter 'k' (number of clusters), while PSDSL automatically determines the best value
for this parameter
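The cluster-labelling problem PSDSL addresses after a drift can be illustrated with a simplified nearest-centroid label propagation. Envelope-Clustering adds envelope generation and conflict resolution on top of an idea like this; the sketch below is illustrative, not the thesis algorithm.

```python
import math

def label_clusters(new_centroids, old_centroids, old_labels):
    """Give each unlabelled (post-drift) centroid the label of the
    nearest previously labelled centroid, a simplified form of the
    centroid-based label propagation that Envelope-Clustering refines."""
    labels = []
    for c in new_centroids:
        nearest = min(range(len(old_centroids)),
                      key=lambda i: math.dist(c, old_centroids[i]))
        labels.append(old_labels[nearest])
    return labels

# Illustrative 2-D centroids before and after a small drift.
old = [(0.0, 0.0), (5.0, 5.0)]
new = [(0.5, 0.2), (4.6, 5.3), (5.2, 4.8)]
print(label_clusters(new, old, ["A", "B"]))  # ['A', 'B', 'B']
```

The nearest-neighbour rule breaks down when two new clusters overlap or sit equidistant between labelled centroids, which is exactly the conflict case the Envelope mechanism is designed to resolve.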
Proceedings of the 10th International Congress on Architectural Technology (ICAT 2024): Architectural Technology Transformation.
The profession of architectural technology is influential in the transformation of the built environment regionally, nationally, and internationally. The congress provides a platform for industry, educators, researchers, and the next generation of built environment students and professionals to showcase where their influence is transforming the built environment through novel ideas, businesses, leadership, innovation, digital transformation, research and development, and sustainable forward-thinking technological and construction assembly design
Enhancing the forensic comparison process of common trace materials through the development of practical and systematic methods
An ongoing advancement in forensic trace evidence has driven the development of new and objective methods for comparing various materials. While many standard guides have been published for use in trace laboratories, different areas require a more comprehensive understanding of error rates, and there is an urgent need for harmonizing methods of examination and interpretation. Two critical areas are the forensic examination of physical fits and the comparison of spectral data, which depend highly on the examiner's judgment.
The long-term goal of this study is to advance and modernize the comparative process of physical fit examinations and spectral interpretation. This goal is fulfilled through several avenues: 1) improvement of quantitative-based methods for various trace materials, 2) scrutiny of the methods through interlaboratory exercises, and 3) addressing fundamental aspects of the discipline using large experimental datasets, computational algorithms, and statistical analysis.
A substantial new body of knowledge has been established by analyzing population sets of nearly 4,000 items representative of casework evidence. First, this research identifies material-specific relevant features for duct tapes and automotive polymers. Then, this study develops reporting templates to facilitate thorough and systematic documentation of an analyst's decision-making process and minimize risks of bias. It also establishes criteria for utilizing a quantitative edge similarity score (ESS) for tapes and automotive polymers that yield relatively high accuracy (85% to 100%) and, notably, no false positives. Finally, the practicality and performance of the ESS method for duct tape physical fits are evaluated by forensic practitioners through two interlaboratory exercises. Across these studies, accuracy using the ESS method ranges between 95% and 99%, and again no false positives are reported. The practitioners' feedback demonstrates the method's potential to assist in training and improve peer verifications.
This research also develops and trains computational algorithms to support analysts making decisions on sample comparisons. The automated algorithms in this research show the potential to provide objective and probabilistic support for determining a physical fit and demonstrate comparative accuracy to the analyst. Furthermore, additional models are developed to extract feature edge information from the systematic comparison templates of tapes and textiles to provide insight into the relative importance of each comparison feature. A decision tree model is developed to assist physical fit examinations of duct tapes and textiles and demonstrates comparative performance to the trained analysts. The computational tools also evaluate the suitability of partial sample comparisons that simulate situations where portions of the item are lost or damaged.
Finally, an objective approach to interpreting complex spectral data is presented. A comparison metric consisting of spectral angle contrast ratios (SCAR) is used as a model to assess more than 94 different-source and 20 same-source electrical tape backings. The SCAR metric results in a discrimination power of 96% and demonstrates the capacity to capture information on the variability between different-source samples and the variability within same-source samples. Application of the random-forest model allows for the automatic detection of primary differences between samples. The developed threshold could assist analysts with making decisions on the spectral comparison of chemically similar samples.
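The spectral angle underlying the SCAR metric treats each spectrum as a vector and measures the angle between them, so scaled copies of the same spectral shape score as identical. The sketch below shows only that angle computation, with invented spectra; the contrast-ratio and threshold layers of SCAR are not reproduced here.

```python
import math

def spectral_angle(s1, s2):
    """Angle (radians) between two spectra treated as vectors; scaled
    copies of one spectrum give an angle near zero, so the measure
    responds to spectral shape rather than overall intensity."""
    dot = sum(a * b for a, b in zip(s1, s2))
    n1 = math.sqrt(sum(a * a for a in s1))
    n2 = math.sqrt(sum(b * b for b in s2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Illustrative spectra (not real tape data): a scaled copy of the
# first spectrum versus a differently shaped one.
same = spectral_angle([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
diff = spectral_angle([1.0, 2.0, 3.0], [3.0, 1.0, 0.5])
print(round(same, 4), round(diff, 2))
```

Because intensity scaling cancels in the angle, the metric is insensitive to baseline gain differences between instruments, which is one reason angle-based comparisons suit inter-sample spectral screening.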
This research provides the forensic science community with novel approaches to comparing materials commonly seen in forensic laboratories. The outcomes of this study are anticipated to offer forensic practitioners new and accessible tools for incorporation into current workflows to facilitate systematic and objective analysis and interpretation of forensic materials and support analysts' opinions
- …