Effects of Particle Swarm Optimisation on a Hybrid Load Balancing Approach for Resource Optimisation in Internet of Things
This article belongs to the Special Issue Emerging Machine Learning Techniques in Industrial Internet of Things. Copyright © 2023 by the authors. The internet of things (IoT), a collection of diversified distributed nodes, implies a varying choice of activities, ranging from sleep monitoring and activity tracking to more complex tasks such as data analytics and management. With an increase in scale come even greater complexities, leading to significant challenges such as excess energy dissipation, which can shorten the lifespan of IoT devices. The IoT's multiple variable activities and ample data management greatly influence device lifespan, making resource optimisation a necessity. Existing approaches to resource management and optimisation give limited attention to device energy dissipation. This paper therefore proposes a decentralised approach, an amalgamation of efficient clustering techniques, edge computing paradigms, and a hybrid algorithm, targeted at curbing the resource optimisation and lifespan problems associated with IoT devices. The decentralised topology places equal importance on resource allocation and resource scheduling, as opposed to existing methods, by incorporating static (round robin), dynamic (resource-based), and clustering (particle swarm optimisation) algorithms to provide a solid foundation for an optimised and secure IoT. The simulation constructs five test-case scenarios and uses performance indicators to evaluate the effect the proposed model has on resource optimisation in IoT. The results indicate the superiority of the proposed PSOR2B over the ant colony approach, the current centralised optimisation approach, LEACH, and C-LBCA. This research received no external funding.
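As a point of reference for the hybrid algorithm's clustering component, the following is a minimal particle swarm optimisation sketch applied to a toy load-balancing objective. It is not the paper's PSOR2B: the objective (variance of per-node load shares), the node count, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_particles, n_iters = 4, 20, 100
task_load = 100.0  # total load to distribute (hypothetical)

def imbalance(w):
    """Variance of per-node load for softmax-normalised weights w."""
    e = np.exp(w - w.max())        # numerically stable softmax
    share = e / e.sum()
    return np.var(share * task_load)

# Standard PSO loop: each particle is a candidate weight vector.
pos = rng.normal(size=(n_particles, n_nodes))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([imbalance(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w_inertia, c1, c2 = 0.7, 1.5, 1.5  # common default coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([imbalance(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

# The best particle should yield a near-balanced share vector.
print(imbalance(gbest))
```

The swarm converges towards equal weights because the variance objective is minimised when every node receives the same share of the load.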
Colour technologies for content production and distribution of broadcast content
The requirement of colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical deployable solutions, developing a cost-effective pipeline which integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using Conditional GANs is proposed to improve the content generalisation and colourfulness of existing baselines. Moreover, a more conservative solution is considered by providing references to guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed to improve on existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce the generation of temporal flickering and the propagation of errors when such methods are applied frame by frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution, with the aim of effectively delivering the processed content to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time. Therefore, aiming at efficient deployment within the latest video coding standards, this work also focused on the simplification of the proposed architecture to obtain a more compact and explainable model.
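Colourfulness, one of the qualities the colourisation work aims to improve, is commonly scored with the Hasler-Süsstrunk metric. The sketch below implements that standard metric as a plausible evaluation helper; it is not code from the thesis.

```python
import numpy as np

def colourfulness(img):
    """Hasler-Suesstrunk colourfulness metric for an RGB image in [0, 255].

    Combines the spread and magnitude of the opponent-colour channels
    rg = R - G and yb = 0.5 * (R + G) - B.
    """
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g
    yb = 0.5 * (r + g) - b
    std = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std + 0.3 * mean

grey = np.full((8, 8, 3), 128, dtype=np.uint8)  # achromatic image
print(colourfulness(grey))  # 0.0: no colour at all
```

An achromatic image scores exactly zero because both opponent channels vanish, which makes the metric a convenient sanity check when comparing colourised outputs against baselines.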
An exploration of the language within Ofsted reports and their influence on primary school performance in mathematics: a mixed methods critical discourse analysis
This thesis contributes to the understanding of the language of Ofsted reports, their similarity to one another, and associations between the terms used within 'areas for improvement' sections and subsequent outcomes for pupils. The research responds to concerns from serving headteachers that Ofsted reports are overly similar, do not capture the unique story of their school, and are unhelpful for improvement. In seeking to answer 'how similar are Ofsted reports', the study uses two tools, a plagiarism detection software (Turnitin) and a discourse analysis tool (NVivo), to identify trends within and across a large corpus of reports.
The approach is based on critical discourse analysis (Van Dijk, 2009; Fairclough, 1989) but is shaped as practitioner enquiry, seeking power in the form of impact on pupils and practitioners rather than a more traditional, sociological application of the method.
The research found that in 2017, primary school section 5 Ofsted reports had more than half of their content exactly duplicated within other primary school inspection reports published that same year. Discourse analysis showed that the quality assurance process overrode variables such as inspector designation, gender, or team size, leading to three distinct patterns of duplication: block duplication, self-referencing, and template writing. The most unique part of a report was found to be the 'area for improvement' section, which was tracked to externally verified outcomes for pupils using terms linked to 'mathematics'. Schools required to improve mathematics in their areas for improvement improved progress and attainment in mathematics significantly more than national rates. These findings indicate a positive correlation between the inspection reporting process and a beneficial impact on pupil outcomes in mathematics, and that the significant similarity of one report to another had no bearing on the usefulness of the report for school improvement purposes within this corpus.
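The kind of cross-report similarity the study measures can be illustrated with a minimal bag-of-words cosine similarity. This is a stand-in sketch only; the thesis used Turnitin, not this code, and the two example sentences are invented.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two texts using bag-of-words counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = set(ca) & set(cb)
    dot = sum(ca[w] * cb[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

# Two hypothetical 'area for improvement' phrasings differing by one word:
r1 = "the school should improve attainment in mathematics"
r2 = "the school should improve progress in mathematics"
print(round(cosine_similarity(r1, r2), 2))
```

Even this crude measure scores the two phrasings as highly similar, which mirrors the thesis finding that template writing produces heavily duplicated report text.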
Composable code generation for high order, compatible finite element methods
It has been widely recognised in the HPC communities across the world that exploiting modern computer architectures, including exascale machines, to their full extent requires software communities to adapt their algorithms. Computational methods with a high ratio of floating point operations to bandwidth are favourable. For solving partial differential equations, which can model many physical problems, high order finite element methods can calculate approximations with high efficiency when a good solver is employed. Matrix-free algorithms solve the corresponding equations with a high arithmetic intensity. Vectorisation speeds up the operations by executing one instruction on multiple data elements.
Another recent development for solving partial differential equations is compatible (mimetic) finite element methods. In particular, with application to geophysical flows, compatible discretisations exhibit the desired numerical properties required for accurate approximations. Among others, this has been recognised by the UK Met Office, whose new dynamical core for weather and climate forecasting is built on a compatible discretisation. Hybridisation has proven to be an efficient solver for the corresponding equation systems, because it removes some inter-elemental coupling and localises expensive operations.
This thesis combines the recent advances on vectorised, matrix-free, high order finite element methods in the HPC community on the one hand and hybridised, compatible discretisations in the geophysical community on the other. In previous work, a code generation framework was developed to support the localised linear algebra required for hybridisation. First, the framework is adapted to support vectorisation, and it is further extended so that the equations can be solved fully matrix-free. Promising performance results complete the thesis. Open Access
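The matrix-free idea central to this abstract can be shown in miniature: apply an operator on the fly instead of assembling and storing its matrix. The 1D second-difference stencil below is a toy illustration, not the thesis's code-generated kernels.

```python
import numpy as np

n = 8

def laplacian_matvec(u):
    """Apply the 1D second-difference operator without storing a matrix.

    Computes (A u)_i = 2 u_i - u_{i-1} - u_{i+1} with zero boundary
    neighbours, touching only the vector itself.
    """
    out = 2.0 * u
    out[1:] -= u[:-1]
    out[:-1] -= u[1:]
    return out

# Dense reference: the tridiagonal matrix the stencil implicitly represents.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.arange(n, dtype=float)

print(np.allclose(A @ u, laplacian_matvec(u)))  # True
```

The stencil version needs O(n) memory instead of O(n^2) and streams through the data once, which is what gives matrix-free methods their high arithmetic intensity on bandwidth-limited hardware.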
Measuring tropospheric water vapour and surface emissivity using far-infrared radiances
This thesis describes two strands of work relating to the gathering of far-infrared (wavenumbers 10-667 cm⁻¹, wavelengths 15-1000 µm) radiance measurements, exploring the capabilities of the instruments involved and examining the data generated in the context of the relevant models.
The first strand of the thesis explores the role of atmospheric water vapour which absorbs strongly in the far-infrared. The work described makes use of spectrally resolved far-infrared radiance measurements taken during the PIKNMIX-F airborne field campaign. On this field campaign co-incident upwelling mid- and far-infrared spectra were recorded in clear sky and overflying cirrus. The clear sky spectra from one flight are used to investigate the sensitivity of the far-infrared to the atmospheric water profile. Forward modelling is used to explore the changes to the expected radiance caused by changes in the surface properties and water vapour spectroscopy. Retrievals of water vapour and temperature profiles are also carried out. These show that the far-infrared radiance measurements contain more information about the atmospheric water vapour profile than the co-incident mid-infrared radiance measurements for this set of instruments.
The second strand of the thesis describes the new FINESSE instrument and its first measurements of far-infrared surface emissivity. FINESSE combines a commercial Fourier transform spectrometer with a spectral range of 400-1600 cm⁻¹ and a custom pointing and calibration system. The main purpose of FINESSE is to make in-situ measurements of surface emissivity extending into the far-infrared. As part of the development of FINESSE, a simulator is produced that allows the emissivity retrieval to be tested under different environmental conditions. This thesis describes the first measurements of emissivity made by FINESSE. The emissivity of deionised water is retrieved using radiance measurements made from the rooftop of Imperial College London during summer and winter conditions. The measurements compare favourably to theoretical calculations and previous mid-infrared measurements. Open Access
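The quantity underlying all of these measurements is blackbody spectral radiance as a function of wavenumber. The sketch below evaluates the textbook Planck function over the thesis's far-infrared band; it is standard physics, independent of the FINESSE instrument, and the 288 K surface temperature is an illustrative assumption.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m / s
KB = 1.380649e-23    # Boltzmann constant, J / K

def planck_radiance(wavenumber_cm, temp_k):
    """Blackbody radiance in W m^-2 sr^-1 (cm^-1)^-1.

    Evaluates B(nu, T) = 2 h c^2 nu^3 / (exp(h c nu / (k T)) - 1)
    with nu in m^-1, then rescales to per-cm^-1 units.
    """
    nu = np.asarray(wavenumber_cm, dtype=float) * 100.0  # cm^-1 -> m^-1
    b = 2.0 * H * C**2 * nu**3 / np.expm1(H * C * nu / (KB * temp_k))
    return b * 100.0  # per m^-1 -> per cm^-1

# The far-infrared band from the thesis, at a typical surface temperature:
wn = np.linspace(10.0, 667.0, 5)
print(planck_radiance(wn, 288.0))
```

For a 288 K scene the radiance at 500 cm⁻¹ comes out near 0.13 W m⁻² sr⁻¹ (cm⁻¹)⁻¹, consistent with the far-infrared carrying a large fraction of the terrestrial emission that these instruments target.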
Acoustic modelling, data augmentation and feature extraction for in-pipe machine learning applications
Gathering measurements from infrastructure, private premises, and harsh environments can be difficult and expensive. From this perspective, the development of
new machine learning algorithms is strongly affected by the availability of training
and test data. We focus on audio archives for in-pipe events. Although several
examples of pipe-related applications can be found in the literature, datasets of
audio/vibration recordings are much scarcer, and the only references found relate
to leakage detection and characterisation. Therefore, this work proposes a methodology to relieve the burden of data collection for acoustic events in deployed pipes.
The aim is to maximise the yield of small sets of real recordings and demonstrate
how to extract effective features for machine learning. The methodology developed
requires the preliminary creation of a soundbank of audio samples gathered with
simple weak annotations. For practical reasons, the case study is given by a range
of appliances, fittings, and fixtures connected to pipes in domestic environments.
The source recordings are low-reverberated audio signals enhanced through a
bespoke spectral filter and containing the desired audio fingerprints. The soundbank is then processed to create an arbitrary number of synthetic augmented
observations. The data augmentation improves the quality and the quantity of
the metadata and automatically creates strong and accurate annotations that
are both machine- and human-readable. In addition, the implemented processing
chain allows precise control of properties such as signal-to-noise ratio, duration
of the events, and the number of overlapping events. The inter-class variability
is expanded by recombining source audio blocks and adding simulated artificial
reverberation obtained through an acoustic model developed for the purpose.
Finally, the dataset is synthesised to guarantee separability and balance. A few
signal representations are optimised to maximise the classification performance,
and the results are reported as a benchmark for future developments. The contribution to the existing knowledge concerns several aspects of the processing chain
implemented. A novel quasi-analytic acoustic model is introduced to simulate
in-pipe reverberations, adopting a three-layer architecture particularly convenient
for batch processing. The first layer includes two algorithms: one for the numerical
calculation of the axial wavenumbers and one for the separation of the modes. The
latter, in particular, provides a workaround for a problem not explicitly treated in the
literature and related to the modal non-orthogonality given by the solid-liquid interface in the analysed domain. A set of results for different waveguides is reported
to compare the dispersive behaviour against different mechanical configurations.
Two more novel solutions are also included in the second layer of the model and
concern the integration of the acoustic sources. Specifically, the amplitudes of the
non-orthogonal modal potentials are obtained using either a distance minimisation
objective function or by solving an analytical decoupling problem. In both cases,
results show that sufficiently smooth sources can be approximated with a limited number of modes, keeping the error below 1%. The last layer proposes a bespoke
approach for the integration of the acoustic model into the synthesiser as a reverberation simulator. Additional elements of novelty relate to the other blocks of the
audio synthesiser. The statistical spectral filter, for instance, is a batch-processing
solution for the attenuation of the background noise of the source recordings. The
signal-to-noise ratio analysis for both moderate and high noise levels indicates
a clear improvement of several decibels against the closest filter example in the
literature. The recombination of the audio blocks and the system of fully tracked
annotations are also novel extensions of similar approaches recently adopted in
other contexts. Moreover, a bespoke synthesis strategy is proposed to guarantee
separable and balanced datasets. The last contribution concerns the extraction
of convenient sets of audio features. Elements of novelty are introduced for the
optimisation of the filter banks of the mel-frequency cepstral coefficients and the
scattering wavelet transform. In particular, compared to the respective standard
definitions, the average F-score performance of the optimised features is roughly
6% higher in the first case and 2.5% higher for the latter. Finally, the soundbank,
the synthetic dataset, and the fundamental blocks of the software library developed are publicly available for further research.
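The synthesiser's precise control of signal-to-noise ratio can be sketched in a few lines: scale the noise so the mixture hits a prescribed SNR exactly. This is a generic illustration, not the thesis's synthesiser code; the tone and the noise here are invented stand-ins for a soundbank event and a background recording.

```python
import numpy as np

rng = np.random.default_rng(1)

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that signal + noise has the requested SNR in dB."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return signal + scaled

signal = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)  # 1 s tone
noise = rng.normal(size=16000)
mixed = mix_at_snr(signal, noise, snr_db=10.0)

# Verify the achieved SNR of the mixture:
achieved = 10 * np.log10(np.mean(signal**2) / np.mean((mixed - signal)**2))
print(round(achieved, 1))  # 10.0
```

Because the noise power is rescaled analytically, the achieved SNR matches the request exactly, which is what makes this style of augmentation reproducible and its annotations automatically accurate.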
Randomized Byzantine Gathering in Rings
We study the problem of gathering k anonymous mobile agents on a ring with n nodes. Importantly, f out of the k anonymous agents are Byzantine. The agents operate synchronously and in an autonomous fashion. In each round, each agent can communicate with other agents co-located with it by broadcasting a message. After receiving all the messages, each agent decides to either move to a neighbouring node or stay put. We begin with the k agents placed arbitrarily on the ring, and the task is to gather all the good agents in a single node. The task is made harder by the presence of Byzantine agents, which are controlled by a single Byzantine adversary. Byzantine agents can deviate arbitrarily from the protocol. The Byzantine adversary is computationally unbounded. Additionally, the Byzantine adversary is adaptive in the sense that it can capitalize on information gained over time (including the current round) to choreograph the actions of Byzantine agents. Specifically, the entire state of the system, which includes messages sent by all the agents and any random bits generated by the agents, is known to the Byzantine adversary before all the agents move. Thus the Byzantine adversary can compute the positioning of good agents across the ring and choreograph the movement of Byzantine agents accordingly. Moreover, we consider two settings: the standard setting and the visual tracking setting. With visual tracking, agents have the ability to track other agents that are moving along with them. In the standard setting, agents do not have such an ability.
In the standard setting we can achieve gathering in Õ(n log n log k) rounds with high probability and can handle Õ(k/log k) Byzantine agents. With visual tracking, we can achieve gathering faster, in Õ(n log n) rounds with high probability, and can handle any constant fraction of the total number of agents being Byzantine.
Energy-Aware, Collision-Free Information Gathering for Heterogeneous Robot Teams
This paper considers the problem of safely coordinating a team of
sensor-equipped robots to reduce uncertainty about a dynamical process, where
the objective trades off information gain and energy cost. Optimizing this
trade-off is desirable, but leads to a non-monotone objective function in the
set of robot trajectories. Therefore, common multi-robot planners based on
coordinate descent lose their performance guarantees. Furthermore, methods that
handle non-monotonicity lose their performance guarantees when subject to
inter-robot collision avoidance constraints. As it is desirable to retain both
the performance guarantee and safety guarantee, this work proposes a
hierarchical approach with a distributed planner that uses local search with a
worst-case performance guarantee and a decentralized controller based on
control barrier functions that ensures safety and encourages timely arrival at
sensing locations. Via extensive simulations, hardware-in-the-loop tests and
hardware experiments, we demonstrate that the proposed approach achieves a
better trade-off between sensing and energy cost than coordinate-descent-based
algorithms.Comment: To appear in Transactions on Robotics; 18 pages and 16 figures. arXiv
admin note: text overlap with arXiv:2101.1109
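The local-search idea the planner relies on can be illustrated on a toy version of the problem: pick sensing locations to maximise a non-monotone objective of information gain minus energy cost. The sketch below is a plain centralised add/remove local search with made-up gains and costs; the paper's planner is distributed and carries formal guarantees this toy does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n_candidates = 10
gain = rng.uniform(1.0, 3.0, n_candidates)   # per-location information gain
cost = rng.uniform(0.5, 2.0, n_candidates)   # per-location energy cost

def objective(selected):
    """Diminishing-returns gain minus linear energy cost (non-monotone)."""
    idx = sorted(selected, key=lambda i: -gain[i])
    g = sum(gain[i] * (0.7 ** rank) for rank, i in enumerate(idx))
    return g - sum(cost[i] for i in selected)

# Local search: repeatedly apply the single best add/remove move
# until no move improves the objective.
selected = set()
improved = True
while improved:
    improved = False
    best_delta, best_move = 0.0, None
    for i in range(n_candidates):
        trial = selected ^ {i}  # toggle: add if absent, remove if present
        delta = objective(trial) - objective(selected)
        if delta > 1e-12 and delta > best_delta:
            best_delta, best_move = delta, i
    if best_move is not None:
        selected ^= {best_move}
        improved = True

print(sorted(selected), round(objective(selected), 2))
```

Because the objective is non-monotone, a greedy add-only strategy can get stuck; allowing removals is the minimal fix, and it terminates because every accepted move strictly increases a bounded objective.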
Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs
This paper studies the relationship between a graph neural network (GNN) and
a manifold neural network (MNN) when the graph is constructed from a set of
points sampled from the manifold, thus encoding geometric information. We
consider convolutional MNNs and GNNs where the manifold and the graph
convolutions are respectively defined in terms of the Laplace-Beltrami operator
and the graph Laplacian. Using the appropriate kernels, we analyze both dense
and moderately sparse graphs. We prove non-asymptotic error bounds showing that
convolutional filters and neural networks on these graphs converge to
convolutional filters and neural networks on the continuous manifold. As a
byproduct of this analysis, we observe an important trade-off between the
discriminability of graph filters and their ability to approximate the desired
behavior of manifold filters. We then discuss how this trade-off is ameliorated
in neural networks due to the frequency mixing property of nonlinearities. We
further derive a transferability corollary for geometric graphs sampled from
the same manifold. We validate our results numerically on a navigation control
problem and a point cloud classification task.
Comment: 16 pages, 6 figures, 3 tables
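The graph convolutions analysed in the paper are polynomials of the graph Laplacian. The sketch below builds that construction on a small random graph with made-up filter taps, and checks the on-the-fly application against the explicit matrix polynomial.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

# Random undirected graph: symmetric weighted adjacency, zero diagonal.
A = rng.random((n, n))
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

def graph_filter(L, x, h):
    """Apply the convolutional filter sum_k h[k] L^k to graph signal x."""
    out = np.zeros_like(x)
    Lk_x = x.copy()  # holds L^k x, starting with k = 0
    for hk in h:
        out += hk * Lk_x
        Lk_x = L @ Lk_x
    return out

x = rng.normal(size=n)    # a signal on the graph's nodes
h = [1.0, -0.5, 0.05]     # filter taps (hypothetical)
y = graph_filter(L, x, h)

# Sanity check against the explicit matrix polynomial:
H = h[0] * np.eye(n) + h[1] * L + h[2] * (L @ L)
print(np.allclose(y, H @ x))  # True
```

The iterative form needs only matrix-vector products, so the same taps h transfer unchanged to a denser graph sampled from the same manifold, which is the transferability setting the paper studies.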