Deep generative models for network data synthesis and monitoring
Measurement and monitoring are fundamental tasks in all networks, enabling the downstream management and optimization of the network.
Although networks inherently generate abundant monitoring data, accessing and effectively measuring that data is another story. The challenges exist in many aspects. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, it can be very expensive to carry out effective data collection covering a large-scale network system, given the growing size of networks, e.g., the number of cells in a radio network and the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the available resources
in the network element that can be applied to support the measurement function are
too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex
structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed to address these challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements.
The contributions made in this thesis significantly advance the state of the art in
the domain of network measurement and monitoring techniques. Overall, we leverage
cutting-edge machine learning technology, deep generative modeling, throughout the
entire thesis. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system based on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution). Second, we develop GENDT, an efficient drive testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we
design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time
network telemetry system with latent GANs and spectral-temporal networks. Finally,
we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through
this research are summarized, and interesting topics are discussed for future work in
this domain. All proposed solutions have been evaluated with real-world datasets and
applied to support different applications in real systems.
Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces
Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of individuals, encompassing both medical and non-medical applications. Historically, the challenge of neurological injury being static after an initial recovery phase has driven researchers to explore innovative avenues. Since the 1970s, BCIs have been at the forefront of these efforts. As research has progressed, BCI applications have expanded, showing potential in a wide range of applications, including those for less severely disabled (e.g. in the context of hearing aids) and completely healthy individuals (e.g. the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware to ensure real-world application.
The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, can operate autonomously, requires no external interaction, and weighs approximately 56 g, including the 16-channel EEG sensors.
The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and facilitating rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours with a 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies.
Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
The representation of priors and decisions in the human parietal cortex
Animals actively sample their environment through orienting actions such as saccadic eye movements. Saccadic targets are selected based both on sensory evidence immediately preceding the saccade, and a "salience map" or prior built up over multiple saccades. In the primate cortex, the selection of each individual saccade depends on competition between target-selective cells that ramp up their firing rate to saccade release. However, it is less clear how a cross-saccade prior might be implemented, either in neural firing or through an activity-silent mechanism such as modification of synaptic weights on sensory inputs. Here, we present evidence from magnetoencephalography for 2 distinct processes underlying the selection of the current saccade, and the representation of the prior, in human parietal cortex. While the classic ramping decision process for each saccade was reflected in neural firing rates (measured in the event-related field), a prior built up over multiple saccades was implemented via modulation of the gain on sensory inputs from the preferred target, as evidenced by rapid frequency tagging. A cascade of computations over time (initial representation of the prior, followed by evidence accumulation and then an integration of prior and evidence) provides a mechanism by which a salience map may be built up across saccades in parietal cortex. It also provides insight into the apparent contradiction that inactivation of parietal cortex has been shown not to affect performance on single trials, despite the presence of clear evidence accumulation signals in this region.
Deep Learning Techniques for Electroencephalography Analysis
In this thesis we design deep learning techniques for training deep neural networks on electroencephalography (EEG) data, and in particular on two problems, namely EEG-based motor imagery decoding and EEG-based affect recognition, addressing challenges associated with them. Regarding the problem of motor imagery (MI) decoding, we first consider the various kinds of domain shifts in the EEG signals caused by inter-individual differences (e.g., brain anatomy, personality and cognitive profile). These domain shifts render multi-subject training a challenging task and impede robust cross-subject generalization. We build a two-stage model ensemble architecture and propose two objectives to train it, combining the strengths of curriculum learning and collaborative training. Our subject-independent experiments on the large datasets of Physionet and OpenBMI verify the effectiveness of our approach. Next, we explore the utilization of the spatial covariance of EEG signals through alignment techniques, with the goal of learning domain-invariant representations. We introduce a Riemannian framework that concurrently performs covariance-based signal alignment and data augmentation while training a convolutional neural network (CNN) on EEG time-series. Experiments on the BCI IV-2a dataset show that our method outperforms traditional alignment by inducing regularization on the weights of the CNN. We also study the problem of EEG-based affect recognition, inspired by works suggesting that emotions can be expressed in relative terms, i.e., through ordinal comparisons between different affective state levels. We propose treating data samples in a pairwise manner to infer the ordinal relation between their corresponding affective state labels, as an auxiliary training objective. We incorporate our objective in a deep network architecture which we jointly train on the tasks of sample-wise classification and pairwise ordinal ranking.
We evaluate our method on the affective datasets of DEAP and SEED and obtain performance improvements over deep networks trained without the additional ranking objective.
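The pairwise ordinal objective described above can be illustrated with a margin-based ranking loss. This is only a sketch of the general idea, not the thesis's exact objective; the function name, margin value, and example scores are illustrative:

```python
def pairwise_ranking_loss(score_a, score_b, label_a, label_b, margin=1.0):
    """Margin ranking loss on one pair: if sample a carries a higher
    ordinal affect label than sample b, its predicted score should
    exceed b's by at least `margin` (and symmetrically if lower)."""
    if label_a == label_b:
        return 0.0  # ties contribute nothing in this simple sketch
    sign = 1.0 if label_a > label_b else -1.0
    return max(0.0, margin - sign * (score_a - score_b))

# The model scores the higher-arousal sample (a) lower than b,
# so the pair is mis-ranked and incurs a positive loss.
loss = pairwise_ranking_loss(0.2, 0.8, label_a=3, label_b=1)
print(round(loss, 3))  # → 1.6
```

In a joint training setup, a term like this would be added to the usual sample-wise classification loss, so the network learns both absolute labels and their ordinal relations.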
Understanding local neuromuscular mechanisms that explain the efficacy of interventions for patellofemoral pain
Patellofemoral pain (PFP) is a common and persistent knee pain complaint across all age ranges, especially among highly active people. Multiple approaches have been used to understand symptom persistence, including identifying a mechanism explaining intervention benefits (i.e. changes in specific deficits in groups that show symptoms' improvement). Research has been conducted to identify the characteristics associated with PFP, but uncertainty regarding local neuromuscular characteristics remains evident. The thesis aimed to a) identify the local neuromuscular characteristics associated with PFP, b) develop an evidence-informed laboratory protocol to detect those characteristics, c) establish protocol reliability and feasibility, and d) identify interventions that can target these neuromuscular characteristics. A systematic review with meta-analysis was completed to identify the neuromuscular characteristics of all muscles that cross the knee in people with PFP compared to uninjured groups. Ten deficits within three neuromuscular domains were found. Within the electromyography (EMG) domain, a delay in vastus medialis (VM) relative to vastus lateralis (VL) excitation onset, a high biceps femoris (BF) mean excitation amplitude, and a lower Hoffmann-reflex amplitude of VM were identified. Within the muscle performance domain, lower isometric, concentric, and eccentric extensor peak torque and total work, lower concentric flexor peak torque, and lower rate of torque development (RTD) to reach 30%, 60% and 90% of extensor peak torque were identified. Hamstring tightness was identified within the muscle flexibility domain. The systematic review was published and the results were used to inform testing protocol development. A second systematic review with meta-analysis was conducted to identify interventions that can target the local deficits associated with PFP. The results indicate that an intervention that effectively modifies EMG characteristics cannot currently be identified.
Predominantly, exercise interventions have effects on strength and flexibility in PFP. Specifically, hip- and knee-targeted exercises were found to have a potential mechanism of benefit through both characteristic categories. A unique approach was introduced within the thesis to develop a deficit-detection protocol based on the systematic review results. This approach provided a comprehensive analysis of the protocols from the studies that were included in the meta-analysis. A battery of tests was developed, including: a) VM-VL excitation onset timing in a step-up task, b) BF mean excitation amplitude in a single-leg triple-hop test, c) isometric, d) concentric and e) eccentric extensor peak torque, f) RTD to 30%, 60% and 90% of isometric peak torque, and g) hamstrings flexibility. Reliability testing of the deficit-detection protocol was conducted with both uninjured participants and participants with PFP over two phases. Phase one evaluated the original protocols adapted from the review. Phase two was performed on the EMG and RTD domains to explore the effects of signal processing parameters on reliability, such as modification of onset detection thresholds, unnormalised signals, and the addition of absolute RTD. For the PFP group, reliable results were demonstrated for concentric and eccentric extensor peak torque; RTD of the quadriceps at 25 ms, 50 ms and 90% of peak torque; and hamstrings flexibility. The uninjured group showed reliable results in unnormalised BF mean excitation amplitude; all three peak torque tests; RTD to 30% of peak torque and at 150 and 175 milliseconds; and hamstrings flexibility. To establish participant recruitment rate and retention, in addition to the acceptability of the test protocol, a preliminary feasibility study of the deficit-detection protocol was conducted. A sample of 14 participants with PFP was recruited and tested at the Mile End campus of QMUL before and after a six-week period.
Feasibility results indicate that 25.5% were willing to participate following an online screening process (n=17/55) and 82% met the eligibility criteria following face-to-face assessment (n=14/17). The recruitment rate was 0.5 participants per week and the drop-out rate was 35.2% (n=11/17). The results indicate that the protocol did not meet all a priori feasibility criteria, but they can inform future research planning. The thesis has successfully identified local deficits associated with PFP, developed a test protocol that demonstrates reliability in evaluating these deficits, and assessed the feasibility of the protocol in individuals with PFP. Interventions to cause change within these local deficits have been identified, with gap maps demonstrating where further research is required to better align the mechanisms of treatment effects with specific deficits associated with PFP.
The effect of autologous macrophage therapy in cirrhosis in response to individual immune reparative pathways: developing a novel therapy
BACKGROUND:
Liver cirrhosis is the end stage of any injury process to the liver. Once established, it inevitably progresses to complications such as portal hypertension, cancer and death. There is no cure for liver cirrhosis besides liver transplantation, so we face an unmet need for treatment of this condition. The role of macrophages in fibrosis development and resolution in the liver has been extensively investigated. Prof Forbes' group invested in the development of an autologous macrophage product to promote fibrosis resolution and hence cirrhosis regression, and this product has demonstrated efficacy and safety in animal models. From these encouraging pre-clinical data, a phase 1 first-in-human clinical trial of an autologous activated macrophage product for cirrhotic patients was developed.
METHODS:
Using an established 3+3 dose escalation model, we enrolled a total of 9 subjects in the phase 1 trial, reaching a maximum achieved and safe dose of 1x10^9 macrophages. In addition to adverse events, dose toxicity and macrophage activation syndrome (MAS) parameters, we evaluated a wide range of circulating cytokines and chemokines pre- and post-treatment using a commercial kit. Moreover, we developed a protocol for 31P magnetic resonance spectroscopy (MRS) for the analysis of the metabolically active liver parenchyma. Data from the phase 1 trial were used to improve the autologous cellular product and to inform the phase 2 randomised controlled trial.
RESULTS:
The autologous activated macrophage product was demonstrated not to cause any toxicity in this first-in-human study of a cirrhotic population of different aetiologies. Cytokine and chemokine analysis supports these findings and specifically demonstrates low levels of IL-8, elevation of which is a cardinal feature of MAS. Other interesting cytokine signals may support an extracellular matrix remodelling effect of the autologous macrophage product infusion. In addition, we demonstrated a reproducible protocol for MRS in liver disease.
DISCUSSION:
Autologous activated macrophage infusion did not result in any toxicity in the cirrhotic subjects taking part in this study and shows preliminary signs of efficacy in fibrosis resolution, both clinically and biochemically. This work lays the foundation for the development of cellular products for the treatment of cirrhosis and fibrosis and provides invaluable insight into the immune response to cellular treatment.
Developing an fMRI paradigm for studying reinforcement learning with gustatory stimuli
One of the main challenges for global public health in the modern world is the rising prevalence of obesity. Obtaining a better understanding of the dysregulated feeding behaviour that leads to obesity, by investigating the decision making and learning processes underlying it, could advance our capabilities in battling the obesity epidemic. Consequently, our aim in this study is to design an experiment that could evaluate these processes.
We examined ten healthy participants using a modified version of the "probabilistic selection task". We used gustatory stimuli as a replacement for monetary rewards, to assess the effect of nutritional rewards on the learning behaviour. We subsequently analysed the behavioural results with computational modelling and combined this with imaging data simultaneously acquired with a functional magnetic resonance imaging (fMRI) multiband sequence.
All participants in this study succeeded in interpreting and interacting with the gustatory stimuli appropriately. Performance on the task was affected by the subjective valuation of the reward: participants whose motivation to drink the reward and liking of its taste decreased during the task had difficulty correctly choosing the more rewarding cues.
Computational modelling of the behaviour found that the so-called asymmetric learning model, in which positive and negative reinforcement are weighted differently, best explained the group data. The acquired fMRI data were suboptimal, and we did not detect the neurological activity we expected in the reward system, which is central to our scientific question.
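The asymmetric learning model mentioned above can be sketched as a Rescorla-Wagner-style value update with separate learning rates for positive and negative prediction errors. The learning rates and outcome sequence below are illustrative, not the values fitted in the study:

```python
def asymmetric_update(value, reward, alpha_pos, alpha_neg):
    """Rescorla-Wagner-style value update with separate learning rates
    for positive and negative reward prediction errors."""
    delta = reward - value  # reward prediction error
    alpha = alpha_pos if delta >= 0 else alpha_neg
    return value + alpha * delta

# A learner that weights positive feedback more heavily than negative
# (alpha_pos > alpha_neg); the outcome sequence is purely illustrative.
value = 0.5
for reward in [1, 0, 1, 1, 0]:
    value = asymmetric_update(value, reward, alpha_pos=0.3, alpha_neg=0.1)
print(round(value, 3))  # → 0.717
```

Fitting the two learning rates separately per participant is what lets the model capture how strongly each person learns from rewarding versus non-rewarding outcomes.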
Thus, our study shows it is possible to implement the probabilistic selection task (PST) with gustatory stimuli. However, to evaluate the corresponding neurological activity, our fMRI configuration requires improvement. An optimised system could be used in further studies to improve our understanding of the neurobiological mechanisms of learning that lead to obesity and elucidate the role of food as a distinctive reinforcer.
Methane Production by Methanogens In Simulated Subsurface Martian Environments
Methane has a typical atmospheric photochemical lifetime of ~300 years on Mars, making contemporary reported detections (and non-detections) of methane a fiercely debated topic, due to the potential need for a present-day source. On Earth, most methane is produced by methanogenic microbes present in, e.g., ruminants, wetlands, lakes and permafrost. Of the four methanogenic metabolic pathways on Earth, the hydrogenotrophic pathway is the most common, utilising CO2 and H2 as substrates. Both gases are present on Mars, along with liquid water, organics, and the essential elements (CHNOPS) that are requirements for life. Surface conditions on Mars are sterilising; however, the temperature and pressure of the subsurface are potentially favourable to life and provide a shield from the sterilising surface conditions, making the subsurface a possible habitat for methanogens. A meta-analysis was
conducted, motivated by these subsurface parameters, that redefined the statistical representation for several growth parameters for all type-strains of methanogens and analysed multiple parameters
simultaneously across multiple categories (e.g. metabolism), showing that the optimal average conditions in which to grow methanogens would be a meso-temperate (20 to 39 °C), hypersaline and slightly acidic environment. Two martian subsurface environments were simulated to determine whether environmental or chemical factors are inhibitory to methanogenesis. (1) Methanoculleus marisnigri was grown in a custom-built, high-pressure manifold at 60 bar and 25 °C to simulate the subsurface of Mars, although no methane was produced due to a technical issue that resulted in oxygenated medium. However, some cells survived five weeks of oxygenation. (2) Methanothermococcus okinawensis was grown in a simulated chemical environment at 1 bar and 60 °C that included a regolith simulant, a proposed martian brine and a Mars-relevant organic source (carbonaceous chondrite). The simulated parameters of the chemical environment of subsurface Mars were not inhibitory to hydrogenotrophic methanogenesis, suggesting it is feasible (from a metabolic perspective) that subsurface methanogens could be producing contemporary methane on Mars.
Machine learning applications in search algorithms for gravitational waves from compact binary mergers
Gravitational waves from compact binary mergers are now routinely observed by Earth-bound detectors. These observations enable exciting new science, as they have opened a new window to the Universe.
However, extracting gravitational-wave signals from the noisy detector data is a challenging problem. The most sensitive search algorithms for compact binary mergers use matched filtering, an algorithm that compares the data with a set of expected template signals. As detectors are upgraded and more sophisticated signal models become available, the number of required templates will increase, which can make some sources computationally prohibitive to search for. The computational cost is of particular concern when low-latency alerts should be issued to maximize the time for electromagnetic follow-up observations. One potential solution to reduce computational requirements that has started to be explored in the last decade is machine learning. However, different proposed deep learning searches target varying parameter spaces and use metrics that are not always comparable to existing literature. Consequently, a clear picture of the capabilities of machine learning searches has been sorely missing.
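The matched-filtering idea described above (sliding expected template signals across the data and looking for a strong correlation) can be illustrated with a white-noise toy sketch. Real pipelines whiten the data by the detector's noise spectrum and use physically modelled waveform templates; the chirp-like sinusoid, amplitudes, and offsets here are purely illustrative:

```python
import numpy as np

def matched_filter_peak(data, template):
    """Slide a unit-norm template over the data and return the peak
    absolute correlation (a white-noise simplification of matched
    filtering; real searches whiten by the detector noise spectrum)."""
    template = template / np.linalg.norm(template)
    corr = np.correlate(data, template, mode="valid")
    return np.max(np.abs(corr))

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 512)
template = np.sin(2 * np.pi * (20.0 + 15.0 * t) * t)  # toy chirp-like signal

noise_only = rng.normal(0.0, 1.0, 4096)
with_signal = noise_only.copy()
with_signal[1000:1512] += 3.0 * template  # inject the signal into the noise

# The injected signal stands out clearly against noise-only data.
print(matched_filter_peak(with_signal, template) > matched_filter_peak(noise_only, template))
```

The computational cost of full searches scales with the number of templates, which is why large template banks make matched filtering expensive and motivate the machine learning alternatives examined in this thesis.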
In this thesis, we closely examine the sensitivity of various deep learning gravitational-wave search algorithms and introduce new methods to detect signals from binary black hole and binary neutron star mergers at previously untested statistical confidence levels. By using the sensitive distance as our core metric, we allow for a direct comparison of our algorithms to state-of-the-art search pipelines. As part of this thesis, we organized a global mock data challenge to create a benchmark for machine learning search algorithms targeting compact binaries. This way, the tools developed in this thesis are made available to the greater community by publishing them as open source software.
Our studies show that, depending on the parameter space, deep learning gravitational-wave search algorithms are already competitive with current production search pipelines. We also find that strategies developed for traditional searches can be effectively adapted to their machine learning counterparts. In regions where matched filtering becomes computationally expensive, available deep learning algorithms are also limited in their capability. We find reduced sensitivity to long-duration signals compared to the excellent results for short-duration binary black hole signals.