
    Utilizing Machine Learning Tools for calm water resistance prediction and design optimization of a fast catamaran ferry

    The article aims to design a calm water resistance predictor based on Machine Learning (ML) tools and to develop a systematic series for battery-driven catamaran hullforms. The ML predictor is additionally employed, together with a Genetic Algorithm (GA), to perform design optimization in an expedited manner. Regression Trees (RTs), Support Vector Machines (SVMs), and Artificial Neural Network (ANN) regression models are applied to train on the dataset. A hullform optimization covering dimensional and hull-coefficient parameters was implemented for various catamarans, targeting resistance, structural weight reduction, and battery performance improvement. A design distribution based on the Lackenby transformation covers the full design space, and a novel self-blending method then reconstructs new hullforms by blending two parent hulls. Finally, the machine learning approach was applied to the data generated for the case study. This study shows that the ANN predictions correlate well with the measured resistance. Accordingly, for any new design chosen from owner requirements, GA optimization obtains the final optimum design using the ML fast resistance calculator. The optimization process was conducted on a 40 m passenger catamaran case study and achieved a 9.5% cost-function improvement. Results show that incorporating the ML tool into the GA optimization process accelerates the ship design process.
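    The workflow described above pairs a trained ANN resistance surrogate with a GA that searches the hull-parameter space. The sketch below illustrates that pattern, assuming scikit-learn's MLPRegressor as the surrogate and a toy set of hull parameters and synthetic resistance data; none of the parameter names, bounds, or data come from the study itself.

```python
# Sketch: ANN resistance surrogate inside a simple GA (all values illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data standing in for the systematic-series resistance dataset:
# columns are hypothetical hull parameters [length, beam, draft, prismatic coeff.].
low = np.array([30.0, 6.0, 1.0, 0.55])
high = np.array([50.0, 10.0, 2.5, 0.75])
X_train = rng.uniform(low, high, size=(500, 4))
y_train = (0.05 * X_train[:, 0] + 0.8 * X_train[:, 1]
           + 2.0 * X_train[:, 2] + 10.0 * X_train[:, 3])  # fake "resistance"

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)                       # fast ML resistance calculator

def ga_minimise(predict, pop_size=60, generations=100, mutation=0.1):
    pop = rng.uniform(low, high, size=(pop_size, 4))
    for _ in range(generations):
        fitness = predict(pop)                               # surrogate evaluation
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # keep the best half
        cross = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents[cross[:, 0]] + (1 - alpha) * parents[cross[:, 1]]
        pop += rng.normal(0.0, mutation, pop.shape) * (high - low)  # mutate
        pop = np.clip(pop, low, high)
    return pop[np.argmin(predict(pop))]

best_design = ga_minimise(lambda p: ann.predict(p))
print("best hull parameters:", best_design)
```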

    Multimodal acoustic-electric trigeminal nerve stimulation modulates conscious perception

    Multimodal stimulation can reverse pathological neural activity and improve symptoms in neuropsychiatric diseases. Recent research shows that multimodal acoustic-electric trigeminal-nerve stimulation (TNS) (i.e., musical stimulation synchronized to electrical stimulation of the trigeminal nerve) can improve consciousness in patients with disorders of consciousness. However, the reliability and mechanism of this novel approach remain largely unknown. We explored the effects of multimodal acoustic-electric TNS in healthy human participants by assessing conscious perception before and after stimulation using behavioral and neural measures in tactile and auditory target-detection tasks. To explore the mechanisms underlying the putative effects of acoustic-electric stimulation, we fitted a biologically plausible neural network model to the neural data using dynamic causal modeling. We observed that (1) acoustic-electric stimulation improves conscious tactile perception without a concomitant change in auditory perception, (2) this improvement is caused by the interplay of the acoustic and electric stimulation rather than by either unimodal stimulation alone, and (3) the effect of acoustic-electric stimulation on conscious perception correlates with inter-regional connection changes in a recurrent neural processing model. These results provide evidence that acoustic-electric TNS can promote conscious perception. Alterations in inter-regional cortical connections might be the mechanism by which acoustic-electric TNS achieves its consciousness benefits.
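    The modeling result above concerns changes in inter-regional connectivity estimated with dynamic causal modeling. As a purely illustrative aid, the toy rate model below shows how strengthening reciprocal connections between two coupled regions changes the downstream response to a brief tactile input; the equations, parameters, and region labels are assumptions for illustration, not the fitted model from the study.

```python
# Sketch: a toy two-region recurrent rate model, illustrating how changes in
# inter-regional connection weights alter the higher-order response to input.
import numpy as np

def simulate(w_fwd, w_bwd, steps=200, dt=0.01, tau=0.05):
    """Sensory population (s) and higher-order population (h) with reciprocal coupling."""
    s = h = 0.0
    drive = np.zeros(steps)
    drive[50:100] = 1.0                      # brief "tactile" input to region s
    trace = np.zeros(steps)
    for t in range(steps):
        ds = (-s + drive[t] + w_bwd * h) / tau
        dh = (-h + w_fwd * s) / tau
        s, h = s + dt * ds, h + dt * dh
        trace[t] = h                          # read out higher-order activity
    return trace

weak = simulate(w_fwd=0.3, w_bwd=0.1)         # baseline connectivity (illustrative)
strong = simulate(w_fwd=0.6, w_bwd=0.3)       # strengthened connectivity (illustrative)
print("peak higher-order response:", weak.max(), "->", strong.max())
```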

    Sound Event Detection by Exploring Audio Sequence Modelling

    Everyday sounds in real-world environments are a powerful source of information by which humans can interact with their environments. Humans can infer what is happening around them by listening to everyday sounds. At the same time, it is a challenging task for a computer algorithm in a smart device to automatically recognise, understand, and interpret everyday sounds. Sound event detection (SED) is the process of transcribing an audio recording into sound event tags with onset and offset time values. This involves classification and segmentation of sound events in the given audio recording. SED has numerous applications in everyday life, which include security and surveillance, automation, healthcare monitoring, multimedia information retrieval, and assisted living technologies. SED is to everyday sounds what automatic speech recognition (ASR) is to speech and automatic music transcription (AMT) is to music. The fundamental questions in designing a sound recognition system are which portion of a sound event the system should analyse, and what proportion of a sound event the system should process, in order to claim a confident detection of that particular sound event. While the classification of sound events has improved considerably in recent years, the temporal segmentation of sound events has not improved to the same extent. The aim of this thesis is to propose and develop methods to improve the segmentation and classification of everyday sound events in SED models. In particular, this thesis explores the segmentation of sound events by investigating audio sequence encoding-based and audio sequence modelling-based methods, in an effort to improve the overall sound event detection performance. In the first phase of this thesis, efforts are directed towards improving sound event detection by explicitly conditioning the audio sequence representations of an SED model using sound activity detection (SAD) and onset detection. To achieve this, we propose multi-task learning-based SED models in which SAD and onset detection are used as auxiliary tasks for the SED task. The next part of this thesis explores self-attention-based audio sequence modelling, which aggregates audio representations based on temporal relations within and between sound events, scored on the basis of the similarity of sound event portions in audio event sequences. We propose SED models that include memory-controlled, adaptive, dynamic, and source separation-induced self-attention variants, with the aim of improving overall sound recognition.
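    The first phase described above conditions an SED model with sound activity detection (SAD) as an auxiliary task. The following sketch shows one minimal way such a multi-task setup can be wired, assuming a recurrent encoder with separate SED and SAD heads trained with a weighted joint loss; the layer sizes, loss weighting, and shapes are illustrative assumptions rather than the thesis' actual architectures.

```python
# Sketch: a multi-task SED model with SAD as an auxiliary head (illustrative sizes).
import torch
import torch.nn as nn

class MultiTaskSED(nn.Module):
    def __init__(self, n_mels=64, n_classes=10, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.sed_head = nn.Linear(2 * hidden, n_classes)  # per-frame event activity
        self.sad_head = nn.Linear(2 * hidden, 1)          # per-frame "any sound" activity

    def forward(self, mel):                               # mel: (batch, frames, n_mels)
        feats, _ = self.encoder(mel)
        return self.sed_head(feats), self.sad_head(feats).squeeze(-1)

model = MultiTaskSED()
mel = torch.randn(8, 500, 64)                             # a batch of log-mel clips
sed_target = torch.randint(0, 2, (8, 500, 10)).float()    # frame-level event labels
sad_target = sed_target.max(dim=-1).values                # "any event active" labels

sed_logits, sad_logits = model(mel)
bce = nn.BCEWithLogitsLoss()
loss = bce(sed_logits, sed_target) + 0.5 * bce(sad_logits, sad_target)  # joint loss
loss.backward()
```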

    Dissecting Extracellular Matrix Internalisation Mechanisms using Functional Genomics

    Breast and ovarian malignancies account for one third of female cancers. The role of the stroma in supporting invasive growth in breast cancer has become clear. Breast cancer cells interact and respond to the cues from the surrounding extracellular matrix (ECM). Integrins are the main cell adhesion receptors and key players in invasive migration, linking the ECM to the actin cytoskeleton. In addition, integrins mediate distinctive biochemical and biomechanical signals to support cancer invasion. The role of matrix proteases in promoting ECM degradation and cancer dissemination has been extensively studied; however, cancer cells possess additional means to support those processes, such as integrin-mediated ECM endocytosis and consequent degradation in the lysosomes. Internalisation of the extracellular matrix is upregulated in invasive breast cancer. Nonetheless, the mechanisms by which cancer cells regulate this process are poorly understood. We developed a high-throughput pH-sensitive system to detect ECM uptake. Here, we show that MDA-MB-231 breast cancer cells converge on macropinocytosis to internalise diverse ECM components, and we confirm that this process is modulated by PAK1. To unravel which ECM components breast cancer cells internalise in a complex environment (namely, cell-derived matrices), we performed mass spectrometry. Proteomic analysis identified Annexin A6, Collagen VI, Tenascin C and fibronectin, among other matrisome proteins, as internalised by invasive breast cancer cells. Following endocytosis, the ECM is targeted for lysosomal degradation. To unravel the molecular mechanisms behind this process, we performed a trafficking screen and identified the AP3 complex, VAMP7, Arf1 and ARFGEF2. Our results suggest that the AP3 complex may regulate ECM-integrin delivery to lysosomes. To gain more insight into the signalling pathways governing macropinocytosis in breast cancer cells, we performed a kinase and phosphatase screen that identified MAP3K1 and PPP2R1A, a subunit of protein phosphatase 2A (PP2A), as relevant regulators of ECM endocytosis. Furthermore, our data suggest that p38 mitogen-activated protein kinase (MAPK) activation upon binding to the ECM is required for ECM macropinocytosis. Strikingly, inhibiting p38 MAPK led to profound changes in the ability of breast cancer cells to migrate in cell-derived matrices. Previous work from the Rainero lab focused on characterising the receptors involved in ECM internalisation; α2β1 integrin was identified as the main regulator of ECM uptake in MDA-MB-231 cells. In particular, α2β1 integrin has been shown to activate the p38 MAPK pathway. Taken together, we hypothesise that binding of ECM to α2β1 integrin results in the activation of PAK1 and MAP3K1, which in turn leads to ECM endocytosis. p38 MAPK activity may induce changes in actin polymerisation via PPP2R1A and/or focal adhesion turnover, which consequently promotes ECM macropinocytosis and invasive migration.

    Pan-cancer analysis of post-translational modifications reveals shared patterns of protein regulation

    Post-translational modifications (PTMs) play key roles in regulating cell signaling and physiology in both normal and cancer cells. Advances in mass spectrometry enable high-throughput, accurate, and sensitive measurement of PTM levels to better understand their role, prevalence, and crosstalk. Here, we analyze the largest collection of proteogenomics data from 1,110 patients with PTM profiles across 11 cancer types (10 from the National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium [CPTAC]). Our study reveals pan-cancer patterns of changes in protein acetylation and phosphorylation involved in hallmark cancer processes. These patterns revealed subsets of tumors, from different cancer types, with dysregulated DNA repair driven by phosphorylation, altered metabolic regulation associated with immune response driven by acetylation, kinase specificity affected by crosstalk between acetylation and phosphorylation, and modified histone regulation. Overall, this resource highlights the rich biology governed by PTMs and exposes potential new therapeutic avenues.

    Modular lifelong machine learning

    Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems. First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improved anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
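    The central mechanism in the modular approach described above is a growing library of frozen, pre-trained modules that are recombined with new modules for each incoming problem. The toy sketch below illustrates only that reuse-and-grow pattern, assuming a trivial exhaustive search over a two-module library; it is not an implementation of HOUDINI's program synthesis or PICLE's probabilistic search.

```python
# Sketch: keep a library of frozen modules, try each with a new head, keep the best.
import torch
import torch.nn as nn

library = {  # modules accumulated from earlier problems (would be pre-trained)
    "perceive_a": nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    "perceive_b": nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
}

def evaluate(name, x, y, epochs=50):
    """Train only the new head; library modules stay frozen (no forgetting)."""
    old = library[name]
    for p in old.parameters():
        p.requires_grad_(False)
    head = nn.Linear(64, 2)                        # new, problem-specific module
    opt = torch.optim.Adam(head.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(old(x)), y)
        loss.backward()
        opt.step()
    return loss.item(), nn.Sequential(old, head)

x, y = torch.randn(128, 32), torch.randint(0, 2, (128,))   # toy data for a new problem
results = {name: evaluate(name, x, y) for name in library}
best = min(results, key=lambda n: results[n][0])  # pick the best-performing combination
library["new_problem"] = results[best][1]         # grow the library for future reuse
print("reused module:", best)
```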

    Software Product Line Engineering via Software Transplantation

    For companies producing related products, a Software Product Line (SPL) is a software reuse method that improves time-to-market and software quality and achieves substantial cost reductions. These benefits do not come for free. It often takes years to re-architect and re-engineer a codebase to support SPL and, once adopted, it must be maintained. Current SPL practice relies on a collection of tools, tailored for different reengineering phases, whose output developers must coordinate and integrate. We present Foundry, a general automated approach that leverages software transplantation to speed the conversion to, and maintenance of, an SPL. Foundry facilitates feature extraction and migration. It can efficiently and repeatedly transplant a sequence of features implemented across multiple files. We used Foundry to create two valid product lines that integrate features from three real-world systems in an automated way. Moreover, we conducted an experiment comparing Foundry's feature migration with manual effort. We show that Foundry migrated features across codebases, on average, 4.8 times faster than a group of SPL experts accomplished the task manually.

    Runway Safety Improvements Through a Data Driven Approach for Risk Flight Prediction and Simulation

    Runway overrun is one of the most frequently occurring flight accident types threatening the safety of aviation. Sensors have improved with recent technological advancements and allow data collection during flights. The recorded data help to better identify the characteristics of runway overruns. The improved technological capabilities and the growing air traffic have led to increased momentum for reducing flight risk using artificial intelligence. Discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools we need to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating flights exposed to runway overrun precursors. To achieve this, the methodology develops a prediction model and a simulation model. During the flight training process, the prediction model is used in flight to identify potential risks and the simulation model is used post-flight to review the flight behavior. The prediction model identifies potential risks by predicting flight parameters that best characterize the landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat. The predictions and alerts are made when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on the wind because the wind is a relatively significant factor during the final approach; typically, the aircraft is stabilized during the final approach. The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities. The differences are crucial in learning how to mitigate adverse flight conditions. Both models are built with neural networks. The main challenges of developing a neural network model are that each model design space is unique to its problem and that such a design space can be very large: a model design space cannot accommodate multiple problems, and it can grow significantly with the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures that best characterize the aircraft behavior during the final approach. A series of experiments is performed to observe how the model accuracy changes with different data pre-processing methods for the prediction model and different neural network models for the simulation model. The data pre-processing methods include indexing the data by different frequencies, by different window sizes, and data clustering. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory, and Neural Network Autoregressive with Exogenous Input. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, because different wind conditions and flares represent controls that the models need to map to the predicted flight states. The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily read. Then, the methodology is demonstrated with a sample flight exposed to an overrun precursor (high approach speed) to show how the models can potentially increase attitude, skill, and knowledge of runway overrun risk. The main contribution of this work is evaluating the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing the model structures to create the two models, this work optimized both the data and the model structures to ensure that the data capture well the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to quickly converge to an optimal configuration while exploring the design space. With the optimized model, this work identified the data features, from the final approach, with a higher contribution to predicting airspeed, vertical speed, and pitch angle near touchdown. The top contributing features are altitude, angle of attack, core rpm, and airspeeds. For both the prediction and the simulation models, this study examines the impact of various data preprocessing methods on the accuracy of the two models. The results may help future studies identify the right data preprocessing methods for their work. Another contribution of this work is evaluating how flight control and wind affect both the prediction and the simulation models. This is achieved by mapping the model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results show fairly consistent prediction and simulation accuracy at different levels of control surface deflection and wind conditions. This shows that neural network-based models are effective in creating robust prediction and simulation models of aircraft during the final approach. The results also show that data frequency has a significant impact on the prediction and simulation accuracy, so it is important to have sufficient data, collected under the conditions in which the models will be used, to train the models. The final contribution of this work is demonstrating how the prediction and the simulation models can be used to increase awareness of runway overruns.
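    The prediction side of the methodology above maps a final-approach time series to flight states near touchdown and raises an alert when a threshold is exceeded. The sketch below illustrates that idea with a small LSTM regressor and a made-up airspeed threshold; the features, units, and threshold are illustrative assumptions, not the FOQA features or limits used in the work.

```python
# Sketch: an LSTM mapping final-approach sequences to touchdown flight states
# (airspeed, vertical speed, pitch), plus a simple threshold alert (illustrative).
import torch
import torch.nn as nn

class TouchdownPredictor(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_targets=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_targets)

    def forward(self, x):                 # x: (batch, time, features)
        seq, _ = self.lstm(x)
        return self.out(seq[:, -1, :])    # predict from the last approach frame

model = TouchdownPredictor()
approach = torch.randn(16, 120, 8)        # e.g. 120 samples of approach data per flight
targets = torch.randn(16, 3)              # recorded airspeed, vertical speed, pitch

loss = nn.MSELoss()(model(approach), targets)
loss.backward()                           # one training step (optimizer omitted)

# Inference-time alert: flag a high-approach-speed precursor (threshold is made up).
pred_airspeed = model(approach[:1]).detach()[0, 0]
if pred_airspeed > 1.5:
    print("alert: predicted touchdown airspeed exceeds threshold")
```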