
    Biblio-Analysis of Cohort Intelligence (CI) Algorithm and its allied applications from Scopus and Web of Science Perspective

    Cohort Intelligence (CI) is a novel, one-of-a-kind optimization algorithm. Since its inception, it has been applied successfully in various domains within a short span, and its results have proved effective in comparison with algorithms of its kind. To date, no bibliometric analysis has been carried out on CI and its related applications, so this paper serves as an ice-breaker for those who want to take CI to a new level. In this paper, CI publications indexed in Scopus are analyzed through graphs and network diagrams covering authors, source titles, keywords over the years, and journals over time. In this way, the paper showcases CI and its applications and details a systematic review of its bibliometric profile.

    Optimisation of wind turbine blade structures using a genetic algorithm

    The current diminution of fossil-fuel reserves, stricter environmental guidelines and the world's ever-growing energy needs have driven the deployment of alternative renewable energy sources. Among the many renewable energies, wind energy is one of the most promising and the fastest-growing installed alternative-energy production technology. In order to meet production goals in the next few decades, significant increases in both wind turbine installations and operability are required, while maintaining a profitable and competitive energy cost. As the size of the wind turbine rotor increases, the structural performance and durability requirements tend to become more challenging. In this sense, wind turbine design is an optimization problem in which an optimal solution is to be found under a set of design constraints and a specific target. Given the global evolution towards renewable energies and the beginnings of a local wind industry in Quebec, it becomes imperative to follow the international trends in this industry. It is therefore necessary to provide designers with a suitable decision tool for the study and design of optimal wind turbine blades. The developed design tool is an open source code named winDesign, which is capable of performing structural analysis and design of composite blades for wind turbines under various configurations in order to accelerate the preliminary design phase. The proposed tool can perform a Pareto optimization, where optimal decisions need to be taken in the presence of trade-offs between two conflicting objectives: the annual energy production and the weight of the blade. For a given external blade shape, winDesign is able to determine an optimal composite layup and chord and twist distributions which either minimize blade mass or maximize the annual energy production while simultaneously satisfying design constraints. The newly proposed graphical tool incorporates two novel techniques, VCH and KGA, and is validated with numerical simulations on both mono-objective and multi-objective optimization problems.
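    The Pareto optimization described above rests on the notion of dominance between candidate blade designs. As a minimal illustration (not winDesign code, and with an illustrative function name and data layout), the sketch below filters a set of objective pairs, here blade mass and negated annual energy production, both to be minimised, down to the non-dominated front:

    ```python
    def pareto_front(points):
        """Return the non-dominated subset of objective tuples.

        Each point is a tuple of objectives to minimise, e.g.
        (blade_mass, -annual_energy_production).  A point q dominates p
        if q is no worse in every objective and differs from p.
        """
        front = []
        for p in points:
            dominated = any(
                all(q[i] <= p[i] for i in range(len(p))) and q != p
                for q in points
            )
            if not dominated:
                front.append(p)
        return front
    ```

    A full genetic algorithm would apply this filter each generation to rank candidates before selection and crossover.
    
    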

    Learning from limited labelled data: contributions to weak, few-shot, and unsupervised learning

    Thesis by compendium. In the last decade, deep learning (DL) has become the main tool for computer vision (CV) tasks. Under the standard supervised learning paradigm, and thanks to the progressive collection of large datasets, DL has reached impressive results on different CV applications using convolutional neural networks (CNNs). Nevertheless, CNN performance drops when sufficient data are unavailable, which creates challenging scenarios in CV applications where only a few training samples are available, or where labeling images is a costly task that requires expert knowledge. These scenarios motivate the research of less-supervised learning strategies for developing DL solutions in CV. In this thesis, we have explored different less-supervised learning paradigms across several applications. Concretely, we first propose novel self-supervised learning strategies for weakly supervised classification of gigapixel histology images. Then, we study the use of contrastive learning in few-shot learning scenarios for automatic railway crossing surveying. Finally, brain lesion segmentation is studied in the context of unsupervised anomaly segmentation, using only healthy samples during training. Throughout this thesis, we pay special attention to the incorporation of task-specific prior knowledge during model training, which may be easily obtained but can substantially improve results in less-supervised scenarios. In particular, we introduce relative class proportions in weakly supervised learning in the form of inequality constraints. 
Also, attention homogenization in VAEs for anomaly localization is incorporated using size and entropy regularization terms, to make the CNN focus on all patterns in normal samples. The different methods are compared, when possible, with their supervised counterparts. In short, different less-supervised DL methods for CV are presented in this thesis, with substantial contributions that promote the use of DL in data-limited scenarios. The obtained results are promising, and provide researchers with new tools that could avoid annotating massive amounts of data in a fully supervised manner. The work of Julio Silva Rodríguez to carry out this research and to elaborate this dissertation has been supported by the Spanish Government under FPI Grant PRE2018-083443. Silva Rodríguez, JJ. (2022). Learning from limited labelled data: contributions to weak, few-shot, and unsupervised learning [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/190633
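    The class-proportion idea above couples a prior on label frequencies to training as inequality constraints. The following is a generic quadratic-penalty relaxation, not the thesis's exact formulation; the batch-level averaging and the bound arrays are assumptions for illustration:

    ```python
    import numpy as np

    def proportion_penalty(probs, lower, upper):
        """Penalty that is zero while batch-level predicted class
        proportions stay inside [lower, upper], and grows quadratically
        outside those bounds.

        probs : (n_samples, n_classes) softmax outputs for one batch.
        lower, upper : (n_classes,) assumed prior bounds on proportions.
        """
        p_hat = probs.mean(axis=0)               # predicted class proportions
        below = np.maximum(lower - p_hat, 0.0)   # lower-bound violations
        above = np.maximum(p_hat - upper, 0.0)   # upper-bound violations
        return float(np.sum(below ** 2 + above ** 2))
    ```

    In training, a term like this would be weighted and added to the supervised loss, steering predictions toward the known proportions without per-sample labels.
    
    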

    Acquisition and distribution of synergistic reactive control skills

    Learning from demonstration is an efficient way to attain a new skill. In the context of autonomous robots, using a demonstration to teach a robot accelerates the robot learning process significantly. It helps to identify feasible solutions as starting points for future exploration, or to avoid actions that lead to failure. But the acquisition of pertinent observations is predicated on first segmenting the data into meaningful sequences. These segments form the basis for learning models capable of recognising future actions and reconstructing the motion to control a robot. Furthermore, learning algorithms for generative models are generally not tuned to produce stable trajectories and suffer from parameter redundancy for high-degree-of-freedom robots. This thesis addresses these issues by firstly investigating algorithms, based on dynamic programming and mixture models, for segmentation sensitivity and recognition accuracy on human motion capture data sets of repetitive and categorical motion classes. A stability analysis of the non-linear dynamical systems derived from the resultant mixture model representations aims to ensure that any trajectories converge to the intended target motion as observed in the demonstrations. Finally, these concepts are extended to humanoid robots by deploying a factor analyser for each mixture model component and coordinating the structure into a low-dimensional representation of the demonstrated trajectories. This representation can be constructed as a correspondence map is learned between the demonstrator and the robot for joint space actions. Applying these algorithms for demonstrating movement skills to robots is a further step towards autonomous incremental robot learning.
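    The segmentation stage described above can be illustrated with a small dynamic-programming change-point routine. This is a generic sketch in the spirit of the DP segmentation algorithms investigated, not the thesis implementation, and it operates on a 1-D signal rather than full motion-capture data:

    ```python
    import numpy as np

    def segment_signal(x, k):
        """Split 1-D signal x into k contiguous segments by dynamic
        programming, minimising total within-segment squared error.
        Returns the list of segment start indices.
        """
        x = np.asarray(x, dtype=float)
        n = len(x)
        # Prefix sums let us evaluate any segment's SSE in O(1).
        s1 = np.concatenate(([0.0], np.cumsum(x)))
        s2 = np.concatenate(([0.0], np.cumsum(x ** 2)))

        def sse(i, j):                      # cost of segment x[i:j]
            seg_sum = s1[j] - s1[i]
            return (s2[j] - s2[i]) - seg_sum ** 2 / (j - i)

        INF = float("inf")
        cost = [[INF] * (n + 1) for _ in range(k + 1)]
        back = [[0] * (n + 1) for _ in range(k + 1)]
        cost[0][0] = 0.0
        for seg in range(1, k + 1):
            for j in range(seg, n + 1):
                for i in range(seg - 1, j):
                    c = cost[seg - 1][i] + sse(i, j)
                    if c < cost[seg][j]:
                        cost[seg][j] = c
                        back[seg][j] = i
        # Trace segment boundaries back from the full signal.
        starts, j = [], n
        for seg in range(k, 0, -1):
            j = back[seg][j]
            starts.append(j)
        return starts[::-1]
    ```

    On multi-dimensional motion data the per-segment cost would be replaced by, for example, the negative log-likelihood of a mixture-model component.
    
    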

    Machine learning approaches to optimise the management of patients with sepsis

    The goal of this PhD was to generate novel tools to improve the management of patients with sepsis, by applying machine learning techniques to routinely collected electronic health records. Machine learning is an application of artificial intelligence (AI), where a machine analyses data and becomes able to execute complex tasks without being explicitly programmed. Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals, but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients. This represents a key clinical challenge and a top research priority. The main contribution of the research has been the development of a reinforcement learning framework and algorithms to tackle this sequential decision-making problem. The model was built and then validated on three large non-overlapping intensive care databases, containing data collected from adult patients in the USA and the UK. Our agent extracted implicit knowledge from an amount of patient data that exceeds many-fold the lifetime experience of human clinicians, and learned an optimal treatment strategy by analysing myriads of (mostly suboptimal) treatment decisions. We used state-of-the-art evaluation techniques (called high-confidence off-policy evaluation) and demonstrated that the value of the treatment strategy of the AI agent was on average reliably higher than that of the human clinicians. In two large validation cohorts independent of the training data, mortality was lowest in patients for whom clinicians' actual doses matched the AI policy. We also gained insight into the model representations and confirmed that the AI agent relied on clinically and biologically meaningful parameters when making its suggestions. 
We conducted extensive testing and exploration of the behaviour of the AI agent down to the level of individual patient trajectories, identified potential sources of inappropriate behaviour and offered suggestions for future model refinements. If validated, our model could provide individualized and clinically interpretable treatment decisions for sepsis that may improve patient outcomes.
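    The evaluation step above relies on off-policy estimators. A common building block of such estimators is weighted importance sampling (WIS); the sketch below shows only that building block, while the high-confidence variants used in the work add confidence bounds that are omitted here:

    ```python
    import numpy as np

    def weighted_importance_sampling(ratios, returns):
        """Weighted importance sampling estimate of a target policy's
        value from trajectories collected under a behaviour policy.

        ratios  : per-trajectory products of
                  pi_target(a|s) / pi_behaviour(a|s)
        returns : observed discounted return of each trajectory
        """
        w = np.asarray(ratios, dtype=float)
        g = np.asarray(returns, dtype=float)
        # Normalising by the sum of weights trades a little bias for
        # much lower variance than ordinary importance sampling.
        return float(np.sum(w * g) / np.sum(w))
    ```

    With equal weights the estimate reduces to the mean observed return; trajectories more likely under the target policy are weighted up.
    
    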

    Determining jumping performance from a single body-worn accelerometer using machine learning

    External peak power in the countermovement jump is frequently used to monitor athlete training. The gold standard method uses force platforms, but they are unsuitable for field-based testing. However, alternatives based on jump flight time or Newtonian methods applied to inertial sensor data have not been sufficiently accurate for athlete monitoring. Instead, we developed a machine learning model based on characteristic features (functional principal components) extracted from a single body-worn accelerometer. Data were collected from 69 male and female athletes at recreational, club or national levels, who performed 696 jumps in total. We considered vertical countermovement jumps (with and without arm swing), sensor anatomical locations, machine learning models, and whether to use resultant or triaxial signals. Using a novel surrogate model optimisation procedure, we obtained the lowest errors with a support vector machine when using the resultant signal from a lower back sensor in jumps without arm swing. This model had a peak power RMSE of 2.3 W·kg-1 (5.1% of the mean), estimated using nested cross validation and supported by an independent holdout test (2.0 W·kg-1). This error is lower than in previous studies, although it is not yet sufficiently accurate for a field-based method. Our results demonstrate that functional data representations work well in machine learning by reducing model complexity in applications where signals are aligned in time. Our optimisation procedure was also shown to be robust and can be used in wider applications with low-cost, noisy objective functions.
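    For signals sampled on a common time grid, the functional-principal-component pipeline above can be approximated by ordinary PCA scores feeding a regressor. The sketch below is illustrative only: it substitutes closed-form ridge regression for the support vector machine, and all names are assumptions:

    ```python
    import numpy as np

    def pca_ridge_fit(signals, y, n_components=3, alpha=1.0):
        """Fit ridge regression on principal-component scores of
        time-aligned accelerometer signals (rows = jumps).

        With signals on a common grid, PCA scores approximate
        functional principal component scores.
        """
        X = np.asarray(signals, dtype=float)
        mean = X.mean(axis=0)
        Xc = X - mean
        # SVD yields the principal directions as rows of Vt.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:n_components]
        scores = Xc @ components.T
        # Closed-form ridge solution on the low-dimensional scores.
        A = scores.T @ scores + alpha * np.eye(n_components)
        b = scores.T @ (y - np.mean(y))
        coef = np.linalg.solve(A, b)
        intercept = float(np.mean(y))

        def predict(new_signals):
            Z = (np.asarray(new_signals, dtype=float) - mean) @ components.T
            return Z @ coef + intercept

        return predict
    ```

    Reducing each jump to a handful of component scores is what keeps the downstream model simple enough to generalise from a few hundred jumps.
    
    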

    Blending generative models with deep learning for multidimensional phenotypic prediction from brain connectivity data

    Network science as a discipline has provided us with foundational machinery to study complex relational entities in areas such as social networks, genomics and econometrics. The human brain is a complex network that has recently garnered immense interest within the data science community. Connectomics, the study of the underlying connectivity patterns in the brain, has become an important field of study for the characterization of neurological disorders such as Autism and Schizophrenia. Such connectomic studies have provided several fundamental insights into the brain's intrinsic organisation and its implications for our behavior and health. This thesis proposes a collection of mathematical models that are capable of fusing information from functional and structural connectivity with phenotypic information. Here, functional connectivity is measured by resting state functional MRI (rs-fMRI), while anatomical connectivity is captured using Diffusion Tensor Imaging (DTI). The phenotypic information of interest could refer to continuous measures of behavior or cognition, or may capture levels of impairment in the case of neuropsychiatric disorders. We first develop a joint network optimization framework to predict clinical severity from rs-fMRI connectivity matrices. This model couples two key terms into a unified optimization framework: a generative matrix factorization and a discriminative linear regression model. We demonstrate that the proposed joint inference strategy is successful in generalizing to prediction of impairments in Autism Spectrum Disorder (ASD) when compared with several machine learning, graph theoretic and statistical baselines. At the same time, the model is capable of extracting functional brain biomarkers that are informative of individual measures of clinical severity. We then present two modeling extensions to non-parametric and neural network regression models that are coupled with the same generative framework. 
Building on these general principles, we extend our framework to incorporate multimodal information from Diffusion Tensor Imaging (DTI) and dynamic functional connectivity. At a high level, our generative matrix factorization now estimates a time-varying functional decomposition. At the same time, it is guided by anatomical connectivity priors in a graph-based regularization setup. This connectivity model is coupled with a deep network that predicts multidimensional clinical characterizations and models the temporal dynamics of the functional scan. This framework allows us to simultaneously explain multiple impairments, isolate stable multi-modal connectivity signatures, and study the evolution of various brain states at rest. Lastly, we shift our focus to end-to-end geometric frameworks. These are designed to characterize the complementarity between functional and structural connectivity data spaces, while using clinical information as a secondary guide. As an alternative to the previous generative framework for functional connectivity, our representation learning scheme of choice is a matrix autoencoder that is crafted to reflect the underlying data geometry. This is coupled with a manifold alignment model that maps from function to structure and a deep network that maps to phenotypic information. We demonstrate that the model reliably recovers structural connectivity patterns across individuals, while robustly extracting predictive yet interpretable brain biomarkers. Finally, we also present a preliminary analytical and experimental exposition on the theoretical aspects of the matrix autoencoder representation.
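    The coupling of a generative matrix factorization with a discriminative regression term can be written as a single objective. The sketch below assumes one plausible factorized form, a shared basis with per-subject loadings; the exact decomposition in the thesis may differ, and all names are illustrative:

    ```python
    import numpy as np

    def joint_objective(C, B, c, w, y, lam=1.0):
        """Value of a coupled factorisation/regression objective:

            sum_n || C_n - B diag(c_n) B^T ||_F^2
                 + lam * (y_n - w . c_n)^2

        C : (N, d, d) stack of connectivity matrices
        B : (d, k)    shared basis of subnetworks
        c : (N, k)    per-subject loadings
        w : (k,)      regression weights onto the clinical score y
        """
        total = 0.0
        for n in range(len(C)):
            recon = B @ np.diag(c[n]) @ B.T       # generative term
            total += np.sum((C[n] - recon) ** 2)
            total += lam * (y[n] - w @ c[n]) ** 2  # discriminative term
        return float(total)
    ```

    Optimising such an objective by alternating updates over B, c and w is what ties the extracted subnetworks to clinical severity, rather than factorising first and regressing afterwards.
    
    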

    Doctor of Philosophy

    The human brain is the seat of cognition and behavior. Understanding the brain mechanistically is essential for appreciating its linkages with cognitive processes and behavioral outcomes in humans. Mechanisms of brain function represent rich and widely under-investigated biological substrates for neural-driven studies of psychiatry and mental health. Research examining intrinsic connectivity patterns across whole-brain systems utilizes functional magnetic resonance imaging (fMRI) to trace spontaneous fluctuations in blood oxygen-level dependent (BOLD) signals. In the first study presented, we reveal patterns of dynamic attractors in resting state functional connectivity data corresponding to well-documented biological networks. We introduce a novel simulation for whole-brain dynamics that can be adapted to either group-level analysis or single-subject level models. We describe the stability of intrinsic functional architecture in terms of transient and global steady states resembling biological networks. In the second study, we demonstrate plasticity in functional connectivity following a minimum six-week intervention to train cognitive performance in a speed reading task. Long-term modulation of connectivity with language regions indicates functional connectivity as a candidate biomarker for tracking and measuring functional changes in neural systems as outcomes of cognitive training. The third study demonstrates the utility of functional biomarkers in predicting individual differences in behavioral and cognitive features. We successfully predict three major domains of personality psychology (intelligence, agreeableness, and conscientiousness) in individual subjects using a large (N=475) open source data sample compiled by the National Institutes of Health's Human Connectome Project.

    Generalisable FPCA-based Models for Predicting Peak Power in Vertical Jumping using Accelerometer Data

    Peak power in the countermovement jump is correlated with various measures of sports performance and can be used to monitor athlete training. The gold standard method for determining peak power uses force platforms, but they are unsuitable for field-based testing favoured by practitioners. Alternatives include predicting peak power from jump flight times, or using Newtonian methods based on body-worn inertial sensor data, but so far neither has yielded sufficiently accurate estimates. This thesis aims to develop a generalisable model for predicting peak power based on Functional Principal Component Analysis applied to body-worn accelerometer data. Data was collected from 69 male and female adults, engaged in sports at recreational, club or national levels. They performed up to 16 countermovement jumps each, with and without arm swing, 696 jumps in total. Peak power criterion measures were obtained from force platforms, and characteristic features from accelerometer data were extracted from four sensors attached to the lower back, upper back and both shanks. The best machine learning algorithm, jump type and sensor anatomical location were determined in this context. The investigation considered signal representation (resultant, triaxial or a suitable transform), preprocessing (smoothing, time window and curve registration), feature selection and data augmentation (signal rotations and SMOTER). A novel procedure optimised the model parameters based on Particle Swarm applied to a surrogate Gaussian Process model. Model selection and evaluation were based on nested cross validation (Monte Carlo design). The final optimal model had an RMSE of 2.5 W·kg-1, which compares favourably to earlier research (4.9 ± 1.7 W·kg-1 for flight-time formulae and 10.7 ± 6.3 W·kg-1 for Newtonian sensor-based methods). 
    Whilst this is not yet sufficiently accurate for applied practice, this thesis has developed and comprehensively evaluated new techniques, which will be valuable to future biomechanical applications.
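    The surrogate-assisted tuning described above uses Particle Swarm as the inner optimiser over the cheap surrogate. A minimal PSO, applied here to a generic inexpensive objective standing in for the Gaussian Process surrogate (which is omitted), might look like this; coefficients and names are conventional choices, not the thesis's settings:

    ```python
    import numpy as np

    def particle_swarm(f, bounds, n_particles=20, n_iter=100, seed=0):
        """Minimal particle swarm optimiser for a cheap objective f.

        bounds : list of (low, high) per dimension.
        Returns the best position found.
        """
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        dim = len(bounds)
        x = rng.uniform(lo, hi, size=(n_particles, dim))  # positions
        v = np.zeros_like(x)                              # velocities
        pbest = x.copy()
        pbest_val = np.array([f(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()
        w, c1, c2 = 0.7, 1.5, 1.5                         # common coefficients
        for _ in range(n_iter):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            # Pull each particle toward its own best and the swarm best.
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([f(p) for p in x])
            improved = vals < pbest_val
            pbest[improved] = x[improved]
            pbest_val[improved] = vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest
    ```

    In the surrogate setting, f would be the surrogate's predicted error at a hyperparameter setting, so each swarm evaluation is cheap even when the true objective (training and cross-validating a model) is not.
    
    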