
    Characterisation and modulation of drug resistance in lung cancer cell lines

    Chemotherapy drug resistance is a major obstacle in the treatment of cancer. It can result from an increase in levels of cellular drug efflux pumps such as P-glycoprotein (P-gp). Using cellular models, this thesis aimed to investigate resistance in lung cancer cells while developing siRNA and membrane proteomic techniques, and to increase our knowledge of the effect of lapatinib, a newly developed targeted therapy, in these resistant cells. Lapatinib, a growth factor receptor tyrosine kinase inhibitor, synergised with P-gp substrate cytotoxics in P-gp over-expressing resistant cells. However, lapatinib treatment at clinically relevant concentrations also increased levels of the P-gp drug transporter in a dose-responsive manner. Conversely, exposure to epidermal growth factor (EGF), an endogenous growth factor receptor ligand, resulted in a decrease in P-gp expression. Using drug accumulation, efflux and toxicity assays, we determined that alteration of P-gp levels by either lapatinib or EGF had little functional significance. P-gp is not the only resistance mechanism, so siRNA-mediated gene silencing was exploited to investigate additional proteins with potential roles in resistance. Firstly, P-gp knockdown by siRNA was coupled with toxicity and accumulation assays to determine the impact of silencing this protein in the chosen resistant lung cells. Additional putative targets were chosen from microarray data identifying genes associated with the development of paclitaxel resistance. Of the three genes investigated, ID3, CRYZ and CRIP1, ID3 emerged as having a potential role in contributing to resistance in one of the resistant lung carcinoma cell lines investigated. Many of the proteins important in resistance are membrane-expressed but, due to their size and hydrophobic nature, can be difficult to characterise. A 2D-LC-MS method was designed and employed to examine membrane proteins from the resistant lung cell models. Parameters important for optimal identification of the proteins were determined. Large numbers of proteins were identified and compared, highlighting those that were differentially expressed

    Big data analytics for preventive medicine

    © 2019, Springer-Verlag London Ltd., part of Springer Nature. Medical data is among the most rewarding and yet most complicated data to analyze. How can healthcare providers use modern data analytics tools and technologies to analyze and create value from such complex data? Data analytics promises to efficiently discover valuable patterns by analyzing large amounts of unstructured, heterogeneous, non-standard and incomplete healthcare data. It not only forecasts but also supports decision making, and it is increasingly seen as a breakthrough whose goal is to improve the quality of patient care and reduce healthcare costs. The aim of this study is to provide a comprehensive and structured overview of extensive research on the advancement of data analytics methods for disease prevention. This review first introduces disease prevention and its challenges, followed by traditional prevention methodologies. We summarize state-of-the-art data analytics algorithms used for disease classification, clustering (for example, detecting unusually high incidence of a particular disease), anomaly detection (detecting the presence of disease) and association, together with their respective advantages, drawbacks and guidelines for selecting a specific model, followed by a discussion of recent developments and successful applications of disease prevention methods. The article concludes with open research challenges and recommendations
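    As a rough illustration only (not code from the article, which is a survey): two of the analytics task types summarized above, disease classification and anomaly detection, can be sketched with standard scikit-learn estimators on synthetic data standing in for patient records.

        # Illustrative sketch with synthetic data; no real healthcare data involved.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import IsolationForest
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic "patient" feature matrix with an imbalanced binary disease label.
        X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

        # Classification of disease: a supervised model predicting the label.
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("classification AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

        # Anomaly detection: an unsupervised model flagging unusual records.
        iso = IsolationForest(contamination=0.05, random_state=0).fit(X_tr)
        print("records flagged as anomalous:", int((iso.predict(X_te) == -1).sum()))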

    Molecular dissection of pericyte-to-neuron reprogramming reveals cellular identity safeguarding mechanisms

    Neurodegenerative diseases, strokes, and injuries affect millions of people worldwide, and current treatment options are insufficient. Since the death of neurons in the brain is a common feature of all these disorders, a potential therapy could replace the lost neurons with newly generated ones to restore brain function. Natural adult neurogenesis in humans has proven inadequate to deal with a major loss of brain cells. Therefore, for many years, transplantation of fetal tissue or stem cell-derived neural progenitors has been the focus of investigations into new treatments. More recently, new methods and insights have rendered brain-resident cells a promising basis for an alternative therapeutic approach. While cellular identity was long believed to be irreversible once a cell had differentiated, this view has changed gradually over the last decades. Among other cells, it has been shown for human brain pericytes that retroviral expression of the transcription factors (TFs) Ascl1 and Sox2 (AS) is sufficient to generate functional induced neurons (iNs) by direct reprogramming, and that this process is accompanied by a neural stem cell (NSC)-like state. While it is now clear that even a terminal cellular identity can be changed, the exact mechanisms remain elusive. Therefore, in this study we aimed at (i) identifying barriers and molecular mechanisms involved in cellular identity conversion from somatic cells into induced neurons, (ii) improving the efficiency of pericyte-to-neuron reprogramming, and (iii) directing the reprogramming process towards the desired cell types. By single-cell RNA sequencing, we generated a high-resolution dataset of cells during pericyte-to-iN conversion. Using RNA velocity analysis, we were able to predict the progression of cells towards the neuronal fate and could identify blocker and facilitator genes that obstruct or enable cells to progress past a designated decision point. Among the facilitator genes, we identified several chromatin remodelers and cytoskeleton genes, and revealed a temporal heterogeneity in their expression patterns. Interestingly, we show that the blocker genes are part of a cellular identity safeguarding mechanism triggered by AS reprogramming. We demonstrate that the metabolic transition from glycolysis to oxidative phosphorylation is an essential barrier cells must overcome to transit from a pericyte towards a neuronal identity. Our findings suggest that any failure to meet metabolic requirements results in cells being either unable to change their identity or adopting a confused fate. To influence the NSC-like state, we modulated either NOTCH signaling or TGF-β signaling, by inhibition of the γ-secretase or by dual SMAD inhibition, respectively, via small molecules. Strikingly, both treatments counteracted pericyte identity safeguarding mechanisms and significantly lowered reprogramming barriers. Consequently, our results show a strong increase in the number of generated iNs. Interestingly, we demonstrate that TGF-β signaling inhibition is more potent in lowering these metabolic barriers than NOTCH signaling inhibition, re-routing cells onto an entirely different path towards neurons. Additionally, TGF-β signaling inhibition almost completely suppresses the generation of undesired off-target cells without a clear identity, likely due to antioxidant regulon activity, which supports the metabolic transition.
    Remarkably, we illustrate that despite the different treatments, the iNs are transcriptionally similar and that both neuronal subtypes can be mapped to developing human brain regions. Finally, we used a different approach and reprogrammed pericytes into TUBB3+ cells using Neurog2/Sox2 (NS). We show that NS-generated cells have a transcriptomic identity distinct from AS-generated ones: while they are more likely to lose their original identity, the NS-generated iNs exhibit more progenitor-like properties, pointing to the different reprogramming capacities of proneural TFs. Altogether, this thesis not only emphasizes that cellular identity, even in terminally differentiated cells, can still be altered without returning to a pluripotent state; it also illustrates several previously unknown mechanisms at work during direct pericyte-to-iN reprogramming and opens new ways to improve its efficiency. Every new insight into cross-lineage cellular identity conversion paves the way for future neuronal replacement therapies
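    The RNA velocity analysis referred to above is typically run with a dedicated single-cell toolkit. As a hedged sketch only, such a step might look as follows with the scVelo package; the input file, parameters, and annotation column are placeholders, not the thesis's actual pipeline.

        # Illustrative scVelo sketch; file name and parameters are placeholders.
        import scvelo as scv

        # AnnData object containing spliced/unspliced counts from the experiment.
        adata = scv.read("pericyte_to_iN.h5ad")   # hypothetical file

        scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
        scv.pp.moments(adata, n_pcs=30, n_neighbors=30)

        # Estimate per-gene RNA velocities and the cell-to-cell transition graph.
        scv.tl.velocity(adata, mode="stochastic")
        scv.tl.velocity_graph(adata)

        # Stream plot of the predicted progression towards the neuronal fate
        # (assumes a UMAP embedding and a cell-type annotation are present in adata).
        scv.pl.velocity_embedding_stream(adata, basis="umap", color="cell_type")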

    Advances in Data Mining Knowledge Discovery and Applications

    Advances in Data Mining Knowledge Discovery and Applications aims to help data miners, researchers, scholars, and PhD students who wish to apply data mining techniques. The primary contribution of this book is to highlight frontier fields and implementations of knowledge discovery and data mining. At first glance the same topics may appear to recur, but in general the same approaches and techniques can serve different fields and areas of expertise. The book presents knowledge discovery and data mining applications in two sections. Data mining draws on statistics, machine learning, data management and databases, pattern recognition, artificial intelligence, and other areas, and most of these are covered here through different data mining applications. The eighteen chapters are organized into two parts: Knowledge Discovery and Data Mining Applications

    DATA-DRIVEN ANALYSIS AND MAPPING OF THE POTENTIAL DISTRIBUTION OF MOUNTAIN PERMAFROST

    In alpine environments, mountain permafrost is defined as a thermal state of the ground: it corresponds to any lithosphere material that remains at or below 0°C for at least two years. Its degradation can lead to increasing rock fall activity and sediment transfer rates. During the last 20 years, knowledge of this phenomenon has improved significantly thanks to many studies and monitoring projects, revealing an extremely discontinuous and complex spatial distribution, especially at the micro scale (the scale of a specific landform; tens to several hundreds of metres). The objective of this thesis was the systematic and detailed investigation of the potential of data-driven techniques for modelling mountain permafrost distribution. Machine learning (ML) algorithms are able to consider a greater number of parameters than classic approaches. Permafrost distribution can be modelled not only by using topo-climatic parameters as a proxy, but also by taking into account known field evidence of permafrost. The latter was collected in a sector of the Western Swiss Alps and mapped from field data (thermal and geoelectrical measurements) and ortho-image interpretation (rock glacier inventorying). A permafrost dataset was built from this evidence and completed with environmental and morphological predictors. The data were first analysed with feature relevance techniques in order to identify the statistical contribution of each controlling factor and to exclude non-relevant or redundant predictors. Five classification algorithms from statistics and machine learning were then applied to the dataset and tested: Logistic regression (LR), linear and non-linear Support Vector Machines (SVM), Multilayer perceptrons (MLP) and Random forests (RF). These techniques infer a classification function from labelled training data (pixels of permafrost absence and presence) to predict permafrost occurrence where it is unknown. Classification performance, assessed with AUROC curves, ranged between 0.75 (linear SVM) and 0.88 (RF), values generally indicative of good model performance. Besides these statistical measures, a qualitative evaluation was performed using field expert knowledge. Both the quantitative and the qualitative evaluation suggested the RF algorithm as the one yielding the best model. As machine learning is a non-deterministic approach, an overview of the model uncertainties is also offered; it indicates the most uncertain sectors, where further field investigations are required to improve the reliability of permafrost maps. RF proved efficient for permafrost distribution modelling, with consistent results comparable to the field observations. The use of environmental variables describing the micro-topography and the ground characteristics (such as curvature indices, NDVI or grain size) favoured the prediction of permafrost distribution at the micro scale. The resulting maps show variations in the probability of permafrost occurrence within distances of a few tens of metres. In some talus slopes, for example, a lower probability of occurrence was predicted in the mid-upper part of the slope. In addition, permafrost lower limits were automatically recognized from the permafrost evidence.
    Lastly, the high resolution of the input dataset (10 metres) made it possible to produce micro-scale maps whose modelled permafrost spatial distribution is less optimistic than that of traditional spatial models. The permafrost prediction was indeed computed without resorting to altitude thresholds (above which permafrost may be found), and the strong discontinuity of mountain permafrost at the micro scale is therefore better represented.
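    As a hedged, minimal sketch of the kind of workflow described above (random forest classification of permafrost presence with AUROC evaluation): the file and predictor names are hypothetical placeholders, not the thesis's actual dataset.

        # Illustrative sketch; column names and file are hypothetical.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Each row is a 10 m pixel with topo-climatic/morphological predictors and a
        # binary label: permafrost evidence present (1) or absent (0).
        df = pd.read_csv("permafrost_evidence.csv")
        predictors = ["elevation", "solar_radiation", "curvature", "ndvi", "grain_size"]
        X, y = df[predictors], df["permafrost"]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(X_tr, y_tr)

        # AUROC on held-out pixels; the thesis reports 0.75 (linear SVM) to 0.88 (RF).
        print("AUROC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

        # Impurity-based importances as a simple feature relevance screen.
        print(dict(zip(predictors, rf.feature_importances_)))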

    Machine Learning Approaches for Improving Prediction Performance of Structure-Activity Relationship Models

    In silico bioactivity prediction studies are designed to complement in vivo and in vitro efforts to assess the activity and properties of small molecules. In silico methods such as Quantitative Structure-Activity/Property Relationship (QSAR) are used to correlate the structure of a molecule to its biological property in drug design and toxicological studies. In this body of work, I started with two in-depth reviews of the application of machine learning based approaches and feature reduction methods to QSAR, and then investigated solutions to three common challenges faced in machine learning based QSAR studies. First, to improve the prediction accuracy of learning from imbalanced data, the Synthetic Minority Over-sampling Technique (SMOTE) and Edited Nearest Neighbor (ENN) algorithms combined with bagging as an ensemble strategy were evaluated. The Friedman's aligned ranks test and the subsequent Bergmann-Hommel post hoc test showed that this method significantly outperformed other conventional methods. SMOTEENN with bagging became less effective when the imbalance ratio (IR) exceeded a certain threshold (e.g., >40). The ability to separate the few active compounds from the vast amounts of inactive ones is of great importance in computational toxicology. Deep neural networks (DNN) and random forest (RF), representing deep and shallow learning algorithms, respectively, were chosen to carry out structure-activity relationship-based chemical toxicity prediction. Results suggest that DNN significantly outperformed RF (p < 0.001, ANOVA) by 22-27% for four metrics (precision, recall, F-measure, and AUPRC) and by 11% for another (AUROC). Lastly, current features used for QSAR based machine learning are often very sparse and limited by the logic and mathematical processes used to compute them. Transformer embedding features (TEF) were developed as new continuous vector descriptors/features using the latent space embedding from a multi-head self-attention model. The significance of TEF as new descriptors was evaluated by applying them to tasks such as predictive modeling, clustering, and similarity search. An accuracy of 84% on the Ames mutagenicity test indicates that these new features have a correlation with biological activity. Overall, the findings in this study can be applied to improve the performance of machine learning based Quantitative Structure-Activity/Property Relationship (QSAR) efforts for enhanced drug discovery and toxicology assessments
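    As a hedged sketch of the imbalance-handling strategy described above (SMOTE+ENN resampling combined with bagging), not the study's exact configuration: the input files, base learner, and hyperparameters below are placeholders.

        # Illustrative sketch: SMOTE+ENN resampling inside each bagging member.
        import numpy as np
        from imblearn.combine import SMOTEENN
        from imblearn.pipeline import Pipeline
        from sklearn.ensemble import BaggingClassifier
        from sklearn.metrics import average_precision_score
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # X: molecular descriptors/fingerprints, y: imbalanced activity labels (hypothetical files).
        X = np.load("descriptors.npy")
        y = np.load("activity.npy")
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

        # Each base learner resamples its bootstrap sample with SMOTE+ENN before fitting.
        base = Pipeline([
            ("resample", SMOTEENN(random_state=0)),
            ("tree", DecisionTreeClassifier()),
        ])
        bag = BaggingClassifier(estimator=base, n_estimators=50, random_state=0)  # "base_estimator" in older scikit-learn
        bag.fit(X_tr, y_tr)

        # AUPRC is one of the metrics used for comparison in the study.
        print("AUPRC:", average_precision_score(y_te, bag.predict_proba(X_te)[:, 1]))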

    Mechanical characterisation of Nb3Sn Rutherford cable stacks

    Nb3Sn Rutherford cables are used in CERN's superconducting 11 T dipole and MQXF quadrupole magnets, which are proposed for upgrading the instantaneous luminosity (rate of particle collisions) of the Large Hadron Collider (LHC) by a factor of five, towards the High Luminosity Large Hadron Collider (HL-LHC). Nb3Sn-based conductors are also the key technology for the envisioned Future Circular Collider (FCC), with an operating magnetic dipole field of 16 T. The baseline superconductor of the LHC dipole magnets is Nb–Ti, but operation above 10 T is not possible due to the current-carrying limitations of this superconductor at higher magnetic fields. Therefore, a superconducting material such as Nb3Sn, with proven performance at 10 T and above, has to be used. The choice of Nb3Sn-based cables affects the magnet manufacturing process, as it requires a heat treatment up to 650°C and an epoxy resin impregnation, and it introduces mechanical difficulties because the superconducting filaments are brittle and strain sensitive. Mechanical overloading of the filaments leads to irreversible conductor damage. The designs of the 11 and 16 T magnets are expected to push the conductor towards its mechanical and electrical performance limits. The forces induced by the magnetic field on the current-carrying conductor are balanced by mechanical pre-loading of the magnet; the highest controlled mechanical pre-load for the 11 T dipole magnet is applied at ambient temperature. The mechanical stress limits of Nb3Sn-based cables have so far been investigated at cryogenic temperatures. The strength and stiffness of the cable insulation system, formed by glass-fibre-reinforced resin, increase at low temperatures, so ultimate stress values determined at cryogenic temperature are not conservative: the ultimate stress limit of the insulated conductor is assumed to be lower at ambient temperature. The cable limits at ambient temperature need to be known for the ongoing magnet manufacturing process and for future design approaches. Furthermore, the compressive stress–strain behaviour of a coil conductor block at ambient temperature is the key material characteristic needed to recalculate the stress level in the coil during the assembly process. Existing approaches using an indirect strain measurement method introduce uncertainties in the low-strain regime, which is the essential strain range for a material compound consisting mostly of heat-annealed copper and epoxy resin. Compressive stress–strain data of an impregnated conductor block based on a direct strain measurement system are required, as available data have been collected on samples with a different strand type and insulation system. The direct strain measurements obtained here can be correlated with strain gauge data measured directly on a coil. The stress distribution in a Nb3Sn Rutherford cable needs to be understood and validated in order to understand strain-induced degradation effects in the insulated conductor; this knowledge can also help to optimise the stress distribution in envisioned magnet designs. The stress–strain state in the copper and Nb3Sn phases of a loaded conductor block has to be determined experimentally. This dissertation describes a test protocol and first results on the stress limits of a Nb3Sn Rutherford cable under homogeneous load applied in the transverse direction.
    The compressive stress–strain behaviour of impregnated Nb3Sn Rutherford cable stacks was investigated experimentally. This includes a detailed report on the sample manufacturing process, the measurements performed, and the validation of the results through a comparison with data from cable stacks extracted from a coil. The presented results from neutron diffraction measurements of loaded cable stacks allow the determination of the stress–strain state of the copper and Nb3Sn phases in the impregnated conductor. The relevant measured results have been recalculated with numerical calculations based on the Finite Element Method (FEM).
    Contents:
    1. Introduction
       1.1. The LHC and the HL-LHC project
       1.2. The FCC study
       1.3. Superconducting materials for accelerator magnets
       1.4. Multi-filamentary wires and Rutherford cables
       1.5. Coil manufacturing process
       1.6. Magnet coil assembly
       1.7. Objectives of this thesis
    2. Theory: fundamental principles
       2.1. Analytical calculation: sector coil dipole
       2.2. Mechanical behaviour of composite materials
       2.3. Failure criteria and strength hypotheses for materials
       2.4. Compressive tests
       2.5. Fundamental principles of neutron scattering
            2.5.1. Test apparatus and measurement method
            2.5.2. Lattice plane and Miller indices
            2.5.3. Bragg diffraction and interference
            2.5.4. Diffraction-based strain calculation
            2.5.5. Diffraction-based stress calculation
       2.6. Fundamental principles of FEM
    3. Homogeneous transversal compression of Nb3Sn Rutherford cables
       3.1. Superconducting cable test stations
       3.2. The FRESCA test facility and specific sample holder
       3.3. The sample description
       3.4. Experimental procedure
       3.5. Review of existing contact pressure measurement system
       3.6. Compressive test station
       3.7. Validation of the pressure-sensitive films
       3.8. Press punch
       3.9. Improvement of the contact stress distribution
            3.9.1. First test: cable pressed between the bare tools
            3.9.2. Second test: tool shimmed with a soft Sn96Ag4
            3.9.3. Third test: tool shimmed with a soft Sn60Pb40
            3.9.4. Fourth test: tool shimmed with a soft indium
            3.9.5. Fifth test: tool shimmed with a polyimide film
       3.10. Test results
       3.11. Conclusion
    4. Material characterisation by a compression test
       4.1. Test set-ups for compressive tests and validation
       4.2. Sample preparation
       4.3. Compressive stress–strain measurement
       4.4. Ten-stack sample stiffness estimation based on composite theories
       4.5. Dye penetration test on loaded and unloaded samples
       4.6. Conclusion
    5. Neutron diffraction measurements
       5.1. Test set-up for neutron diffraction measurement
       5.2. The samples
       5.3. Experiment: lattice stress–strain measurements
       5.4. Conclusion
    6. Simulation and modelling of Nb3Sn cables
       6.1. The models
       6.2. The 2D simulation results
       6.3. The 3D simulation results
       6.4. Conclusion
    7. Comprehensive summary
       7.1. Summary
       7.2. Critical review
       7.3. Next steps
    Appendix
       A. Calculation of the magnetic field components in a sector coil without iron
       B. Approaches for the determination of diffraction elastic constants
       C. Manufacturing drawings
       D. FEM calculation results of the 2D model
       E. FEM calculation results of the 3D model
       F. Source Codes
    Bibliography
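    As a hedged, illustrative sketch of the diffraction-based strain calculation mentioned above (cf. Section 2.5.4 of the outline): the numbers below are assumed values chosen for illustration, not measurements from the thesis.

        # Minimal sketch of a diffraction-based lattice strain estimate. All values
        # are assumed for illustration and are not measured data from the thesis.
        import numpy as np

        wavelength = 1.67e-10           # neutron wavelength in m (assumed)
        theta_ref = np.radians(44.95)   # Bragg angle of the unloaded reference state (assumed)
        theta_load = np.radians(45.05)  # Bragg angle measured under transverse load (assumed)

        # Bragg's law (first order): lambda = 2 * d * sin(theta)  ->  d = lambda / (2 sin theta)
        d_ref = wavelength / (2 * np.sin(theta_ref))
        d_load = wavelength / (2 * np.sin(theta_load))

        # Elastic lattice strain of the reflecting phase (negative = compression).
        strain = (d_load - d_ref) / d_ref
        print(f"lattice strain: {strain:.2e}")

        # With an assumed diffraction elastic modulus E_hkl for the reflection, a
        # rough uniaxial stress estimate follows from Hooke's law.
        E_hkl = 130e9                   # Pa, assumed
        print(f"approximate phase stress: {E_hkl * strain / 1e6:.1f} MPa")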

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its broad public popularity establish a need for a comprehensive text on the subject. The series of books entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. In addition to helping readers understand each section deeply, the two books present useful hints and strategies for solving the problems discussed in the subsequent chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence will lead to significant development in the field of data mining