
    Development and application of molecular and computational tools to image copper in cells

    Copper is a trace element essential for many biological processes. A deficiency or excess of copper(I) ions, the main oxidation state of copper in the cellular environment, is increasingly linked to the development of neurodegenerative diseases such as Parkinson’s and Alzheimer’s disease (PD and AD). The regulatory mechanisms for copper(I) are under active investigation, and lysosomes, best known as cellular “incinerators”, have been found to play an important role in the trafficking of copper inside the cell. It is therefore important to develop reliable experimental methods to detect, monitor, and visualise this metal in cells, and to develop tools that improve the data quality of microscopy recordings. This would enable detailed exploration of cellular processes related to copper trafficking through lysosomes. The research presented in this thesis aimed to develop chemical and computational tools for investigating concentration changes of copper(I) in cells (particularly in lysosomes), and it presents a preliminary case study that uses the microscopy image quality enhancement tools developed here to investigate lysosomal mobility changes upon treatment of cells with different PD or AD drugs. Chapter I first reports the synthesis of a previously reported copper(I) probe (CS3). The photophysical properties of this probe and its behaviour in different cell lines were tested; this copper(I) sensor predominantly localized in lipid droplets, and its photostability and quantum yield were insufficient for long-term investigations of cellular copper trafficking. Based on these insights, a new copper(I)-selective fluorescent probe (FLCS1) was designed, synthesized, and characterized, showing superior photophysical properties (photostability, quantum yield) over CS3.
The probe showed selectivity for copper(I) over other physiologically relevant metals and strong colocalization with lysosomes in SH-SY5Y cells. It was then used to study and monitor lysosomal copper(I) levels via fluorescence lifetime imaging microscopy (FLIM); to the best of my knowledge, this is the first copper(I) probe based on emission lifetime. Chapter II explores different computational deep learning approaches for improving the quality of recorded microscopy images. In total, two existing networks were tested (fNET, CARE) and four new networks were implemented, tested, and benchmarked for their ability to improve the signal-to-noise ratio, upscale the image size (GMFN, SRFBN-S, Zooming SlowMo), and interpolate image sequences (DAIN, Zooming SlowMo) in the z- and t-dimensions of multidimensional simulated and real-world datasets. The best-performing networks of each category were then tested in combination by applying them sequentially to a low signal-to-noise ratio, low-resolution, and low frame-rate image sequence, establishing an image enhancement workstream for investigating lysosomal mobility. Additionally, the new frame interpolation networks were implemented in user-friendly Google Colab notebooks and made publicly available to the scientific community on the ZeroCostDL4Mic platform. Chapter III provides a preliminary case study in which the newly developed fluorescent copper(I) probe, in combination with the computational enhancement algorithms, was used to investigate the effects of five potential Parkinson’s disease drugs (rapamycin, digoxin, curcumin, trehalose, bafilomycin A1) on the mobility of lysosomes in live cells.
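The denoising networks above are benchmarked on signal-to-noise improvement; one standard way to quantify this is the peak signal-to-noise ratio (PSNR). The following is a minimal sketch on toy arrays, not the thesis's actual benchmarking code:

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image
    and an enhanced (e.g. denoised) image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a clean frame, a noisy copy, and a partially denoised copy.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))
noisy = clean + rng.normal(0, 25, size=clean.shape)
denoised = clean + rng.normal(0, 5, size=clean.shape)

assert psnr(clean, denoised) > psnr(clean, noisy)  # denoising raises PSNR
```

A higher PSNR after enhancement indicates a better reconstruction relative to the reference frame.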

    Towards A Computational Intelligence Framework in Steel Product Quality and Cost Control

    Steel is a fundamental raw material for many industries and is widely used in fields including construction, bridges, ships, containers, medical devices, and cars. However, the production of iron and steel is very complex, consisting of four processes: ironmaking, steelmaking, continuous casting, and rolling. Controlling the quality of steel throughout the full manufacturing process is also extremely complicated, so quality control is considered a major challenge for the whole steel industry. This thesis studies quality control, taking the case of Nanjing Iron and Steel Group, and provides new approaches for quality analysis, management, and control in the industry. At present, Nanjing Iron and Steel Group has established a quality management and control system that oversees many of the systems involved in steel manufacturing. It places high statistical demands on business professionals, resulting in limited use of the system. A large amount of quality data has been collected in each system. Currently, all systems mainly focus on processing and analyzing data after the manufacturing process, and product quality problems are mainly detected by sampling and experiment. This method cannot detect product quality issues, or predict hidden ones, in a timely manner. Within the quality control system, the responsibilities and functions of the different information systems involved are intricate. Each information system is merely responsible for storing the data of its corresponding functions; hence the data in each information system is relatively isolated, forming data islands. Iron and steel production is a process industry, so the data in the multiple information systems can be combined to analyze and predict product quality in depth and to provide early warning alerts.
Therefore, it is necessary to introduce new product quality control methods in the steel industry. With the waves of Industry 4.0 and intelligent manufacturing, intelligent technology has also been introduced into the field of quality control to improve the competitiveness of iron and steel enterprises. Applying intelligent technology can generate accurate quality analysis and optimal prediction results based on the data distributed across the factory, and can drive online adjustment of the production process. This not only improves product quality control but also helps reduce product costs. Inspired by this, this thesis provides an in-depth discussion in three chapters: (1) how to use artificial intelligence algorithms to evaluate the quality grade of scrap steel used as raw material is studied in Chapter 3; (2) the probability that a longitudinal crack occurs on the surface of a continuously cast slab is studied in Chapter 4; (3) the prediction of the mechanical properties of finished steel plate is studied in Chapter 5. All three chapters serve as technical support for quality control in iron and steel production.
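Chapter 5 concerns predicting mechanical properties of finished plate from process data. The thesis's actual models are not specified in this abstract, so the following is only a baseline illustration: an ordinary-least-squares fit on synthetic data, with all feature names and coefficient values hypothetical:

```python
import numpy as np

# Hypothetical, normalized process features for rolled plates:
# carbon content, rolling temperature, cooling rate.
# Target: tensile strength (MPa), generated synthetically.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(200, 3))
true_w = np.array([300.0, -80.0, 120.0])          # invented ground truth
y = 400.0 + X @ true_w + rng.normal(0, 5.0, 200)  # measurement noise

# Fit an intercept plus linear weights by ordinary least squares.
A = np.hstack([np.ones((200, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
rmse = np.sqrt(np.mean((y - pred) ** 2))
assert rmse < 10.0  # the fit recovers the synthetic relationship
```

In practice a prediction model like this would be trained on historical plant data and validated on held-out heats before any online use.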

    Convergence of Intelligent Data Acquisition and Advanced Computing Systems

    This book is a collection of published articles from the Sensors Special Issue on "Convergence of Intelligent Data Acquisition and Advanced Computing Systems". It includes extended versions of contributions from the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2019), held in Metz, France, as well as external contributions.

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If they are vastly different, the difference might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to sets of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to measure how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we found that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
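The set comparisons above can be illustrated with a minimal sketch: a Jaccard index over raw term sets next to a bag-of-words cosine similarity. Note the paper computes cosine over pretrained SpaCy/FastText embeddings rather than raw term counts, and the term lists below are invented for illustration:

```python
from collections import Counter
import math

def jaccard(a, b):
    """Jaccard index between two sets of key terms."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cosine(a, b):
    """Cosine similarity between term-frequency vectors of two term lists."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

# Hypothetical concept sets extracted from code and documentation.
code_terms = ["parser", "token", "buffer", "stream"]
doc_terms = ["parser", "token", "grammar", "stream"]

print(jaccard(code_terms, doc_terms))  # 3 shared / 5 distinct = 0.6
print(cosine(code_terms, doc_terms))   # 3 / (2 * 2) = 0.75
```

Embedding-based cosine additionally credits semantically related but non-identical terms (e.g. "parse" vs "parser"), which is why the pre-trained model matters so much to the scores.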

    How to Rank Answers in Text Mining

    In this thesis, we focus mainly on case studies about answers. We present the CEW-DTW methodology and assess its ranking quality. We then improve this methodology by combining it with the Kullback-Leibler divergence, which measures the difference between the probability distributions of two sequences. However, CEW-DTW and KL-CEW-DTW do not account for the effect of noise and keywords from the viewpoint of probability distributions. We therefore develop a new methodology, the General Entropy, to see how the probabilities of noise and keywords affect answer quality. We first analyze some properties of the General Entropy, such as its value range. In particular, we seek an objective goal that can serve as a standard for assessing answers, and so introduce the maximum general entropy. We use the general entropy methodology to construct an imaginary answer with maximum entropy from a mathematical viewpoint (though this answer may not exist); it can be regarded as an “ideal” answer. Comparing maximum entropy probabilities with global probabilities of noise and keywords, the maximum entropy probability of noise is smaller than the global probability of noise, while the maximum entropy probabilities of chosen keywords are larger than the global probabilities of keywords under some conditions. This allows us to deterministically select the maximum number of keywords. We also use an Amazon dataset and a small survey group to assess the general entropy. Although these methodologies can analyze answer quality, they do not incorporate the inner connections among keywords and noise. Based on the Markov transition matrix, we therefore develop the Jump Probability Entropy, and again use the Amazon dataset to compare maximum jump entropy probabilities with global jump probabilities of noise and keywords.
Finally, we describe the steps for obtaining answers from the Amazon dataset, including extracting the original answers and removing stop words and collinearity. We compare the developed methodologies to check whether they are consistent. We also introduce the Wald–Wolfowitz runs test and compare it with the developed methodologies to verify their relationships. Based on the comparison results, we draw conclusions about the consistency of these methodologies and outline future plans.
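The Kullback-Leibler divergence that KL-CEW-DTW builds on can be sketched directly; the distributions below are invented stand-ins for the keyword/noise probabilities of two answers, not values from the thesis:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    probability distributions over the same vocabulary, e.g. keyword
    and noise probabilities in two answers. Terms with p_i = 0
    contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical keyword/noise distributions of two answers.
answer_a = [0.5, 0.3, 0.2]
answer_b = [0.4, 0.4, 0.2]

d = kl_divergence(answer_a, answer_b)
assert d > 0                                    # distributions differ
assert kl_divergence(answer_a, answer_a) == 0   # identical distributions
```

KL divergence is asymmetric (D(P‖Q) ≠ D(Q‖P) in general), which is worth keeping in mind when it is used to compare a candidate answer against a reference.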

    Flexible Automation and Intelligent Manufacturing: The Human-Data-Technology Nexus

    This is an open access book. It gathers the first volume of the proceedings of the 31st edition of the International Conference on Flexible Automation and Intelligent Manufacturing, FAIM 2022, held on June 19–23, 2022, in Detroit, Michigan, USA. Covering four thematic areas, Manufacturing Processes, Machine Tools, Manufacturing Systems, and Enabling Technologies, it reports on advanced manufacturing processes and innovative materials for 3D printing, applications of machine learning, artificial intelligence and mixed reality in various production sectors, as well as important issues in human-robot collaboration, including methods for improving safety. Contributions also cover strategies to improve quality control, supply chain management and training in the manufacturing industry, and methods supporting circular supply chains and sustainable manufacturing. All in all, this book provides academics, engineers and professionals with extensive information on both scientific and industrial advances in the converging fields of manufacturing, production, and automation.

    Archaeology or crime scene? Teeth micro and macro structure analysis as dating variable

    Simple methods to aid in determining the forensic or archaeological relevance of skeletonized remains have been researched since the 1950s. With advances in microscopic imaging techniques and machine learning data analysis methods, the assessment of decontextualized, commingled remains has room for improvement. This thesis pioneers a new approach to analyzing dental skeletal remains to determine forensic relevance. Archaeological dental samples collected from the ancient city of Ur, in modern-day southern Iraq, together with modern dental extractions, were processed for scanning electron microscopy imaging. Archaeological and modern samples displayed different surface and dentinal tubule opening characteristics. The image files were then analyzed using a custom-built convolutional neural network model. The model’s performance metrics indicate that it made better-than-random predictions based on learned associations. Thus, scanning electron microscopy combined with machine learning analysis has potential for distinguishing archaeological dental samples from modern ones.

    Defect detection in infrared thermography by deep learning algorithms

    Non-destructive evaluation (NDE) is a field concerned with identifying all types of structural damage in an object of interest without causing any permanent damage or modification, and it has been intensively investigated for many years. Infrared thermography (IRT) is an NDE technology that inspects, characterizes, and analyzes defects based on infrared images (sequences) recorded from the emission and reflection of infrared light, in order to evaluate non-self-heating objects for quality control and safety assurance. In recent years, the deep learning field of artificial intelligence has made remarkable progress in image processing applications and has shown its ability to overcome most of the disadvantages of previously existing approaches in a great number of applications. However, due to insufficient training data, deep learning algorithms remain underexplored in this area, and only a few publications report their application to thermographic non-destructive evaluation (TNDE). Intelligent, highly automated deep learning algorithms could be coupled with infrared thermography to identify defects (damage) in composites, steel, etc. with high confidence and accuracy. Among the topics in the TNDE research field, supervised and unsupervised machine learning techniques are the most innovative and challenging for defect detection analysis. In this project, we construct integrated frameworks for processing raw infrared thermography data using deep learning algorithms; the highlights of the proposed methodologies are as follows: 1. Automatic defect identification and segmentation by deep learning algorithms in infrared thermography. Pre-trained convolutional neural networks (CNNs) are introduced to capture defect features in infrared thermal images, implementing CNN-based models for the detection of structural defects in samples made of composite materials (fault diagnosis). Several alternative deep CNNs for defect detection in infrared thermography are compared, evaluating automatic defect detection and segmentation performance using different deep learning detection methods: (i) instance segmentation (CenterMask; Mask R-CNN); (ii) object detection (YOLOv3; Faster R-CNN); (iii) semantic segmentation (U-Net; Res-U-Net). 2. A data augmentation technique based on synthetic data generation, to reduce the high cost associated with collecting original infrared data on composites (aircraft components) and to enrich the training data for feature learning in TNDE. 3. Generative adversarial networks (deep convolutional GAN and Wasserstein GAN) are introduced into infrared thermography in combination with partial least squares thermography (PLST), forming PLS-GAN networks, for the extraction of visible defect features and the enhancement of defect visibility, removing noise in pulsed thermography. 4. Automatic defect depth estimation (the characterization issue) from simulated infrared data using a simplified recurrent neural network, the Gated Recurrent Unit (GRU), through supervised regression learning.
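Point 2 above enriches scarce infrared training data via synthetic generation; a much simpler, classical form of augmentation can be sketched as follows. All parameters (flip choices, noise level, frame size) are illustrative, not taken from the thesis:

```python
import numpy as np

def augment_thermal_frame(frame, rng):
    """Generate synthetic variants of one infrared frame: horizontal
    and vertical flips plus copies with additive Gaussian sensor noise.
    Returns the original-geometry variants followed by their noisy copies."""
    variants = [frame, np.fliplr(frame), np.flipud(frame)]
    noisy = [v + rng.normal(0, 0.01, size=v.shape) for v in variants]
    return variants + noisy

rng = np.random.default_rng(1)
frame = rng.uniform(0, 1, size=(32, 32))   # stand-in for a thermal frame
batch = augment_thermal_frame(frame, rng)

assert len(batch) == 6                     # one frame -> six training samples
assert all(v.shape == frame.shape for v in batch)
```

GAN-based generation, as used in the thesis, goes further by synthesizing entirely new defect appearances rather than only transforming existing frames.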