106 research outputs found

    Modeling and design of heterogeneous hierarchical bioinspired spider web structures using generative deep learning and additive manufacturing

    Spider webs are incredible biological structures, comprising thin but strong silk filaments arranged into complex hierarchical architectures with striking mechanical properties (e.g., lightweight yet high strength, achieving diverse mechanical responses). While simple 2D orb webs can easily be mimicked, the modeling and synthesis of 3D web structures remain challenging, partly due to their rich set of design features. Here we provide a detailed analysis of the heterogeneous graph structures of spider webs, and use deep learning to model and then synthesize artificial, bio-inspired 3D web structures. The generative AI models are conditioned on key geometric parameters (including average edge length, number of nodes, average node degree, and others). To identify graph construction principles, we use inductive representation sampling of large experimentally determined spider web graphs to yield a dataset that is used to train three conditional generative models: 1) an analog diffusion model inspired by nonequilibrium thermodynamics, with sparse neighbor representation; 2) a discrete diffusion model with full neighbor representation; and 3) an autoregressive transformer architecture with full neighbor representation. All three models are scalable, produce complex, de novo bio-inspired spider web mimics, and successfully construct graphs that meet the design objectives. We further propose an algorithm that assembles web samples produced by the generative models into larger-scale structures based on a series of geometric design targets, including helical and parametric shapes, mimicking and extending natural design principles toward integration with diverse engineering objectives. Several webs are manufactured using 3D printing and tested to assess their mechanical properties.
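    The conditioning parameters named above (number of nodes, average node degree, average edge length) are straightforward to compute from a web graph. The sketch below, with a hypothetical `graph_descriptors` helper and a toy three-node web, is not the paper's code; it only illustrates the quantities the generative models are conditioned on:

```python
# Sketch: computing the geometric conditioning descriptors for a toy
# 3D web graph. Node coordinates and edges are illustrative examples.
import math

def graph_descriptors(coords, edges):
    """coords: {node: (x, y, z)}; edges: list of (u, v) pairs."""
    n_nodes = len(coords)
    degree = {node: 0 for node in coords}
    lengths = []
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        lengths.append(math.dist(coords[u], coords[v]))  # 3D edge length
    avg_degree = sum(degree.values()) / n_nodes
    avg_edge_length = sum(lengths) / len(lengths)
    return n_nodes, avg_degree, avg_edge_length

coords = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
edges = [(0, 1), (0, 2)]
print(graph_descriptors(coords, edges))  # → (3, 1.3333333333333333, 1.0)
```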

    Deep learning for accelerated magnetic resonance imaging

    Medical imaging has aided the biggest advances in the medical domain over the last century. While X-ray, CT, PET and ultrasound are forms of imaging that can be useful in particular scenarios, they each have disadvantages in cost, image quality, ease of use and ionising radiation. MRI is a slow imaging protocol, which contributes to its high running cost. However, MRI is a very versatile imaging protocol, allowing images of varying contrast to be easily generated while not requiring the use of ionising radiation. If MRI can be made more efficient and smart, the effective cost of running MRI may become more affordable and accessible. The focus of this thesis is decreasing the acquisition time involved in MRI while maintaining the quality of the generated images and thus of the diagnosis. In particular, we focus on data-driven deep learning approaches that aid the image reconstruction process and streamline the diagnostic process. We focus on three particular aspects of MR acquisition. Firstly, we investigate the use of motion estimation in the cine reconstruction process. Motion allows us to combine an abundance of imaging data in a learnt reconstruction model, allowing acquisitions to be sped up by up to 50 times in extreme scenarios. Secondly, we investigate the possibility of using under-acquired MR data to generate smart diagnoses in the form of automated text reports. In particular, we investigate the possibility of skipping the image reconstruction phase altogether at inference time and instead directly generating radiological text reports for diffusion-weighted brain images, in an effort to streamline the diagnostic process. Finally, we investigate the use of probabilistic modelling for MRI reconstruction without the use of fully-acquired data. In particular, we note that acquiring fully-acquired reference images in MRI can be difficult, and such references may still contain undesired artefacts that degrade the dataset and thus the training process. In this chapter, we investigate the possibility of performing reconstruction without fully-acquired references and furthermore discuss the possibility of generating higher-quality outputs than the fully-acquired references.
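    As a minimal sketch of what "under-acquired MR data" means, the toy example below (assuming single-coil Cartesian sampling; not the thesis's code) masks phase-encode lines of a simulated k-space and forms the zero-filled baseline reconstruction that learned models improve upon:

```python
# Sketch: simulate under-acquired k-space by keeping only a subset of
# phase-encode lines, then reconstruct by zero-filled inverse FFT.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for a ground-truth slice
kspace = np.fft.fft2(image)             # fully sampled k-space

mask = np.zeros(64, dtype=bool)
mask[::4] = True                        # keep every 4th phase-encode line
mask[28:36] = True                      # fully sample the low-frequency centre
undersampled = kspace * mask[:, None]   # zero out unacquired lines

zero_filled = np.abs(np.fft.ifft2(undersampled))
print(f"effective acceleration ~{64 / mask.sum():.1f}x")
```

The zero-filled image exhibits aliasing artefacts; data-driven reconstruction methods learn to remove them while staying consistent with the acquired lines.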

    Deep Learning and parallelization of Meta-heuristic Methods for IoT Cloud

    Healthcare 4.0 is one of the outcomes of the Fourth Industrial Revolution that has brought a major revolution to the medical field. Healthcare 4.0 came with facilities and advantages that have improved average life expectancy and reduced population mortality. This paradigm depends on intelligent medical devices (wearable devices, sensors), which generate massive amounts of data that need to be analysed and treated with appropriate data-driven algorithms powered by Artificial Intelligence, such as machine learning and deep learning (DL). However, one of the most significant limits of DL techniques is the long time required for the training process. Meanwhile, the real-time application of DL techniques, especially in sensitive domains such as healthcare, is still an open question that needs to be addressed. On the other hand, meta-heuristics have achieved good results in optimizing machine learning models. The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. IoT technologies are crucial in enhancing several real-life smart applications that can improve quality of life. Cloud Computing has emerged as a key enabler for IoT applications because it provides scalable, on-demand, anytime, anywhere access to computing resources. In this thesis, we are interested in improving the efficacy and performance of computer-aided diagnosis systems in the medical field by decreasing the complexity of the model and increasing the quality of the data. To accomplish this, three contributions are proposed. First, we propose a computer-aided diagnosis system for neonatal seizure detection using meta-heuristics and a convolutional neural network (CNN) model, enhancing the system's performance by optimizing the CNN model. Secondly, we focus on the COVID-19 pandemic and propose a computer-aided diagnosis system for its detection. In this contribution, we investigate the Marine Predator Algorithm to optimize the configuration of the CNN model and thereby improve the system's performance. In the third contribution, we aim to further improve the performance of the computer-aided diagnosis system for COVID-19. This contribution explores the power of optimizing the data using different AI methods such as Principal Component Analysis (PCA), the Discrete Wavelet Transform (DWT), and the Teager-Kaiser Energy Operator (TKEO). The proposed methods and the obtained results were validated through comparative studies using benchmark and public medical data.
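    Of the data-optimisation methods named, PCA is the simplest to sketch. The `pca_reduce` helper below is a hypothetical stand-in, not the thesis's implementation; it shows how features could be reduced before being fed to a classifier:

```python
# Sketch: PCA as a data-reduction front end, implemented directly
# with a numpy SVD of the mean-centred data matrix.
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top n_components principal components."""
    Xc = X - X.mean(axis=0)                           # centre the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = components
    return Xc @ Vt[:n_components].T                    # low-dimensional scores

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))    # illustrative data: 100 samples, 20 features
Z = pca_reduce(X, 5)
print(Z.shape)  # → (100, 5)
```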

    Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks

    Throughout history, the development of artificial intelligence, particularly artificial neural networks, has been open to and constantly inspired by an increasingly deep understanding of the brain, such as the inspiration for the neocognitron, the pioneering work behind convolutional neural networks. In line with the motivation of the emerging field of NeuroAI, a great amount of neuroscience knowledge can help catalyze the next generation of AI by endowing networks with more powerful capabilities. As we know, the human brain has numerous morphologically and functionally different neurons, while artificial neural networks are almost exclusively built on a single neuron type. In the human brain, neuronal diversity is an enabling factor for all kinds of biological intelligent behaviors. Since an artificial network is a miniature of the human brain, introducing neuronal diversity should be valuable in addressing essential problems of artificial networks such as efficiency, interpretability, and memory. In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron. Then, we review studies on designing new neurons for artificial networks. Next, we discuss what gains neuronal diversity can bring to artificial networks, with exemplary applications in several important fields. Lastly, we discuss the challenges and future directions of neuronal diversity in exploring the potential of NeuroAI.
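    As a concrete illustration of the "new neuron" designs such a Primer surveys, the sketch below contrasts a conventional inner-product neuron with a quadratic neuron, one alternative type studied in the literature; the weights and inputs are purely illustrative:

```python
# Sketch: a conventional neuron vs. a quadratic neuron, whose
# pre-activation is a quadratic form of the inputs, letting a single
# unit respond to input *interactions*.
import numpy as np

def linear_neuron(x, w, b):
    # standard neuron: weighted sum followed by a nonlinearity
    return np.tanh(w @ x + b)

def quadratic_neuron(x, wa, wb, wc, c):
    # quadratic neuron: product of two weighted sums plus a weighted
    # sum of squared inputs, followed by the same nonlinearity
    return np.tanh((wa @ x) * (wb @ x) + wc @ (x * x) + c)

x = np.array([0.5, -1.0, 2.0])
print(linear_neuron(x, np.ones(3), 0.0))
print(quadratic_neuron(x, np.ones(3), np.ones(3), np.ones(3), 0.0))
```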

    Automatic detection of defects in leather (Deteção automática de defeitos em couro)

    Master's dissertation in Informatics Engineering. This dissertation addresses the problem of defect detection in leather. Leather defect detection is traditionally performed manually by experienced assessors during leather inspection. However, because this task is slow and prone to human error, solutions to automate it have been pursued over the last 20 years. Several solutions capable of solving the problem efficiently have emerged, using Machine Learning and Computer Vision techniques. Nonetheless, they all require a large labelled dataset balanced across categories. This dissertation therefore aims to automate the traditional process using Machine Learning techniques without requiring a large labelled dataset. To this end, Novelty Detection techniques are explored, which solve the leather inspection task using a small, unsupervised, and unbalanced dataset. The following Novelty Detection techniques were analysed and tested: MSE Autoencoder, SSIM Autoencoder, CFLOW, STFPM, Reverse, and DRAEM. These techniques were trained and tested on two distinct datasets: MVTEC and Neadvance. The analysed techniques detect and localise most MVTEC defects, but have difficulty detecting defects in the Neadvance samples. Based on the results obtained, the best methodology is proposed for three distinct scenarios. When the available computational power is low, SSIM Autoencoder should be the technique used. When there is enough computational power and the samples to inspect are of a single colour, DRAEM should be chosen. In any other case, STFPM should be the chosen option.
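    The MSE Autoencoder approach named above reduces to a simple novelty score: reconstruction error. In the sketch below a PCA projection stands in for a trained autoencoder; the data, bottleneck size, and injected anomaly are all illustrative, not from the dissertation:

```python
# Sketch: reconstruction-MSE novelty scoring. A model fitted only on
# normal samples reconstructs them well, so a high reconstruction MSE
# flags a defect. A linear (PCA) "autoencoder" keeps the sketch tiny.
import numpy as np

rng = np.random.default_rng(0)
# 200 "normal" samples: a fixed profile plus small noise
normal = rng.normal(0.0, 0.1, size=(200, 16)) + np.linspace(0, 1, 16)

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:4]                       # 4-dim "bottleneck"

def mse_score(x):
    # encode into the bottleneck, decode back, measure the residual
    recon = mean + (x - mean) @ components.T @ components
    return float(np.mean((x - recon) ** 2))

defect = normal[0].copy()
defect[2] += 3.0                          # inject a localised anomaly
print(mse_score(normal[0]), mse_score(defect))
```

A threshold on this score, calibrated on held-out normal samples, separates defective from defect-free items.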

    Modelling Non-Equilibrium Molecular Formation and Dissociation for the Spectroscopic Analysis of Cool Stellar Atmospheres

    Modelling techniques for stellar atmospheres are undergoing continuous improvement. In this thesis, I showcase how these methods are used for spectroscopic analysis and for modelling time-dependent molecular formation and dissociation. I first use CO5BOLD model atmospheres with the LINFOR3D spectrum synthesis code to determine a photospheric solar silicon abundance of 7.57 ± 0.04. This work also revealed some issues present in the cutting-edge methods, such as synthesised lines being overly broadened. Next, I constructed a chemical reaction network to model the time-dependent evolution of molecular species in (carbon-enhanced) metal-poor dwarf and red giant atmospheres, again using CO5BOLD. This was to test whether the assumption of chemical equilibrium, widely adopted in spectroscopic studies, remains valid in the photospheres of metal-poor stars. Indeed, the mean deviations from chemical equilibrium are below 0.2 dex across the spectroscopically relevant regions of the atmosphere, though deviations increase with height. Finally, I implemented machine learning methods to remove noise and line blends from spectra, as well as to predict the equilibrium state of a chemical reaction network. The methods used and developed in this thesis illustrate the importance of both conventional and machine learning modelling techniques, and merge them to further improve accuracy, precision, and efficiency.
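    The chemical-equilibrium assumption being tested can be illustrated with a toy reaction network. The sketch below (rate constants and abundances are arbitrary, not from the thesis) integrates A + B ⇌ AB forward in time until the net rate vanishes:

```python
# Sketch: time-dependent relaxation of a one-reaction network toward
# chemical equilibrium, integrated with forward Euler.
kf, kr = 1.0, 0.2          # forward / reverse rate constants (arbitrary units)
a, b, ab = 1.0, 0.8, 0.0   # initial number densities (arbitrary units)

dt = 1e-3
for _ in range(200_000):   # integrate to t = 200 (well past relaxation)
    rate = kf * a * b - kr * ab   # net formation rate of AB
    a  -= rate * dt
    b  -= rate * dt
    ab += rate * dt

# at chemical equilibrium the net rate vanishes: kf*a*b ≈ kr*ab
print(a, b, ab, kf * a * b - kr * ab)
```

In a stellar photosphere the question is whether dynamical timescales (convection, pulsation) are long enough for this relaxation to complete; where they are not, abundances deviate from the equilibrium solution.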

    Decoding Neural Signals with Computational Models: A Systematic Review of Invasive BMI

    There are significant milestones in modern human civilization at which mankind stepped into a different level of life, with a new spectrum of possibilities and comfort. From fire-lighting technology and wheeled wagons to writing, electricity and the Internet, each one changed our lives dramatically. In this paper, we take a deep look into the invasive Brain Machine Interface (BMI), an ambitious and cutting-edge technology which has the potential to be another important milestone in human civilization. Not only beneficial for patients with severe medical conditions, invasive BMI technology can significantly impact other technologies and almost every aspect of human life. We review the biological and engineering concepts that underpin the implementation of BMI applications. There are various essential techniques that are necessary for making invasive BMI applications a reality. We review these by providing an analysis of (i) possible applications of invasive BMI technology, (ii) the methods and devices for detecting and decoding brain signals, and (iii) possible options for stimulating signals into the human brain. Finally, we discuss the challenges and opportunities of invasive BMI for further development in the area. Comment: 51 pages, 14 figures, review article.
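    Among the decoding methods such a review covers, the simplest baseline is a linear least-squares decoder mapping firing rates to a movement variable. The sketch below uses synthetic data and is purely illustrative of the technique, not of any system in the review:

```python
# Sketch: fit a linear decoder from simulated population firing rates
# to a 2-D movement variable by ordinary least squares.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_units = 500, 30
true_W = rng.normal(size=(n_units, 2))                 # unknown neural tuning
rates = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
velocity = rates @ true_W + rng.normal(0, 0.5, size=(n_trials, 2))

W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)  # fit the decoder
pred = rates @ W_hat
err = np.mean((pred - velocity) ** 2)
print(f"decoder MSE: {err:.3f}")
```

Real invasive BMIs build on this idea with recursive estimators and nonlinear models, but the rates-in, kinematics-out structure is the same.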