10 research outputs found

    Artificial Intelligence in Material Engineering: A review on applications of AI in Material Engineering

    Full text link
    Recently, artificial intelligence (AI) has been used extensively in the field of material engineering. This can be attributed to the development of high-performance computing, which has made it feasible to train deep learning models with large numbers of parameters. In this article we review some of the latest developments in the applications of AI in material engineering.

    ADIC: Anomaly Detection Integrated Circuit in 65nm CMOS utilizing Approximate Computing

    Full text link
    In this paper, we present a low-power anomaly detection integrated circuit (ADIC) based on a one-class classifier (OCC) neural network. The ADIC achieves low-power operation through a combination of (a) careful choice of algorithm for online learning and (b) approximate computing techniques to lower average energy. In particular, the online pseudoinverse update method (OPIUM) is used to train a randomized neural network for quick and resource-efficient learning. An additional 42% energy saving can be achieved when a lighter version of OPIUM is used for training with the same number of data samples, with no significant compromise in the quality of inference. Instead of a single classifier with a large number of neurons, an ensemble of K base learners is used, reducing learning memory by a factor of K. This also enables approximate computing by dynamically varying the neural network size based on the anomaly detection output. Fabricated in 65nm CMOS, the ADIC has K = 7 base learners (BLs) with 32 neurons in each BL and dissipates 11.87 pJ/OP and 3.35 pJ/OP during learning and inference, respectively, at Vdd = 0.75 V when all 7 BLs are enabled. Further, evaluated on the NASA bearing dataset, approximately 80% of the chip can be shut down for 99% of the lifetime, leading to an energy efficiency of 0.48 pJ/OP, an 18.5× reduction over full-precision computing running at Vdd = 1.2 V throughout the lifetime.
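    As a rough illustration of the two ideas above (online pseudoinverse-style learning and an ensemble of small base learners that can be partially shut down), here is a minimal Python sketch. It uses a generic recursive least-squares update in the spirit of OPIUM rather than the exact OPIUM equations, and all sizes, thresholds, and the early-stopping rule are illustrative assumptions, not the fabricated design.

        import numpy as np

        class BaseLearner:
            """One small randomized network; only the output weights are trained."""
            def __init__(self, n_in, n_hidden, rng):
                self.W = rng.standard_normal((n_hidden, n_in))  # fixed random input weights
                self.b = rng.standard_normal(n_hidden)
                self.beta = np.zeros((n_in, n_hidden))          # trainable output weights
                self.P = np.eye(n_hidden) * 1e3                 # inverse correlation matrix

            def _hidden(self, x):
                return np.tanh(self.W @ x + self.b)

            def update(self, x):
                # One recursive least-squares step: learn to reconstruct the input,
                # so reconstruction error can serve as a one-class anomaly score.
                h = self._hidden(x)
                Ph = self.P @ h
                k = Ph / (1.0 + h @ Ph)                         # gain vector
                self.beta += np.outer(x - self.beta @ h, k)
                self.P -= np.outer(k, Ph)

            def score(self, x):
                return float(np.linalg.norm(x - self.beta @ self._hidden(x)))

        class Ensemble:
            def __init__(self, K=7, n_in=8, n_hidden=32, seed=0):
                rng = np.random.default_rng(seed)
                self.bls = [BaseLearner(n_in, n_hidden, rng) for _ in range(K)]

            def update(self, x):
                for bl in self.bls:
                    bl.update(x)

            def score(self, x, threshold):
                # Evaluate base learners one at a time and stop early while the
                # sample still looks clearly healthy -- the software analogue of
                # shutting down most of the chip.
                total = 0.0
                for i, bl in enumerate(self.bls, start=1):
                    total += bl.score(x)
                    if total / i < 0.5 * threshold:             # confidently healthy
                        return total / i, i                     # used i of K learners
                return total / len(self.bls), len(self.bls)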

    Interpretable machine learning model to predict survival days of malignant brain tumor patients

    No full text
    An artificial intelligence (AI) model’s performance is strongly influenced by its input features, so it is vital to find the optimal feature set. This is all the more crucial for survival prediction in the glioblastoma multiforme (GBM) type of brain tumor. In this study, we identify the best feature set for predicting the survival days (SD) of GBM patients, outranking the current state-of-the-art methodologies. The proposed approach is an end-to-end AI model. The model first segments tumors from healthy brain tissue in patients’ MRI images, extracts features from the segmented results, performs feature selection, and predicts patients’ SD from the selected features. The extracted features are primarily shape-based, location-based, and radiomics-based; patient metadata is also included as a feature. The selection methods include recursive feature elimination, permutation importance (PI), and correlation analysis between features. Finally, we examine feature behavior at the local (single-sample) and global (all-samples) levels. We find that, out of 1265 extracted features, only 29 dominant features play a crucial role in predicting patients’ SD. Among these 29 features, one is metadata (patient age), three are location-based, and the rest are radiomics features. Furthermore, we derive explanations of these features using post-hoc interpretability methods to validate the model’s predictions and understand its decisions. Finally, we analyze the behavioral impact of the top six features on survival prediction; the findings drawn from the explanations are coherent with medical domain knowledge. We find that after the age of 50 years a patient’s likelihood of survival deteriorates, and survival beyond 80 years is rare. Likewise, for the location-based features, SD is lower when the tumor lies in the central or posterior part of the brain. All these trends derived from the AI model are consistent with established medical findings. The results show an overall 33% improvement in the accuracy of SD prediction compared to the top-performing methods of the BraTS-2020 challenge.
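    To make the selection step above concrete, here is a minimal Python sketch of a recursive-feature-elimination plus permutation-importance pipeline using scikit-learn. The random matrix stands in for the 1265 extracted features, and the model choice, sizes, and parameters are illustrative assumptions rather than the study's actual configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import RFE
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 1265))        # placeholder feature matrix
        y = rng.uniform(10, 1500, size=200)         # placeholder survival days

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Step 1: prune the feature set with recursive feature elimination.
        rfe = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
                  n_features_to_select=29, step=0.1)
        rfe.fit(X_tr, y_tr)
        selected = np.flatnonzero(rfe.support_)

        # Step 2: rank the surviving features by permutation importance, i.e.
        # how much shuffling each one degrades held-out performance.
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr[:, selected], y_tr)
        pi = permutation_importance(model, X_te[:, selected], y_te,
                                    n_repeats=10, random_state=0)
        ranking = selected[np.argsort(pi.importances_mean)[::-1]]
        print("top features by permutation importance:", ranking[:6])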

    Live demonstration : autoencoder-based predictive maintenance for IoT

    No full text
    This live demo aims to show the performance of a two-layer neural network applied to predictive maintenance. The first layer encodes features based on prior knowledge, while the second layer is trained online to detect anomalies. The system is implemented on an FPGA, acquiring real-time data from sensors attached to a motor. Faults can be triggered artificially in real time to demonstrate anomaly detection. NRF (National Research Foundation, Singapore). Accepted version.
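    The two-layer split described above (a fixed, knowledge-based feature layer feeding a small online-trained detector) can be sketched in a few lines of Python. This is a software stand-in, not the FPGA design: the FFT band-energy encoder, learning rates, threshold, and simulated fault are all illustrative assumptions.

        import numpy as np

        def encode(frame, n_bands=8):
            # Layer 1 (fixed): log energy in n_bands frequency bands of one frame,
            # standing in for features chosen from prior knowledge of the motor.
            spec = np.abs(np.fft.rfft(frame)) ** 2
            return np.log1p(np.array([b.sum() for b in np.array_split(spec, n_bands)]))

        mu, var, THRESHOLD = None, None, 3.0        # z-score threshold (assumed)
        rng = np.random.default_rng(0)
        for t in range(2000):
            amp = 1.0 if t < 1500 else 1.8          # simulated fault injected at t = 1500
            f = encode(rng.standard_normal(256) * amp)
            if mu is None:
                mu, var = f.copy(), np.ones_like(f)
                continue
            score = float(np.mean(np.abs(f - mu) / np.sqrt(var)))
            if score > THRESHOLD:
                print(f"t={t}: anomaly flagged (score={score:.2f})")
                break
            # Layer 2 (online): update running statistics only on data that still
            # looks healthy, so the detector tracks slow drift but flags jumps.
            mu += 0.01 * (f - mu)
            var += 0.01 * ((f - mu) ** 2 - var)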

    A stacked autoencoder neural network based automated feature extraction method for anomaly detection in on-line condition monitoring

    No full text
    Condition monitoring is one of the routine tasks in all major process industries. Mechanical parts such as motors, gears, and bearings are major components of a process plant, and any fault in them may cause a total shutdown of the whole process, resulting in serious losses. It is therefore crucial to predict approaching defects before they occur. Several methods exist for this purpose, and much research is being carried out toward better and more efficient models. However, most of them are based on processing raw sensor signals, which is tedious and expensive. Recently, there has been an increase in feature-based condition monitoring, where only the useful features are extracted from the raw signals and interpreted to predict the fault. Most of these are handcrafted features, obtained manually based on the nature of the raw data. This of course requires prior knowledge of the data and the related processes, which limits the feature extraction process. Recent developments in autoencoder-based feature extraction provide an alternative to the traditional handcrafted approaches; however, they have mostly been confined to image and audio processing. In this work, we have developed an automated feature extraction method for on-line condition monitoring based on a stack of traditional autoencoders and an on-line sequential extreme learning machine (OSELM) network. The performance of this method is comparable to that of the traditional feature extraction approaches, and it achieves 100% detection accuracy in determining the bearing health states of the NASA bearing dataset. The simple design of this method is promising for easy hardware implementation of Internet of Things (IoT) based prognostics solutions. NRF (National Research Foundation, Singapore). Accepted version.
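    The stacked-autoencoder part of the method can be illustrated with a short greedy layer-wise training sketch in Python (PyTorch is used here purely for illustration; the layer sizes, epochs, and synthetic data are assumptions, not the paper's configuration).

        import torch, torch.nn as nn

        def train_layer(data, n_hidden, epochs=100, lr=1e-3):
            # Train one autoencoder layer to reconstruct its input, then keep
            # only the encoder half as a fixed feature map for the next layer.
            enc = nn.Sequential(nn.Linear(data.shape[1], n_hidden), nn.Tanh())
            dec = nn.Linear(n_hidden, data.shape[1])
            opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(dec(enc(data)), data)
                loss.backward(); opt.step()
            with torch.no_grad():
                return enc, enc(data)

        x = torch.randn(512, 64)             # placeholder vibration feature frames
        enc1, h1 = train_layer(x, 32)        # first autoencoder layer
        enc2, h2 = train_layer(h1, 8)        # second layer, stacked on layer-1 codes
        features = h2                        # compact learned features, which would
                                             # feed an OSELM classifier downstream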

    Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Get PDF
    Lens-free imaging technology has been used extensively in recent years for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects, and a global threshold was used to locate the diffraction patterns. In this work, we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive thresholding and clustering of signals. For this purpose, images from the lens-free system were used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system, and all the lens-free images were then processed with it. The performance of this approach was evaluated by comparing its counting results with standard optical microscope results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and a linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, has great potential for telemedicine applications in resource-limited settings.
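    The detection step described above (adaptive thresholding followed by clustering of thresholded pixels into per-object diffraction patterns) might look roughly like the following Python/OpenCV sketch. The file name, block size, and area gates are illustrative assumptions, not the paper's parameters.

        import cv2
        import numpy as np

        img = cv2.imread("lensfree_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

        # Adaptive threshold: compare each pixel with its local neighborhood mean,
        # which is robust to the uneven illumination a global threshold misses.
        mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY_INV, blockSize=51, C=5)

        # Cluster thresholded pixels into connected components and keep those
        # whose area is plausible for a single diffraction pattern.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        MIN_AREA, MAX_AREA = 40, 4000             # assumed size gate, in pixels
        objs = [(centroids[i], stats[i, cv2.CC_STAT_AREA])
                for i in range(1, n)              # label 0 is the background
                if MIN_AREA <= stats[i, cv2.CC_STAT_AREA] <= MAX_AREA]
        print(f"detected {len(objs)} candidate micro-objects")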

    Machine Learning Based Lens-Free Shadow Imaging Technique for Field-Portable Cytometry

    No full text
    The lens-free shadow imaging technique (LSIT) is a well-established technique for the characterization of microparticles and biological cells. Owing to its simplicity and cost-effectiveness, various low-cost solutions have been developed, such as automatic analysis of complete blood count (CBC), cell viability, 2D cell morphology, and 3D cell tomography. The auto-characterization algorithms developed so far for this custom-built LSIT cytometer were based on handcrafted features of the cell diffraction patterns, determined from our empirical findings on thousands of samples of individual cell types; this limits the system when a new cell type must be added for automatic classification or characterization. Furthermore, its performance suffers on poor image (cell diffraction pattern) signatures caused by weak signal or background noise. In this work, we address these issues by leveraging an artificial-intelligence-powered signal-enhancement scheme, a denoising autoencoder, together with an adaptive cell-characterization technique based on transfer learning in deep neural networks. Our proposed method achieves an accuracy above 98% along with signal enhancement of more than 5 dB for most cell types, such as red blood cells (RBC) and white blood cells (WBC). Furthermore, the model adapts to new sample types within a few learning iterations and successfully classifies a newly introduced sample type alongside the existing ones.
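    The denoising-autoencoder idea (recovering a clean diffraction-pattern patch from a noise-corrupted copy) is easy to sketch; the snippet below is a minimal PyTorch illustration in which the patch size, noise level, and architecture are all assumptions rather than the paper's network.

        import torch, torch.nn as nn

        model = nn.Sequential(                  # tiny convolutional denoiser
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        clean = torch.rand(256, 1, 32, 32)      # placeholder pattern patches
        for _ in range(200):
            noisy = clean + 0.2 * torch.randn_like(clean)       # corrupt the input...
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(noisy), clean)  # ...reconstruct clean
            loss.backward(); opt.step()
        # The denoised patches can then feed a classifier fine-tuned by transfer
        # learning, as the abstract describes.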

    ADEPOS : a novel approximate computing framework for anomaly detection systems and its implementation in 65-nm CMOS

    No full text
    To overcome the energy and bandwidth limitations of traditional IoT systems, 'edge computing', i.e. information extraction at the sensor node, has become popular. It is now important, however, to create very low-energy information extraction or pattern recognition systems. In this paper, we present an approximate computing method to reduce the computation energy of a specific type of IoT system used for anomaly detection (e.g. in predictive maintenance, epileptic seizure detection, etc.). Termed Anomaly Detection Based Power Savings (ADEPOS), our proposed method uses low-precision computing and low-complexity neural networks at the beginning, when healthy data are easy to distinguish. On the detection of anomalies, however, the complexity of the network and the computing precision are adaptively increased for accurate predictions. We show that ensemble approaches are well suited to adaptively changing the network size. To validate the proposed scheme, a chip has been fabricated in a UMC 65nm process that includes an MSP430 microprocessor along with an on-chip switching-mode DC-DC converter for dynamic voltage and frequency scaling. Using the NASA bearing dataset for machine health monitoring, we show that ADEPOS achieves an 8.95× energy saving over the lifetime without losing any detection accuracy. The energy savings are obtained by reducing the execution time of the neural network on the microprocessor. National Research Foundation (NRF). This work was supported in part by Delta Electronics, Inc., and in part by the National Research Foundation Singapore under the Corp Lab@University scheme.
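    The core ADEPOS idea (spend little energy while the machine looks healthy, escalate only near the decision boundary) can be caricatured in a few lines of Python. The toy model, quantization step, threshold, and guard band below are illustrative assumptions, not the fabricated system.

        import numpy as np

        rng = np.random.default_rng(1)
        W = rng.standard_normal((32, 8))        # toy one-class model weights
        W_q = np.round(W * 8) / 8               # quantized copy: 3 fractional bits

        def score(x, weights):
            # Stand-in anomaly score: magnitude of the hidden-layer response.
            return float(np.linalg.norm(np.tanh(weights @ x)))

        THRESHOLD, BAND = 4.0, 0.5              # decision threshold and guard band

        def detect(x):
            s = score(x, W_q)                   # cheap low-precision pass first
            if abs(s - THRESHOLD) > BAND:       # far from the boundary: trust it
                return s > THRESHOLD, "low-precision"
            # Near the boundary: escalate to full precision for an accurate call.
            return score(x, W) > THRESHOLD, "full-precision"

        flag, path = detect(rng.standard_normal(8))
        print(f"anomaly={flag}, decided at {path}")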