
    Induction of apoptosis and inhibition of cell growth by tbx5 knockdown contribute to dysmorphogenesis in Zebrafish embryos

    Get PDF
    Abstract
    Background: The tbx5 mutation in humans causes Holt-Oram syndrome, an autosomal dominant condition characterized by a familial history of congenital heart defects and preaxial radial upper-limb defects. We report that aberrant apoptosis and arrested cell growth in the head, heart, trunk, fin, and tail of tbx5-deficient zebrafish embryos correspond to the dysmorphogenesis of tbx5 morphants.
    Methods: Wild-type zebrafish embryos at the 1-cell stage were injected with 4.3 nl containing 19.4 ng of tbx5 morpholino (tbx5 morphants) or mismatch-tbx5-MO (mismatched control group). Semi-quantitative RT-PCR was used for expression analysis of apoptosis- and cell cycle-related genes. TUNEL and immunohistochemical assays localized apoptotic cells within tissues, and the ultrastructure of the cardiac myocardium was examined by transmission electron microscopy.
    Results: Apoptosis-related genes (bad, bax, and bcl2) and cell cycle-related genes (cdk2, pcna, p27, and p57) showed marked increases at the transcriptional level by RT-PCR. Using TUNEL and immunohistochemical assays, apoptosis was observed in organs including the head, heart, pectoral fins, trunk, and tail of tbx5 knockdown embryos. Under transmission electron microscopy, mitochondria in cardiomyocytes were swollen and the myocardium was largely disorganized, with a disarrayed appearance compatible with reduced myosin in the cardiac wall. The ATP level was reduced, and the ADP/ATP ratio, an apoptotic index, was significantly increased in tbx5-deficient embryos.
    Conclusion: Our study highlights that tbx5 deficiency evokes apoptosis across multiple organs, corresponding to dysmorphogenesis and impaired maturation in tbx5 knockdown zebrafish embryos. We hypothesize that mesenchymal cell apoptosis associated with altered TBX5 levels may subsequently interfere with organogenesis and contribute to dysmorphogenesis in tbx5-deficient zebrafish embryos.

    Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

    Full text link
    Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics, and the non-ideal peripheral circuitry in AIMC chips, require adapting DNNs to be deployed on such hardware to achieve equivalent accuracy to digital computing. In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. The AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design, functionality, and best practices to properly perform inference and training. We also present an overview of the Analog AI Cloud Composer, which provides the benefits of using the AIHWKit simulation platform in a fully managed cloud setting. Finally, we show examples of how users can expand and customize AIHWKit for their own needs. This tutorial is accompanied by comprehensive Jupyter Notebook code examples that can be run using AIHWKit, which can be downloaded from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial
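    As a rough illustration of the library's core abstraction (a minimal sketch, not taken from the tutorial itself): the snippet below trains a single analog fully connected layer with AIHWKit. The layer sizes, learning rate, toy data, and choice of SingleRPUConfig are arbitrary assumptions for illustration only.

        # Minimal sketch: train one analog layer with AIHWKit (assumes aihwkit and torch installed).
        # Sizes, hyperparameters, and data are arbitrary placeholders, not from the tutorial.
        import torch
        from aihwkit.nn import AnalogLinear
        from aihwkit.optim import AnalogSGD
        from aihwkit.simulator.configs import SingleRPUConfig

        # An analog fully connected layer whose weights live on a simulated crossbar tile.
        model = AnalogLinear(4, 2, bias=True, rpu_config=SingleRPUConfig())

        # Toy data: 4-dimensional inputs, 2-dimensional targets.
        x = torch.rand(8, 4)
        y = torch.rand(8, 2)

        # AnalogSGD routes weight updates through the simulated device update rules.
        optimizer = AnalogSGD(model.parameters(), lr=0.1)
        optimizer.regroup_param_groups(model)

        criterion = torch.nn.MSELoss()
        for epoch in range(10):
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()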

    Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

    Full text link
    Analog in-memory computing (AIMC) -- a promising approach for energy-efficient acceleration of deep learning workloads -- computes matrix-vector multiplications (MVMs) but only approximately, due to nonidealities that often are non-deterministic or nonlinear. This can adversely impact the achievable deep neural network (DNN) inference accuracy as compared to a conventional floating point (FP) implementation. While retraining has previously been suggested to improve robustness, prior work has explored only a few DNN topologies, using disparate and overly simplified AIMC hardware models. Here, we use hardware-aware (HWA) training to systematically examine the accuracy of AIMC for multiple common artificial intelligence (AI) workloads across multiple DNN topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a new and highly realistic AIMC crossbar model, we improve significantly on earlier retraining approaches. We show that many large-scale DNNs of various topologies, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, can in fact be successfully retrained to show iso-accuracy on AIMC. Our results further suggest that AIMC nonidealities that add noise to the inputs or outputs, not the weights, have the largest impact on DNN accuracy, and that RNNs are particularly robust to all nonidealities.
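    The core idea of hardware-aware retraining can be pictured, in highly simplified form, as perturbing the forward pass during training so the network learns weights that stay accurate under such perturbations. The sketch below injects Gaussian noise into a linear layer's output; it does not reproduce the paper's realistic crossbar model, and the noise placement and scale are arbitrary placeholders.

        # Highly simplified illustration of hardware-aware (noise-injection) training.
        # This is NOT the paper's AIMC crossbar model; the noise location and scale are
        # arbitrary assumptions chosen only to show the training-time injection pattern.
        import torch
        import torch.nn as nn

        class NoisyLinear(nn.Linear):
            """Linear layer that adds output noise during training only."""

            def __init__(self, in_features, out_features, out_noise=0.05):
                super().__init__(in_features, out_features)
                self.out_noise = out_noise

            def forward(self, x):
                y = super().forward(x)
                if self.training and self.out_noise > 0:
                    # Emulate non-deterministic MVM output noise seen on analog hardware.
                    y = y + self.out_noise * torch.randn_like(y)
                return y

        # Drop-in use inside an otherwise standard PyTorch model and training loop.
        layer = NoisyLinear(128, 64)
        out = layer(torch.randn(32, 128))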

    Impact of analog memory device failure on in-memory computing inference accuracy

    No full text
    In-memory computing using analog non-volatile memory (NVM) devices can improve the speed and reduce the latency of deep neural network (DNN) inference. It has been recently shown that neuromorphic crossbar arrays, where each weight is implemented using analog conductance values of phase-change memory devices, achieve competitive accuracy and high power efficiency. However, due to the large number of NVM devices needed and the difficulty of fabricating analog NVM devices, these chips typically include some failed devices, whether present from fabrication or developing over time. We study the impact of these failed devices on analog in-memory computing accuracy for various networks. We show that larger networks with fewer reused layers are more tolerant of failed devices. Devices stuck at high resistance states are better tolerated than devices stuck at low resistance states. To improve the robustness of DNNs to defective devices, we develop training methods that add noise to, and corrupt devices in, the weight matrices during network training, and we show that this can increase network accuracy in the presence of failed devices. We also provide estimates of the maximum defective-device tolerance of some common networks.
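    One way to picture the defect-injection idea (a minimal sketch, not the paper's actual method or device statistics): on each training forward pass, force a random fraction of weights to a fixed "stuck" value so the network learns not to rely on any single device. The failure fraction and stuck value below are arbitrary assumptions.

        # Illustrative sketch of training with randomly "stuck" weights (failed devices).
        # The failure fraction and stuck value are assumptions; this does not reproduce
        # the paper's method or its phase-change-memory failure statistics.
        import torch
        import torch.nn as nn

        def apply_stuck_devices(weight: torch.Tensor, fail_frac: float = 0.01,
                                stuck_value: float = 0.0) -> torch.Tensor:
            """Return a copy of `weight` with a random fraction forced to `stuck_value`."""
            mask = torch.rand_like(weight) < fail_frac
            return torch.where(mask, torch.full_like(weight, stuck_value), weight)

        class DefectAwareLinear(nn.Linear):
            def forward(self, x):
                if self.training:
                    # Corrupt a fresh random subset of weights on every forward pass.
                    w = apply_stuck_devices(self.weight)
                    return nn.functional.linear(x, w, self.bias)
                return super().forward(x)

        layer = DefectAwareLinear(256, 10)
        logits = layer(torch.randn(16, 256))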

    Pectoral Fin Anomalies in tbx5a Knockdown Zebrafish Embryos Related to the Cascade Effect of N-Cadherin and Extracellular Matrix Formation

    No full text
    Functional knockdown of zebrafish tbx5a causes hypoplasia or aplasia of pectoral fins. This study aimed to assess developmental pectoral fin anomalies in tbx5a morpholino knockdown zebrafish embryos. The expression of cartilage-related genes in the tbx5a morphant was analyzed by DNA microarray, immunostaining, and thin-section histology to examine the detailed distribution of the extracellular matrix (ECM) during different pectoral fin developmental stages. Chondrogenic condensation (CC) in the tbx5a morpholino knockdown group was barely recognizable at 37 h postfertilization (hpf); the process from CC to endoskeleton formation was disrupted at 48 hpf, and the endoskeleton was only loosely formed at 72 hpf. Microarrays identified 18 downregulated genes in tbx5a-deficient embryos at 24 and 30 hpf, including 2 fin morphogenesis-related (cx43, bbs7), 4 fin development-related (hoxc8a, hhip, axin1, msxb), and 12 cartilage development-related (mmp14a, sec23b, tfap2a, slc35b2, dlx5a, dlx1a, tfap2b, fmr1, runx3, cdh2, lect1, acvr2a, mmp14b) genes. The increase in apoptosis-related proteins (BAD and BCL2) in the tbx5a morphant influenced the cellular component of pectoral fins and resulted in chondrocyte reduction throughout the different CC phases. Furthermore, tbx5a knockdown interfered with ECM formation in pectoral fins, affecting glycosaminoglycans, fibronectin, hyaluronic acid (HA), and N-cadherin. Our results provide evidence that the pectoral fin phenotypic anomaly induced by tbx5a knockdown is related to disruption of the mesoderm and ECM, consequently interfering with mesoderm migration, CC, and subsequent endoskeleton formation.

    Perspective on training fully connected networks with resistive memories: Device requirements for multiple conductances of varying significance

    No full text
    Novel Deep Neural Network (DNN) accelerators based on crossbar arrays of non-volatile memories (NVMs)-such as Phase-Change Memory or Resistive Memory-can implement multiply-accumulate operations in a highly parallelized fashion. In such systems, computation occurs in the analog domain at the location of weight data encoded into the conductances of the NVM devices. This allows DNN training of fully-connected layers to be performed faster and with less energy. Using a mixed-hardware-software experiment, we recently showed that by encoding each weight into four distinct physical devices-a "Most Significant Conductance" pair (MSP) and a "Least Significant Conductance" pair (LSP)-we can train DNNs to software-equivalent accuracy despite the imperfections of real analog memory devices. We surmised that, by dividing the task of updating and maintaining weight values between the two conductance pairs, this approach should significantly relax the otherwise quite stringent device requirements. In this paper, we quantify these relaxed requirements for analog memory devices exhibiting a saturating conductance response, assuming either an immediate or a delayed steep initial slope in conductance change. We discuss requirements on the LSP imposed by the "Open Loop Tuning" performed after each training example and on the MSP due to the "Closed Loop Tuning" performed periodically for weight transfer between the conductance pairs. Using simulations to evaluate the final generalization accuracy of a trained four-neuron-layer fully-connected network, we quantify the required dynamic range (as controlled by the size of the steep initial jump), the tolerable device-to-device variability in both maximum conductance and maximum conductance change, the tolerable pulse-to-pulse variability in conductance change, and the tolerable device yield, for both the LSP and MSP devices. We also investigate various Closed Loop Tuning strategies and describe the impact of the MSP/LSP approach on device endurance. Published by AIP Publishing.
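    To make the conductance-pair arithmetic concrete, the sketch below decodes a weight from a most significant pair and a least significant pair using a significance factor F, and shows a naive version of folding the LSP contribution into the MSP. The factor value and the transfer rule are illustrative assumptions, not the paper's Open or Closed Loop Tuning procedures.

        # Illustrative decoding of a weight stored as two conductance pairs of
        # different significance, plus a naive "transfer" step. The significance
        # factor F and the transfer rule are assumptions for illustration only.
        F = 4.0  # assumed significance factor of the MSP relative to the LSP

        def decode_weight(g_plus_msp, g_minus_msp, g_plus_lsp, g_minus_lsp):
            """Weight = F * (G+ - G-)_MSP + (g+ - g-)_LSP."""
            return F * (g_plus_msp - g_minus_msp) + (g_plus_lsp - g_minus_lsp)

        def transfer_lsp_to_msp(g_plus_msp, g_minus_msp, g_plus_lsp, g_minus_lsp):
            """Fold the LSP contribution into the MSP and reset the LSP pair."""
            delta = (g_plus_lsp - g_minus_lsp) / F
            # A real chip would program conductances with pulses under closed-loop
            # control; here we simply shift one MSP device and zero the LSP pair.
            return g_plus_msp + delta, g_minus_msp, 0.0, 0.0

        w = decode_weight(0.6, 0.2, 0.05, 0.01)   # -> 4 * 0.4 + 0.04 = 1.64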

    Using the IBM analog in-memory hardware acceleration kit for neural network training and inference

    No full text
    Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry in AIMC chips require adapting DNNs to be deployed on such hardware to achieve equivalent accuracy to digital computing. In this Tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design, functionality, and best practices to properly perform inference and training. We also present an overview of the Analog AI Cloud Composer, a platform that provides the benefits of using the AIHWKit simulation in a fully managed cloud setting along with physical AIMC hardware access, freely available at https://aihw-composer.draco.res.ibm.com. Finally, we show examples of how users can expand and customize AIHWKit for their own needs. This Tutorial is accompanied by comprehensive Jupyter Notebook code examples that can be run using AIHWKit, which can be downloaded from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial
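    As a complementary minimal sketch (assuming a recent aihwkit release; the toy model and default InferenceRPUConfig settings are illustrative assumptions, not taken from the Tutorial), an existing digital PyTorch model can be converted to its analog counterpart for simulated AIMC inference:

        # Minimal sketch: convert a digital PyTorch model to analog layers for
        # AIMC inference simulation. Layer sizes and input data are arbitrary.
        import torch
        from torch import nn
        from aihwkit.nn.conversion import convert_to_analog
        from aihwkit.simulator.configs import InferenceRPUConfig

        digital_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

        # Replace the Linear layers with analog equivalents backed by an
        # inference-oriented simulated tile configuration (defaults kept as-is).
        analog_model = convert_to_analog(digital_model, InferenceRPUConfig())

        analog_model.eval()
        with torch.no_grad():
            out = analog_model(torch.randn(2, 16))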