
    A^2-Net: Molecular Structure Estimation from Cryo-EM Density Volumes

    Full text link
    Constructing molecular structural models from Cryo-Electron Microscopy (Cryo-EM) density volumes is the critical last step of structure determination by Cryo-EM technologies. Methods have evolved from manual construction by structural biologists to automated approaches that perform 6D translation-rotation searches, which are extremely compute-intensive. In this paper, we propose a learning-based method and formulate this problem as a vision-inspired 3D detection and pose estimation task. We develop a deep learning framework for amino acid determination in a 3D Cryo-EM density volume. We also design a sequence-guided Monte Carlo Tree Search (MCTS) to thread over the candidate amino acids to form the molecular structure. This framework achieves 91% coverage on our newly proposed dataset and takes only a few minutes for a typical structure with a thousand amino acids. Our method is hundreds of times faster and several times more accurate than existing automated solutions, without any human intervention.
    Comment: 8 pages, 5 figures, 4 tables
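
    The sequence-guided search described above can be illustrated with a toy Monte Carlo Tree Search that threads a known amino-acid sequence through candidate detections. Everything here is an illustrative assumption, not the paper's implementation: the candidate pools, the confidence-plus-type reward, and the 3.8 Å spacing heuristic for consecutive residues.

```python
import math
import random

# Toy problem: thread a known amino-acid sequence through candidate
# detections. Each candidate is ((x, y, z), predicted_type, confidence).
SEQUENCE = ["ALA", "GLY", "SER", "ALA"]
CANDIDATES = [  # hypothetical detector output, one pool per residue index
    [((0, 0, 0), "ALA", 0.9), ((5, 5, 5), "GLY", 0.4)],
    [((1, 0, 0), "GLY", 0.8), ((9, 9, 9), "GLY", 0.3)],
    [((2, 0, 0), "SER", 0.7), ((2, 1, 0), "ALA", 0.6)],
    [((3, 0, 0), "ALA", 0.9)],
]

def step_reward(prev_pos, cand, want_type):
    pos, ctype, conf = cand
    r = conf + (1.0 if ctype == want_type else -1.0)
    if prev_pos is not None:  # consecutive residues sit ~3.8 A apart
        r -= 0.1 * abs(math.dist(prev_pos, pos) - 3.8)
    return r

class Node:
    def __init__(self, depth, choice, parent):
        self.depth, self.choice, self.parent = depth, choice, parent
        self.children, self.visits, self.value = [], 0, 0.0
    def expand(self):
        self.children = [Node(self.depth + 1, c, self)
                         for c in CANDIDATES[self.depth]]
    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def rollout(node):
    """Finish the threading with random choices; return total reward."""
    path, n = [], node
    while n.parent is not None:
        path.append(n.choice)
        n = n.parent
    path.reverse()
    while len(path) < len(SEQUENCE):
        path.append(random.choice(CANDIDATES[len(path)]))
    total, prev = 0.0, None
    for i, cand in enumerate(path):
        total += step_reward(prev, cand, SEQUENCE[i])
        prev = cand[0]
    return total

def mcts(iterations=400):
    root = Node(0, None, None)
    root.expand()
    for _ in range(iterations):
        node = root
        while node.children:                      # selection by UCB
            node = max(node.children, key=Node.ucb)
        if node.visits > 0 and node.depth < len(SEQUENCE):
            node.expand()                         # expansion
            node = node.children[0]
        value = rollout(node)                     # simulation
        while node is not None:                   # backpropagation
            node.visits += 1
            node.value += value
            node = node.parent
    path, node = [], root                         # most-visited thread
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
        path.append(node.choice)
    return path

random.seed(0)
best = mcts()
print([c[1] for c in best])
```

    With enough iterations the most-visited path picks, at each residue, the candidate whose type matches the sequence and whose position is consistent with its neighbor.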

    E(3) × SO(3)-Equivariant Networks for Spherical Deconvolution in Diffusion MRI

    Full text link
    We present Roto-Translation Equivariant Spherical Deconvolution (RT-ESD), an E(3) × SO(3)-equivariant framework for sparse deconvolution of volumes in which each voxel contains a spherical signal. Such 6D data arise naturally in diffusion MRI (dMRI), a medical imaging modality widely used to measure microstructure and structural connectivity. As each dMRI voxel is typically a mixture of various overlapping structures, blind deconvolution is needed to recover crossing anatomical structures such as white matter tracts. Existing dMRI work takes either an iterative or a deep learning approach to sparse spherical deconvolution, but typically does not account for relationships between neighboring measurements. This work constructs equivariant deep learning layers that respect the symmetries of spatial rotations, reflections, and translations, alongside voxelwise spherical rotations. As a result, RT-ESD improves on previous work across several tasks, including fiber recovery on the DiSCo dataset, deconvolution-derived partial volume estimation on real-world in vivo human brain dMRI, and downstream reconstruction of fiber tractograms on the Tractometer dataset. Our implementation is available at https://github.com/AxelElaldi/e3so3_conv
    Comment: Accepted to Medical Imaging with Deep Learning (MIDL) 2023. 19 pages with 6 figures
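
    As a rough illustration of the voxelwise sparse spherical deconvolution problem RT-ESD addresses (without any of its equivariant layers), the sketch below recovers two crossing fibers in a single voxel by non-negative least squares. The response kernel, direction grids, fiber indices, and projected-gradient solver are all simplifying assumptions for illustration.

```python
import numpy as np

# Dictionary D holds an axially symmetric single-fiber response evaluated
# at K candidate orientations; the voxel signal is a sparse non-negative
# mixture of those atoms.
def spiral_dirs(k, hemisphere=False):
    """Deterministic, well-separated unit vectors (golden-angle spiral)."""
    i = np.arange(k)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = (i + 0.5) / k if hemisphere else 1.0 - 2.0 * (i + 0.5) / k
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

N, K = 64, 20
sample = spiral_dirs(N)                    # gradient directions, full sphere
fibers = spiral_dirs(K, hemisphere=True)   # candidate fiber orientations

cos2 = (sample @ fibers.T) ** 2
D = np.exp(-3.0 * (1.0 - cos2))            # N x K response dictionary

w_true = np.zeros(K)
w_true[[3, 11]] = [0.7, 0.3]               # two crossing fibers in this voxel
signal = D @ w_true                        # noiseless spherical measurement

# Deconvolve by projected gradient descent under a non-negativity constraint.
w = np.zeros(K)
step = 1.0 / np.linalg.norm(D, 2) ** 2
for _ in range(30000):
    w = np.maximum(w - step * (D.T @ (D @ w - signal)), 0.0)

print(np.argsort(w)[-2:])                  # indices of the two dominant fibers
```

    The equivariant layers in RT-ESD replace this per-voxel solve with learned operations that also share information across neighboring voxels.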

    American Sign Language Recognition Using Machine Learning and Computer Vision

    Get PDF
    Speech impairment is a disability that affects an individual's ability to communicate using speech and hearing. People affected by it use other media of communication, such as sign language. Although sign language use is widespread, it remains a challenge for non-signers to communicate with signers. With recent advances in deep learning and computer vision, there has been promising progress in motion and gesture recognition. The focus of this work is to create a vision-based application that translates sign language to text, thus aiding communication between signers and non-signers. The proposed model takes video sequences and extracts temporal and spatial features from them. We use Inception, a Convolutional Neural Network (CNN), to recognize spatial features, and a Recurrent Neural Network (RNN) to train on the temporal features. The dataset used is the American Sign Language Dataset.
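
    A minimal sketch of the described pipeline's data flow, with an untrained random projection standing in for the Inception backbone and a bare Elman RNN for the temporal model; all dimensions, weights, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the Inception backbone: a fixed random projection
# that maps one video frame to a spatial feature vector.
H, W, FEAT, HIDDEN, NUM_SIGNS = 32, 32, 64, 32, 10
W_cnn = rng.normal(0, 0.1, size=(H * W, FEAT))

def spatial_features(frame):
    return np.tanh(frame.reshape(-1) @ W_cnn)

# Minimal Elman RNN consuming the per-frame features in order.
W_xh = rng.normal(0, 0.1, size=(FEAT, HIDDEN))
W_hh = rng.normal(0, 0.1, size=(HIDDEN, HIDDEN))
W_hy = rng.normal(0, 0.1, size=(HIDDEN, NUM_SIGNS))

def classify(video):
    """video: (T, H, W) grayscale frames -> probability over signs."""
    h = np.zeros(HIDDEN)
    for frame in video:
        x = spatial_features(frame)          # spatial features per frame
        h = np.tanh(x @ W_xh + h @ W_hh)     # temporal state update
    logits = h @ W_hy
    probs = np.exp(logits - logits.max())    # stable softmax
    return probs / probs.sum()

video = rng.random((16, H, W))               # 16 dummy frames
probs = classify(video)
print(probs.shape)
```

    In the actual system the backbone and RNN are trained jointly on labeled sign videos; this untrained sketch only shows how spatial and temporal feature extraction compose.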

    Regulation of Local Translation, Synaptic Plasticity, and Cognitive Function by CNOT7

    Get PDF
    Local translation of mRNAs in dendrites is vital for synaptic plasticity and for learning and memory. Tight regulation of this translation is key to preventing neurological disorders resulting from aberrant local translation. Here we find that CNOT7, the major deadenylase in eukaryotic cells, takes on the distinct role of regulating local translation in the hippocampus. Depletion of CNOT7 from cultured neurons affects the poly(A) state, localization, and translation of dendritic mRNAs while having little effect on the global neuronal mRNA population. Following synaptic activity, CNOT7 is rapidly degraded, resulting in polyadenylation and a change in the localization of its target mRNAs. We find that this degradation of CNOT7 is essential for synaptic plasticity to occur, as keeping CNOT7 levels high prevents these changes. This regulation of dendritic mRNAs by CNOT7 is necessary for normal neuronal function in vivo, as depletion of CNOT7 also disrupts learning and memory in mice. We utilized deep sequencing to identify the neuronal mRNAs whose poly(A) state is governed by CNOT7. Interestingly, these mRNAs can be separated into two distinct populations: ones that gain a poly(A) tail following CNOT7 depletion and ones that, surprisingly, lose their poly(A) tail following CNOT7 depletion. These two populations are also distinct in the lengths of their 3' UTRs and their codon usage, suggesting that these key features may dictate how CNOT7 acts on its target mRNAs. This work reveals a central role for CNOT7 in the hippocampus, where it governs local translation and higher cognitive function.

    Deepfakes Generated by Generative Adversarial Networks

    Get PDF
    Deep learning is a type of Artificial Intelligence (AI) that mimics the workings of the human brain in processing data for tasks such as speech recognition, visual object recognition, object detection, language translation, and decision making. A Generative Adversarial Network (GAN) is a special type of deep learning model, designed by Goodfellow et al. (2014), typically built from convolutional neural networks (CNNs). Given a training set, a GAN can generate new data with the same characteristics as the training set, and this is often what produces deepfakes. A CNN takes an input image, assigns learnable weights and biases to various aspects of the object, and is able to differentiate one from another. A GAN pits two neural networks, a generator and a discriminator, against each other: the discriminator learns to tell real samples from generated ones (deepfakes), while the generator learns to fool it. Deepfakes are a machine learning technique in which a person in an existing image or video is replaced with someone else's likeness. Deepfakes have become a problem in society because they allow anyone's image to be co-opted and call into question our ability to trust what we see. In this project we develop a GAN to generate deepfakes. Next, we develop a survey to determine whether participants are able to identify authentic versus deepfake images. The survey employed a questionnaire asking participants their perceptions of AI technology based on their overall familiarity with AI, deepfake generation, and the reliability and trustworthiness of AI, as well as testing whether subjects can distinguish real from deepfake images. Results show demographic differences in perceptions of AI and that humans are good at distinguishing real images from deepfakes.
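
    The generator-versus-discriminator game can be illustrated with the smallest possible GAN: a one-parameter generator that shifts unit Gaussian noise toward the real data distribution, trained against a logistic discriminator. Everything here, the data, parameters, and learning rates, is a toy assumption unrelated to the project's image GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data ~ N(3, 1). Generator: G(z) = z + m with learnable offset m.
# Discriminator: D(x) = sigmoid(w*x + b), a logistic classifier.
m = 0.0            # generator parameter
w, b = 0.1, 0.0    # discriminator parameters
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(3.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + m

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends log D(fake) (non-saturating loss).
    fake = rng.normal(0.0, 1.0, batch) + m
    d_fake = sigmoid(w * fake + b)
    m += lr * np.mean((1 - d_fake) * w)

print(round(m, 2))  # the generator's offset should drift toward 3.0
```

    At equilibrium the generated and real distributions match, the discriminator is reduced to guessing, and its gradient signal to the generator vanishes; this is exactly the dynamic scaled up by image GANs to produce deepfakes.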