48 research outputs found
Epigenomic and transcriptomic analysis of developing, adult and aging brain: mechanisms of brain folding, neuronal function and finding novel therapy for dementia
Histone modifications and gene expression are tightly regulated processes in the brain that play crucial roles from the earliest stages of brain development through learning and memory formation and into aging. Because the brain comprises numerous types of neurons and non-neuronal cells, this regulation is highly cell-type specific. To gain mechanistic insight into cell-type-specific epigenetic and transcriptomic processes, in this thesis I demonstrate that brain nuclei isolation, nucleus-specific antibody staining, and FACS sorting can be successfully combined to perform cell-type-specific genome-wide histone mark characterization, gene expression profiling, and single-nucleus RNA sequencing. I applied these tools to gain mechanistic insight into the causal epigenetic mechanism of cortical folding, the functional role of a histone methyltransferase in memory impairment, and a multi-omics characterization of an age-induced cognitive decline model.
In the first manuscript, we found that treating embryonic mice with histone deacetylase inhibitors (thereby increasing histone acetylation) increased the number of basal progenitor (BP) cells in the cortex. This in turn produced a higher number of mature neurons, yielding cortical gyration phenotypes in normally lissencephalic rodent brains. To uncover the causal mechanism, I established and generated, for the first time, BP-nucleus-specific gene expression and histone 3 lysine 9 (H3K9) acetylation datasets from embryonic mouse cortex. This cell-type-specific analysis revealed a distinct H3K9ac-induced gene expression signature, containing key regulatory transcription factors, that drives increased BP proliferation. Further validation experiments using epigenome editing confirmed that increased histone acetylation provides an epigenetic basis for cortical gyrification in a lissencephalic brain.
For the second manuscript, I investigated the molecular role of a histone methyltransferase (HMT), Setd1b, in mature neurons. Forebrain-excitatory-neuron-specific Setd1b conditional knockout (cKO) resulted in severe memory impairment, which called for further characterization of the neuron-specific epigenetic and transcriptomic perturbations caused by this cKO. To understand the molecular function of Setd1b in neurons, I isolated neuron-specific nuclei from the hippocampal CA region of WT and cKO mice and performed ChIP-seq for four different histone modifications (H3K4me3, H3K4me1, H3K9ac, H3K27ac) together with neuron-specific nuclear RNA-seq. Bioinformatic analysis revealed promoter-specific alteration of all four marks and significant downregulation of memory-forming genes. Comparison with two other previously studied HMTs revealed that Setd1b has the broadest H3K4me3 peaks and regulates a distinct set of genes, consistent with its loss producing the most severe behavioral deficit. To understand the expression patterns of these three HMTs, I performed single-nucleus RNA sequencing of sorted neurons from wild-type mice and found that, even though Setd1b is expressed in a small subset of neurons, those neurons had the highest expression of neuronal-function and memory-forming genes compared to neurons expressing the other two HMTs studied previously by our group. Overall, our work shows the neuron-specific role of Setd1b and its contribution to hippocampal memory formation.
In the third manuscript, I generated and analyzed neuronal and non-neuronal epigenome and transcriptome data from 3- versus 16-month-old mice. Since memory impairment is known to begin in midlife, and previous gene expression studies in mice showed little to no change despite cognitive deficits, I used the nuclei-based cell sorting method to study two promoter epigenetic marks (H3K4me3, H3K27me3) and RNA expression (coding and non-coding) in neuronal and non-neuronal cells separately. Given the novelty of the data, I first characterized the basal activating H3K4me3 mark, the inhibitory H3K27me3 mark, bivalent regions, and gene expression in neuronal and non-neuronal nuclei. These epigenomic and transcriptomic datasets will be a valuable resource for the community to compare against their own cell-type-specific gene expression and epigenome datasets. Moreover, profiling epigenetic marks in aged hippocampal CA1 neurons and non-neurons revealed a massive decrease of these marks, mostly in non-neurons, while neurons showed a decrease only in the inhibitory H3K27me3 mark. Mechanistically, these epigenome changes point to probable non-neuronal dysfunction and neuronal upregulation of aberrant developmental pathways. Surprisingly, nuclear RNA-seq revealed a significant number of deregulated genes in non-neuronal cells compared to neurons. By integrating the transcriptome and epigenome, I found that decreased H3K4me3 leads to decreased gene expression in non-neuronal cells, likely resulting in downregulated neuronal support functions and downregulation of important glial metabolic pathways related to the extracellular matrix.
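The integration step described above can be sketched as a simple intersection over gene-level summaries: keep genes whose promoter H3K4me3 signal and nuclear RNA level both decrease past a threshold. The gene names, log2 fold-change values, and cutoffs below are hypothetical illustrations, not data from the study.

```python
# Sketch: intersecting promoter H3K4me3 loss with transcriptional downregulation.
# All gene names, fold changes, and cutoffs here are hypothetical stand-ins.

def concordant_downregulation(h3k4me3_change, expression_change,
                              epi_cutoff=-1.0, expr_cutoff=-1.0):
    """Return genes whose promoter H3K4me3 signal and nuclear RNA level
    both decrease past the given log2 fold-change cutoffs."""
    lost_mark = {g for g, lfc in h3k4me3_change.items() if lfc <= epi_cutoff}
    down_expr = {g for g, lfc in expression_change.items() if lfc <= expr_cutoff}
    return sorted(lost_mark & down_expr)

# Hypothetical log2 fold changes (old vs. young) per gene.
h3k4me3 = {"GeneA": -2.1, "GeneB": -0.2, "GeneC": -1.5}
rna     = {"GeneA": -1.8, "GeneB": -1.4, "GeneC": -0.3}

print(concordant_downregulation(h3k4me3, rna))  # → ['GeneA']
```

Only GeneA passes both cutoffs here; in practice such a filter would be applied to statistically tested differential peaks and differential expression calls rather than raw fold changes.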
Therefore, in this thesis, I have described cell-type-specific neurodevelopmental, neuronal, and cognitive-decline-related epigenetic and transcriptional pathways that add valuable knowledge and resources to the neuroscience community.
2021-12-3
Towards designing AI-aided lightweight solutions for key challenges in sensing, communication and computing layers of IoT: smart health use-cases
The advent of the 5G and Beyond 5G (B5G) communication system, along with the
proliferation of the Internet of Things (IoT) and Artificial Intelligence (AI), have started to
evolve the vision of the smart world into a reality. Similarly, the Internet of Medical Things
(IoMT) and AI have introduced numerous new dimensions towards attaining intelligent and
connected mobile health (mHealth). The demands of continuous remote health monitoring
with automated, lightweight, and secure systems have massively escalated. The AI-driven
IoT/IoMT can play an essential role in meeting this demand, but several challenges stand in the way. These emerging hurdles in IoT can be viewed from three directions:
the sensing layer, the communication layer, and the computing layer. Existing centralized
remote cloud-based AI analytics is not adequate for solving these challenges, and we need
to emphasize bringing the analytics into the ultra-edge IoT. Furthermore, from the communication perspective, the conventional techniques are not viable for the practical delivery of
health data in dynamic network conditions in 5G and B5G network systems. Therefore, we
need to move beyond the traditional realm and incorporate lightweight AI
architectures to address challenges in the three IoT planes mentioned above, enhancing the
healthcare system in both decision making and health data transmission.
In this thesis, we present different AI-enabled techniques to provide practical and lightweight
solutions to some selected challenges in the three IoT planes.
From Cooking Recipes to Robot Task Trees -- Improving Planning Correctness and Task Efficiency by Leveraging LLMs with a Knowledge Network
Task planning for robotic cooking involves generating a sequence of actions
for a robot to prepare a meal successfully. This paper introduces a novel task
tree generation pipeline producing correct planning and efficient execution for
cooking tasks. Our method first uses a large language model (LLM) to retrieve
recipe instructions and then utilizes a fine-tuned GPT-3 to convert them into a
task tree, capturing sequential and parallel dependencies among subtasks. The
pipeline then mitigates the uncertainty and unreliable features of LLM outputs
using task tree retrieval. We combine multiple LLM task tree outputs into a
graph and perform task tree retrieval that avoids questionable and high-cost
nodes, improving both planning correctness and execution efficiency. Our
evaluation results show its superior performance compared to previous works in
task planning accuracy and efficiency.
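The merge-and-retrieve step can be sketched as a shortest-path search over the union of candidate task sequences. The node names, per-node costs, and the "questionable" set below are hypothetical stand-ins, not the paper's actual graph construction or scoring.

```python
# Sketch: merge several LLM-proposed task sequences into one graph, then pick the
# lowest-cost start-to-goal path while skipping nodes flagged as questionable.
import heapq
from collections import defaultdict

def best_plan(sequences, cost, questionable, start, goal):
    """Dijkstra over the merged graph. `cost` maps node -> execution cost;
    `questionable` is a set of nodes to avoid entirely."""
    graph = defaultdict(set)
    for seq in sequences:                      # merge sequences into one graph
        for a, b in zip(seq, seq[1:]):
            graph[a].add(b)
    heap = [(cost.get(start, 0), start, [start])]
    seen = set()
    while heap:
        c, node, path = heapq.heappop(heap)
        if node == goal:
            return path, c
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            if nxt not in questionable:
                heapq.heappush(heap, (c + cost.get(nxt, 0), nxt, path + [nxt]))
    return None, float("inf")

seqs = [["get_pot", "boil_water", "add_pasta", "serve"],
        ["get_pot", "microwave_water", "add_pasta", "serve"]]
cost = {"get_pot": 1, "boil_water": 5, "microwave_water": 2,
        "add_pasta": 1, "serve": 1}
path, c = best_plan(seqs, cost, {"microwave_water"}, "get_pot", "serve")
print(path, c)  # → ['get_pot', 'boil_water', 'add_pasta', 'serve'] 8
```

Although the microwave route is cheaper, it is excluded as questionable, so the search falls back to the reliable boiling route; this mirrors the trade-off between correctness and cost described above.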
Robotic Detection of a Human-Comprehensible Gestural Language for Underwater Multi-Human-Robot Collaboration
In this paper, we present a motion-based robotic communication framework that
enables non-verbal communication among autonomous underwater vehicles (AUVs)
and human divers. We design a gestural language for AUV-to-AUV communication
which can be easily understood by divers observing the conversation, unlike
typical radio frequency, light, or audio based AUV communication. To allow AUVs
to visually understand a gesture from another AUV, we propose a deep network
(RRCommNet) which exploits a self-attention mechanism to learn to recognize
each message by extracting maximally discriminative spatio-temporal features.
We train this network on diverse simulated and real-world data. Our
experimental evaluations, both in simulation and in closed-water robot trials,
demonstrate that the proposed RRCommNet architecture is able to decipher
gesture-based messages with an average accuracy of 88-94% on simulated data and
73-83% on real data (depending on the version of the model used). Further, by
performing a message transcription study with human participants, we also show
that the proposed language can be understood by humans, with an overall
transcription accuracy of 88%. Finally, we discuss the inference runtime of
RRCommNet on embedded GPU hardware, for real-time use on board AUVs in the
field.
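The self-attention mechanism mentioned above can be illustrated with a minimal numpy sketch of scaled dot-product attention over a sequence of per-frame feature vectors. The shapes, random weights, and single attention head are illustrative assumptions, not the actual RRCommNet architecture.

```python
# Minimal sketch: scaled dot-product self-attention over per-frame features,
# as one might use to relate frames of a gesture sequence. Not RRCommNet itself.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (T, d) per-frame features; returns attended features, shape (T, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (T, T) frame affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over frames
    return weights @ v

rng = np.random.default_rng(0)
T, d = 8, 16                        # e.g., 8 video frames, 16-dim features each
x = rng.normal(size=(T, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # → (8, 16)
```

Each output row is a weighted mixture of all frames' value vectors, which is how attention extracts discriminative spatio-temporal structure across a whole gesture rather than frame by frame.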
Underwater Image Super-Resolution using Deep Residual Multipliers
We present a deep residual network-based generative model for single image
super-resolution (SISR) of underwater imagery for use by autonomous underwater
robots. We also provide an adversarial training pipeline for learning SISR from
paired data. In order to supervise the training, we formulate an objective
function that evaluates the \textit{perceptual quality} of an image based on
its global content, color, and local style information. Additionally, we
present USR-248, a large-scale dataset of three sets of underwater images of
'high' (640x480) and 'low' (80x60, 160x120, and 320x240) spatial resolution.
USR-248 contains paired instances for supervised training of 2x, 4x, or 8x SISR
models. Furthermore, we validate the effectiveness of our proposed model
through qualitative and quantitative experiments and compare the results with
several state-of-the-art models' performances. We also analyze its practical
feasibility for applications such as scene understanding and attention modeling
in noisy visual conditions.
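An objective of the kind described, combining global content, color, and local style terms, can be sketched with simple numpy stand-ins. The specific terms, weights, and (in the paper) learned feature extractors differ; a Gram matrix is one common proxy for style statistics, used here purely for illustration.

```python
# Hedged sketch of a content + color + style objective; illustrative stand-ins
# only, not the paper's exact loss formulation.
import numpy as np

def gram(feat):
    """Gram matrix of a (C, H, W) map, a common proxy for style statistics."""
    f = feat.reshape(feat.shape[0], -1)
    return f @ f.T / f.shape[1]

def perceptual_loss(pred, target, w_content=1.0, w_color=0.5, w_style=0.1):
    """pred/target: (C, H, W) images in [0, 1]."""
    content = np.mean((pred - target) ** 2)                   # global content
    color = np.mean((pred.mean(axis=(1, 2)) -                 # per-channel means
                     target.mean(axis=(1, 2))) ** 2)
    style = np.mean((gram(pred) - gram(target)) ** 2)         # local style
    return w_content * content + w_color * color + w_style * style

rng = np.random.default_rng(1)
hr = rng.uniform(size=(3, 64, 64))                     # "high-res" target
sr = np.clip(hr + rng.normal(scale=0.05, size=hr.shape), 0, 1)  # close estimate
baseline = rng.uniform(size=hr.shape)                  # unrelated image
print(perceptual_loss(sr, hr) < perceptual_loss(baseline, hr))  # → True
```

A close reconstruction scores a much lower loss than an unrelated image, which is the basic property an adversarially trained SISR generator is supervised against.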
Ensemble learning of diffractive optical networks
A plethora of research advances have emerged in the fields of optics and
photonics that benefit from harnessing the power of machine learning.
Specifically, there has been a revival of interest in optical computing
hardware, due to its potential advantages for machine learning tasks in terms
of parallelization, power efficiency and computation speed. Diffractive Deep
Neural Networks (D2NNs) form such an optical computing framework, which
benefits from deep learning-based design of successive diffractive layers to
all-optically process information as the input light diffracts through these
passive layers. D2NNs have demonstrated success in various tasks, including
e.g., object classification, spectral-encoding of information, optical pulse
shaping and imaging, among others. Here, we significantly improve the inference
performance of diffractive optical networks using feature engineering and
ensemble learning. After independently training a total of 1252 D2NNs that were
diversely engineered with a variety of passive input filters, we applied a
pruning algorithm to select an optimized ensemble of D2NNs that collectively
improve their image classification accuracy. Through this pruning, we
numerically demonstrated that ensembles of N=14 and N=30 D2NNs achieve blind
testing accuracies of 61.14% and 62.13%, respectively, on the classification of
CIFAR-10 test images, providing an inference improvement of >16% compared to
the average performance of the individual D2NNs within each ensemble. These
results constitute the highest inference accuracies achieved to date by any
diffractive optical neural network design on the same dataset and might provide
a significant leapfrog to extend the application space of diffractive optical
image classification and machine vision systems.
Comment: 22 Pages, 4 Figures, 1 Table
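The pruning idea, selecting a subset of trained models whose combined decisions beat their individual average, can be sketched as greedy forward selection over saved predictions. The toy predictions, majority-vote combiner, and 10-class assumption below are illustrative stand-ins; the paper's actual pruning algorithm and diversely engineered D2NNs are more involved.

```python
# Sketch: greedy ensemble pruning over a pool of classifiers' saved predictions.
# Toy data and majority voting are hypothetical stand-ins for the paper's method.
import numpy as np

def ensemble_accuracy(preds, labels):
    """preds: (n_models, n_samples) class predictions; majority-vote accuracy."""
    votes = np.apply_along_axis(
        lambda c: np.bincount(c, minlength=10).argmax(), 0, preds)
    return (votes == labels).mean()

def greedy_prune(all_preds, labels, max_size):
    """Iteratively add the model that most improves validation accuracy."""
    chosen, best_acc = [], 0.0
    for _ in range(max_size):
        best = None
        best_acc = -1.0
        for i in range(len(all_preds)):
            if i in chosen:
                continue
            acc = ensemble_accuracy(all_preds[chosen + [i]], labels)
            if acc > best_acc:
                best, best_acc = i, acc
        chosen.append(best)
    return chosen, best_acc

labels = np.array([0, 1, 2, 3])
all_preds = np.array([
    [0, 1, 2, 9],   # model 0: 75% alone
    [0, 1, 9, 3],   # model 1: 75% alone
    [9, 1, 2, 3],   # model 2: 75% alone
    [5, 5, 5, 5],   # model 3: 0% alone
])
chosen, acc = greedy_prune(all_preds, labels, max_size=3)
print(chosen, acc)  # → [0, 1, 2] 1.0
```

Because the three 75%-accurate models make errors on different samples, their majority vote is perfect, while the uninformative model is never selected, the same complementarity the D2NN ensembles exploit.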
Study of Hybrid Photovoltaic Thermal (PV/T) Solar System with Modification of Thin Metallic Sheet in the Air Channel.
The electricity production of a PV module gradually decreases as its temperature rises. To mitigate this problem, a thermal collector is incorporated with the PV module to allow PV cooling. PV cooling has been found to increase electricity production while allowing the extra heat to be absorbed by a coolant, extracting thermal output. This arrangement is called a hybrid PV/T system, in which water or air can be used as the heat extraction medium. Several experiments have found that using a Thin Flat Metallic Sheet (TFMS) in the air channel of a PV/T system considerably increases the air temperature. The comparative performance of a PV/T system was investigated using four shapes of thin metallic sheet, including a flat sheet. The performance was measured outdoors at the Islamic University of Technology in Bangladesh using an experimental hybrid PV/T system. The experiments show that the efficiency of the PV/T system varies significantly with the shape of the metallic sheet in the air channel. The shapes used were flat, saw-tooth backward, saw-tooth forward, and trapezoidal. The experimental results show that the flat metallic sheet has the lowest efficiency of the four; the saw-tooth backward and saw-tooth forward sheets show the same efficiency, and the trapezoidal sheet's efficiency is lower than theirs.
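The efficiencies being compared can be made concrete with the standard definitions: electrical efficiency is electrical output over incident solar power, and air-side thermal efficiency is the heat gained by the air stream over the same incident power. The numeric values below are illustrative only, not measurements from the experiment.

```python
# Standard PV/T efficiency definitions; the numbers are illustrative, not the
# paper's measured data.

def electrical_efficiency(p_out_w, irradiance, area):
    """Electrical output (W) over incident solar power G*A (W)."""
    return p_out_w / (irradiance * area)

def thermal_efficiency(m_dot, cp, t_out, t_in, irradiance, area):
    """Air-side heat gain m_dot*cp*(T_out - T_in) over incident solar power.
    m_dot in kg/s, cp in J/(kg*K), temperatures in the same units."""
    return m_dot * cp * (t_out - t_in) / (irradiance * area)

# Hypothetical operating point: 800 W/m^2 irradiance, 1.2 m^2 collector.
eta_el = electrical_efficiency(120, 800, 1.2)
eta_th = thermal_efficiency(0.02, 1005, 45, 30, 800, 1.2)
print(round(eta_el, 3), round(eta_th, 3))  # → 0.125 0.314
```

A sheet shape that raises the air outlet temperature T_out at a given flow rate directly raises the thermal efficiency term, which is why the channel geometry matters.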
Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors
Under spatially-coherent light, a diffractive optical network composed of
structured surfaces can be designed to perform any arbitrary complex-valued
linear transformation between its input and output fields-of-view (FOVs) if the
total number (N) of optimizable phase-only diffractive features is greater than
or equal to ~2 Ni x No, where Ni and No refer to the number of useful pixels at
the input and the output FOVs, respectively. Here we report the design of a
spatially-incoherent diffractive optical processor that can approximate any
arbitrary linear transformation in time-averaged intensity between its input
and output FOVs. Under spatially-incoherent monochromatic light, the
spatially-varying intensity point spread function H of a diffractive network,
corresponding to a given, arbitrarily-selected linear intensity transformation,
can be written as H(m,n;m',n')=|h(m,n;m',n')|^2, where h is the
spatially-coherent point-spread function of the same diffractive network, and
(m,n) and (m',n') define the coordinates of the output and input FOVs,
respectively. Using deep learning, supervised through examples of input-output
profiles, we numerically demonstrate that a spatially-incoherent diffractive
network can be trained to all-optically perform any arbitrary linear intensity
transformation between its input and output if N is greater than or equal to ~2
Ni x No. These results constitute the first demonstration of universal linear
intensity transformations performed on an input FOV under spatially-incoherent
illumination and will be useful for designing all-optical visual processors
that can work with incoherent, natural light.
Comment: 29 Pages, 10 Figures
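The relation H(m,n;m',n') = |h(m,n;m',n')|^2 can be illustrated numerically: squaring the magnitude of a coherent point-spread function yields a real, non-negative matrix, so the time-averaged output intensity is a non-negative linear transform of the input intensity. The random coherent PSF and pixel counts below are purely illustrative.

```python
# Numeric illustration of H = |h|^2 for an incoherent diffractive processor.
# The coherent PSF h here is random, not a designed diffractive network.
import numpy as np

rng = np.random.default_rng(0)
Ni, No = 4, 5                                   # input / output pixel counts
h = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))  # coherent PSF
H = np.abs(h) ** 2                              # incoherent intensity PSF

I_in = rng.uniform(size=Ni)                     # input intensity pattern
I_out = H @ I_in                                # time-averaged output intensity
print(np.all(I_out >= 0))                       # → True

# The text's condition: approximating an arbitrary H needs N >= ~2*Ni*No
# phase-only diffractive features.
print(2 * Ni * No)                              # → 40
```

Since every entry of H is a squared magnitude, the processor can only realize non-negative intensity transformations, which is exactly the class of transforms the paper shows it can approximate universally.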
Learning Diffractive Optical Communication Around Arbitrary Opaque Occlusions
Free-space optical systems are emerging for high data rate communication and
transfer of information in indoor and outdoor settings. However, free-space
optical communication becomes challenging when an occlusion blocks the light
path. Here, we demonstrate, for the first time, a direct communication scheme,
passing optical information around a fully opaque, arbitrarily shaped obstacle
that partially or entirely occludes the transmitter's field-of-view. In this
scheme, an electronic neural network encoder and a diffractive optical network
decoder are jointly trained using deep learning to transfer the optical
information or message of interest around the opaque occlusion of an arbitrary
shape. The diffractive decoder comprises successive spatially-engineered
passive surfaces that process optical information through light-matter
interactions. Following its training, the encoder-decoder pair can communicate
any arbitrary optical information around opaque occlusions, where information
decoding occurs at the speed of light propagation. For occlusions that change
their size and/or shape as a function of time, the encoder neural network can
be retrained to successfully communicate with the existing diffractive decoder,
without changing the physical layer(s) already deployed. We also validate this
framework experimentally in the terahertz spectrum using a 3D-printed
diffractive decoder to communicate around a fully opaque occlusion. Scalable
for operation in any wavelength regime, this scheme could be particularly
useful in emerging high data-rate free-space communication systems.
Comment: 23 Pages, 9 Figures
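The principle of learning to communicate around a fixed occlusion can be reduced to a linear toy: an encoder spreads the message across a channel, the occlusion masks part of it, and a decoder fitted to that masked channel still recovers the message exactly. The real system uses an electronic neural network encoder and a jointly trained diffractive decoder; this least-squares stand-in only illustrates why recovery around an occlusion is possible when enough unblocked channel dimensions remain.

```python
# Toy linear analogue of encoding around an occlusion; not the paper's actual
# encoder-decoder architecture or training procedure.
import numpy as np

rng = np.random.default_rng(0)
d, ch = 4, 12                        # message size, channel size
mask = np.ones(ch)
mask[:6] = 0.0                       # occlusion blocks half the channel
M = np.diag(mask)

E = rng.normal(size=(ch, d))         # fixed random linear "encoder"
D = np.linalg.pinv(M @ E)            # decoder fitted to the occluded channel

x = rng.normal(size=d)               # message of interest
x_hat = D @ M @ E @ x                # decode what survives the occlusion
print(np.allclose(x_hat, x))         # → True
```

Because six unblocked channel dimensions remain for a four-dimensional message, the masked map M @ E still has full column rank and the pseudoinverse decoder recovers x exactly; changing the mask and refitting one side while keeping the other fixed loosely parallels retraining the encoder for a new occlusion while the deployed diffractive decoder stays unchanged.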