Neuromodulatory effects on early visual signal processing
Understanding how the brain processes information and generates simple to complex behavior is one of the core objectives in systems neuroscience. However, when studying different neural circuits, their dynamics, and their interactions, researchers often assume fixed connectivity, overlooking a crucial factor: the effect of neuromodulators. Neuromodulators can modulate circuit activity depending on several aspects, such as different brain states or sensory contexts. Considering the modulatory effects of neuromodulators on the functionality of neural circuits is therefore an indispensable step towards a more complete picture of the brain's ability to process information. This issue affects all neural systems; hence, this thesis addresses it with an experimental and computational approach to resolve neuromodulatory effects at the cell-type level in a well-defined system, the mouse retina. In the first study, we established and applied a machine-learning-based classification algorithm to identify individual functional retinal ganglion cell types, which enabled detailed cell type-resolved analyses. We applied the classifier to newly acquired data of light-evoked retinal ganglion cell responses and successfully identified their functional types. Here, the cell type-resolved analysis revealed that a particular principle of efficient coding applies to all types in a similar way. In the second study, we focused on the inter-experimental variability that can arise when pooling datasets, as subtle variations between the individual datasets can complicate further downstream analyses. To tackle this, we proposed a theoretical framework based on an adversarial autoencoder whose objective is to remove inter-experimental variability from the pooled dataset while preserving the underlying biological signal of interest. In the last study of this thesis, we investigated the functional effects of the neuromodulator nitric oxide on the retinal output signal. To this end, we used our previously developed retinal ganglion cell type classifier to unravel type-specific effects and established a paired recording protocol to account for type-specific, time-dependent effects. We found that certain retinal ganglion cell types showed type-specific adaptational changes and that nitric oxide distinctly modulated a particular group of retinal ganglion cells.
In summary, I first present several experimental and computational methods that make it possible to study functional neuromodulatory effects on the retinal output signal in a cell type-resolved manner and, second, use these tools to demonstrate their feasibility for studying the neuromodulator nitric oxide.
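The batch-effect removal idea behind the second study can be sketched as an adversarial autoencoder: an encoder/decoder pair reconstructs the recorded responses while a discriminator tries to guess which experiment a latent vector came from, and the encoder is trained to defeat it. The sketch below is a minimal illustration; the layer sizes, number of experiments, and loss weighting lambda_adv are assumptions, not the configuration used in the thesis.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_features, n_latent):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                 nn.Linear(128, n_latent))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, n_latent, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_features))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Tries to guess which experiment (batch) a latent vector came from."""
    def __init__(self, n_latent, n_experiments):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_experiments))
    def forward(self, z):
        return self.net(z)

def training_step(x, exp_id, enc, dec, disc, opt_ae, opt_d, lambda_adv=1.0):
    """One alternating update; opt_ae holds encoder+decoder params, opt_d the discriminator's."""
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    # 1) update the discriminator on detached latents
    z = enc(x).detach()
    loss_d = ce(disc(z), exp_id)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) update encoder/decoder: reconstruct well while fooling the discriminator
    z = enc(x)
    loss_ae = mse(dec(z), x) - lambda_adv * ce(disc(z), exp_id)
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()
    return loss_ae.item(), loss_d.item()
```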
Complex systems methods characterizing nonlinear processes in the near-Earth electromagnetic environment: recent advances and open challenges
Learning from successful applications of methods originating in statistical mechanics, complex systems science, or information theory in one scientific field (e.g., atmospheric physics or climatology) can provide important insights or conceptual ideas for other areas (e.g., space sciences) or even stimulate new research questions and approaches. For instance, quantification and attribution of dynamical complexity in output time series of nonlinear dynamical systems is a key challenge across scientific disciplines. Especially in the field of space physics, an early and accurate detection of characteristic dissimilarity between normal and abnormal states (e.g., pre-storm activity vs. magnetic storms) has the potential to vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards.
This review provides a systematic overview of existing nonlinear dynamical systems-based methodologies along with key results of their previous applications in a space physics context, which particularly illustrates how complementary modern complex systems approaches have recently shaped our understanding of nonlinear magnetospheric variability. The rising number of corresponding studies demonstrates that the multiplicity of nonlinear time series analysis methods developed during the last decades offers great potential for uncovering relevant yet complex processes interlinking different geospace subsystems, variables and spatiotemporal scales.
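As a concrete illustration of the kind of complexity measure such studies employ, the sketch below computes the normalized permutation entropy of a univariate time series; whether this particular measure, or the chosen order and delay, is appropriate for a given geospace index is an assumption made purely for illustration.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of ordinal patterns (Bandt-Pompe) in a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    c = np.array([v for v in counts.values() if v > 0], dtype=float)
    p = c / c.sum()
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order)) if normalize else h

# toy comparison: a regular oscillation scores low, white noise scores near 1
t = np.linspace(0, 100, 5000)
print(permutation_entropy(np.sin(t)))             # low: few ordinal patterns occur
print(permutation_entropy(np.random.randn(5000)))  # close to 1: all patterns equally likely
```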
Machine learning in solar physics
The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. By using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not have been apparent using traditional methods. This can help us improve our understanding of explosive events like solar flares, which can have a strong effect on the Earth's environment. Predicting such hazardous events is becoming crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, the use of machine learning can help to automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field.
Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
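To see why LC biasing yields fixed-frequency beam scanning, consider a much-simplified model: if the LC-filled guide is treated as a dielectric-filled rectangular waveguide operating in its fundamental TE10 mode, the leaky-wave main beam points near theta = arcsin(beta/k0) from broadside, so tuning the LC permittivity shifts the phase constant beta and hence the beam. The frequency, guide width, and permittivity range below are illustrative assumptions, not the values of the actual design.

```python
import numpy as np

c0 = 3e8        # speed of light, m/s
f = 28e9        # assumed operating frequency, Hz
a = 3.5e-3      # assumed effective guide width, m
k0 = 2 * np.pi * f / c0

# assumed LC permittivity tuning range between unbiased and fully biased states
for eps_r in np.linspace(2.5, 3.3, 5):
    beta = np.sqrt(eps_r * k0**2 - (np.pi / a)**2)    # TE10 phase constant
    # main-beam angle from broadside; valid here because beta < k0 over the whole range
    theta = np.degrees(np.arcsin(beta / k0))
    print(f"eps_r = {eps_r:.2f} -> beam at {theta:.1f} deg")
```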
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on the application of the combination of synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews and technical reports.
Machine Learning Framework for Causal Modeling for Process Fault Diagnosis and Mechanistic Explanation Generation
Machine learning models, typically deep learning models, often come at the cost of explainability. To generate explanations of such systems, models need to be rooted in first principles, at least mechanistically. In this work we look at a gamut of machine learning models, based on different levels of process knowledge, for process fault diagnosis and for generating mechanistic explanations of processes. In Chapter 1, we introduce the thesis using a range of problems spanning causality and explainability, aiming towards the goal of generating mechanistic explanations of process systems. Chapter 2 looks at an approach for generating causal models through a purely data-centric approach, with minimal process knowledge with respect to equipment connectivity, and at identifying causality in these domains. The generated causal models can be utilized for process fault diagnosis.
Chapters 3 and 4 show how deep learning models can be used both for classification in process fault diagnosis and for regression. We see that, depending on the hyperparameters, i.e., purely the breadth and depth of a neural network, the learned hidden representations vary from a simple set of features to more complex sets of features. While these hidden representations may be exploited to aid in classification and regression problems, the true explanations of these representations do not correlate with mechanisms in the system of interest. There is thus a need to add more mechanistic information about the generated features to aid in explainability.
Chapter 5 shows how incorporating process knowledge can aid in generating such mechanistic explanations through automated variable transformations. In this chapter we show how process knowledge can be used to generate features, or model forms, that yield explainable models. These models are able to extract the true models of the system from the model knowledge provided.
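As an illustration of the purely data-driven causal modeling described for Chapter 2, the sketch below screens every ordered pair of process variables with a simple Granger-style test: x is flagged as a candidate cause of y if lagged values of x reduce the prediction error of y beyond y's own past. The lag, threshold, and linear model are illustrative assumptions and not necessarily the formulation used in the thesis.

```python
import numpy as np

def granger_score(x, y, lag=1):
    """Reduction in residual variance of y when lagged x is added to an
    autoregressive model of y (larger -> stronger candidate cause)."""
    y_t, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
    # restricted model: y_t ~ y_past
    A_r = np.column_stack([np.ones_like(y_past), y_past])
    res_r = y_t - A_r @ np.linalg.lstsq(A_r, y_t, rcond=None)[0]
    # full model: y_t ~ y_past + x_past
    A_f = np.column_stack([A_r, x_past])
    res_f = y_t - A_f @ np.linalg.lstsq(A_f, y_t, rcond=None)[0]
    return 1.0 - res_f.var() / res_r.var()

def causal_edges(data, names, lag=1, threshold=0.05):
    """Screen every ordered pair of process variables for a lagged influence."""
    edges = []
    for i, src in enumerate(names):
        for j, dst in enumerate(names):
            if i != j and granger_score(data[:, i], data[:, j], lag) > threshold:
                edges.append((src, dst))
    return edges
```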
Leveraging audio-visual speech effectively via deep learning
The rising popularity of neural networks, combined with the recent proliferation of online audio-visual media, has led to a revolution in the way machines encode, recognize, and generate acoustic and visual speech. Despite the ubiquity of naturally paired audio-visual data, only a limited number of works have applied recent advances in deep learning to leverage the duality between audio and video within this domain. This thesis considers the use of neural networks to learn from large unlabelled datasets of audio-visual speech to enable new practical applications. We begin by training a visual speech encoder that predicts latent features extracted from the corresponding audio on a large unlabelled audio-visual corpus. We apply the trained visual encoder to improve performance on lip reading in real-world scenarios. Following this, we extend the idea of video learning from audio by training a model to synthesize raw speech directly from raw video, without the need for text transcriptions. Remarkably, we find that this framework is capable of reconstructing intelligible audio from videos of new, previously unseen speakers. We also experiment with a separate speech reconstruction framework, which leverages recent advances in sequence modeling and spectrogram inversion to improve the realism of the generated speech. We then apply our research in video-to-speech synthesis to advance the state-of-the-art in audio-visual speech enhancement, by proposing a new vocoder-based model that performs particularly well under extremely noisy scenarios. Lastly, we aim to fully realize the potential of paired audio-visual data by proposing two novel frameworks that leverage acoustic and visual speech to train two encoders that learn from each other simultaneously. We leverage these pre-trained encoders for deepfake detection, speech recognition, and lip reading, and find that they consistently yield improvements over training from scratch.
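The first step, training a visual speech encoder to predict audio-derived latent features from unlabelled paired data, can be sketched as cross-modal distillation. The network shapes, the GRU backend, and the assumption that per-frame audio features have already been extracted by some frozen audio model are placeholders for illustration, not the architectures used in the thesis.

```python
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Maps a sequence of mouth-region frames to one feature vector per frame."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.frontend = nn.Sequential(            # (B, 1, T, H, W) -> per-frame features
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),    # collapse spatial dims, keep time
        )
        self.temporal = nn.GRU(32, feat_dim, batch_first=True)

    def forward(self, frames):                     # frames: (B, 1, T, H, W)
        x = self.frontend(frames)                  # (B, 32, T, 1, 1)
        x = x.squeeze(-1).squeeze(-1).transpose(1, 2)   # (B, T, 32)
        out, _ = self.temporal(x)                  # (B, T, feat_dim)
        return out

def distillation_loss(visual_encoder, frames, audio_features):
    """Train the visual encoder to predict features extracted from the paired audio."""
    pred = visual_encoder(frames)                  # (B, T, feat_dim)
    return nn.functional.mse_loss(pred, audio_features)
```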
Computational Imaging for Phase Retrieval and Biomedical Applications
In conventional imaging, optimizing hardware is prioritized to enhance image quality directly, and digital signal processing is viewed as supplementary. Computational imaging instead intentionally distorts images through modulation schemes in illumination or sensing; its reconstruction algorithms then extract the desired object information from the raw data. Co-designing hardware and algorithms reduces demands on the hardware while achieving the same or even better image quality. Algorithm design is at the heart of computational imaging, with model-based inverse-problem methods and data-driven deep learning as the main approaches. This thesis presents research work from both perspectives, with a primary focus on the phase retrieval problem in computational microscopy and on the application of deep learning techniques to biomedical imaging challenges.
The first half of the thesis begins with Fourier ptychography, which was employed to overcome chromatic aberration problems in multispectral imaging. We then proposed a novel computational coherent imaging modality based on the Kramers-Kronig relations, aiming to replace Fourier ptychography with a non-iterative method. While this approach showed promise, it lacked certain essential characteristics of the original Fourier ptychography. To address this limitation, we introduced two additional algorithms to form a complete scheme. Through comprehensive evaluation, we demonstrated that the combined scheme outperforms Fourier ptychography in achieving high-resolution, large field-of-view, aberration-free coherent imaging.
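The core of the Kramers-Kronig idea can be illustrated with a one-dimensional toy example: when the measured field is analytic (its spectrum is one-sided, as when a strong carrier accompanies a weak object signal), log-amplitude and phase form a Hilbert-transform pair, so the phase follows non-iteratively from the measured intensity. The synthetic signal below is an assumption purely for illustration and is not the imaging pipeline of the thesis.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n = 1024

# toy analytic (one-sided-spectrum) field: strong unit carrier plus a weak signal
spec = np.zeros(n, dtype=complex)
spec[1:80] = rng.normal(size=79) + 1j * rng.normal(size=79)
field = 1.0 + 0.2 * np.fft.ifft(spec) * n / 80     # carrier keeps the log well-defined
intensity = np.abs(field) ** 2                      # what a camera would measure

# Kramers-Kronig / Hilbert step: phase from the log-amplitude of the measurement
log_amp = 0.5 * np.log(intensity)
phase = np.imag(hilbert(log_amp - log_amp.mean()))
recovered = np.exp(log_amp + 1j * phase)

# residual should be small while the one-sided-spectrum condition holds
print(np.max(np.abs(np.angle(recovered) - np.angle(field))))
```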
The second half of the thesis shifts focus to deep-learning-based methods. In one project, we optimized the scanning strategy and image processing pipeline of an epifluorescence microscope to address focus issues. Additionally, we leveraged deep-learning-based object detection models to automate cell analysis tasks. In another project, we predicted the polarity status of mouse embryos from bright field images using adapted deep learning models. These findings highlight the capability of computational imaging to automate labor-intensive processes, and even outperform humans in challenging tasks.
A comprehensive review of 3D convolutional neural network-based classification techniques of diseased and defective crops using non-UAV-based hyperspectral images
Hyperspectral imaging (HSI) is a non-destructive and contactless technology that provides valuable information about the structure and composition of an object. It can capture detailed information about the chemical and physical properties of agricultural crops. Owing to its wide spectral range, HSI can be a more effective tool for monitoring crop health and productivity than multispectral- or RGB-based imaging methods. With the advent of this imaging tool in agrotechnology, researchers can more accurately address issues related to the detection of diseased and defective crops in the agriculture industry. This makes it possible to implement the most suitable and accurate farming solutions, such as irrigation and fertilization, before crops enter a damaged and difficult-to-recover phase of growth in the field. While HSI provides valuable insights into the object under investigation, the limited number of HSI datasets for crop evaluation presently poses a bottleneck. Dealing with the curse of dimensionality presents another challenge due to the abundance of spectral and spatial information in each hyperspectral cube. State-of-the-art methods based on 1D and 2D convolutional neural networks (CNNs) struggle to efficiently extract spectral and spatial information. On the other hand, 3D-CNN-based models have shown significant promise in achieving better classification and detection results by leveraging spectral and spatial features simultaneously. Despite their apparent benefits, the usage of 3D-CNN-based models for classification purposes in this area of research has remained limited. This paper seeks to address this gap by reviewing 3D-CNN-based architectures and the typical deep learning pipeline, including preprocessing and visualization of results, for the classification of hyperspectral images of diseased and defective crops. Furthermore, we discuss open research areas and challenges when utilizing 3D-CNNs with HSI data.
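To make the 3D-CNN idea concrete, the sketch below shows a minimal network that convolves jointly over the spectral and spatial axes of a hyperspectral patch and outputs per-class scores (e.g., healthy vs. diseased). The patch size, number of bands, number of classes, and layer widths are arbitrary assumptions for illustration, not an architecture from the review.

```python
import torch
import torch.nn as nn

class HSI3DCNN(nn.Module):
    """Minimal 3D-CNN: joint spectral-spatial convolution over a hyperspectral patch."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),                 # pool along the spectral axis only
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                 # global pooling -> (B, 16, 1, 1, 1)
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, cube):                         # cube: (B, 1, bands, H, W)
        return self.classifier(self.features(cube).flatten(1))

# toy forward pass on random "crop patch" cubes: 100 bands, 9x9 pixels each
model = HSI3DCNN()
patch = torch.randn(2, 1, 100, 9, 9)
print(model(patch).shape)                            # torch.Size([2, 4]) -> class scores
```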