Neuromodulatory effects on early visual signal processing
Understanding how the brain processes information and generates simple to complex behavior is one of the core objectives of systems neuroscience. However, when studying different neural circuits, their dynamics, and their interactions, researchers often assume fixed connectivity, overlooking a crucial factor: the effect of neuromodulators. Neuromodulators can modulate circuit activity depending on several aspects, such as brain state or sensory context. Considering the modulatory effects of neuromodulators on the functionality of neural circuits is therefore an indispensable step towards a more complete picture of the brain's ability to process information. Because this issue affects all neural systems, this thesis addresses it with a combined experimental and computational approach to resolve neuromodulatory effects at the cell-type level in a well-defined system, the mouse retina. In the first study, we established and applied a machine-learning-based classification algorithm to identify individual functional retinal ganglion cell types, which enabled detailed cell type-resolved analyses. We applied the classifier to newly acquired data of light-evoked retinal ganglion cell responses and successfully identified their functional types. Here, the cell type-resolved analysis revealed that a particular principle of efficient coding applies to all types in a similar way. In a second study, we focused on the inter-experimental variability that can arise when datasets are pooled and that can complicate downstream analyses through subtle variations between the individual datasets. To tackle this, we proposed a theoretical framework based on an adversarial autoencoder with the objective of removing inter-experimental variability from the pooled dataset while preserving the underlying biological signal of interest. In the last study of this thesis, we investigated the functional effects of the neuromodulator nitric oxide on the retinal output signal. To this end, we used our previously developed retinal ganglion cell type classifier to unravel type-specific effects and established a paired recording protocol to account for type-specific, time-dependent effects. We found that certain retinal ganglion cell types showed type-specific adaptational changes and that nitric oxide distinctly modulated a particular group of retinal ganglion cells.
In summary, I first present several experimental and computational methods that make it possible to study functional neuromodulatory effects on the retinal output signal in a cell type-resolved manner and, second, use these tools to demonstrate their feasibility by studying the neuromodulator nitric oxide.
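The adversarial-autoencoder idea behind the second study lends itself to a compact illustration. Below is a minimal PyTorch sketch, not the thesis's actual implementation; the `AAE` class, the layer sizes, and the `train_step` helper are invented for illustration. An encoder-decoder reconstructs the pooled responses while a discriminator tries to predict which experiment each latent code came from, and the encoder is penalized whenever the discriminator succeeds.

```python
import torch
import torch.nn as nn

class AAE(nn.Module):
    """Adversarial autoencoder for removing inter-experimental structure."""
    def __init__(self, n_features, n_latent, n_batches):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))
        self.discriminator = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                           nn.Linear(32, n_batches))

def train_step(model, x, batch_id, opt_ae, opt_d, lam=1.0):
    # opt_ae should cover encoder+decoder parameters only; opt_d the discriminator.
    ce = nn.CrossEntropyLoss()
    # 1) discriminator learns to predict the experiment from the latent code
    z = model.encoder(x).detach()
    d_loss = ce(model.discriminator(z), batch_id)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) autoencoder reconstructs the responses while fooling the discriminator
    z = model.encoder(x)
    recon = nn.functional.mse_loss(model.decoder(z), x)
    adv = -ce(model.discriminator(z), batch_id)  # crude adversarial term
    ae_loss = recon + lam * adv
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return recon.item(), d_loss.item()
```

After training, the encoder's latent codes carry the biological signal with the experiment identity largely removed, so pooled downstream analyses can run on them directly.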
FRET-based dynamic structural biology: Challenges, perspectives and an appeal for open-science practices
Single-molecule FRET (smFRET) has become a mainstream technique for studying biomolecular structural dynamics. The rapid and wide adoption of smFRET experiments by an ever-increasing number of groups has generated significant progress in sample preparation, measurement procedures, data analysis, algorithms, and documentation. Several labs that employ smFRET approaches have joined forces to inform the smFRET community about how to streamline experiments and analyze results to obtain quantitative information on biomolecular structure and dynamics. Recent efforts include blind tests to assess the accuracy and precision of smFRET experiments among different labs using various procedures. These multi-lab studies have led to the development of smFRET procedures and documentation, which are important when submitting entries into PDB-Dev, the archiving system for integrative structure models. This position paper describes the current ‘state of the art’ from different perspectives, points to unresolved methodological issues for quantitative structural studies, provides a set of ‘soft recommendations’ about which an emerging consensus exists, and lists openly available resources for newcomers and seasoned practitioners. To make further progress, we strongly encourage ‘open science’ practices.
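For readers new to the field, the quantitative core of smFRET is the Förster relation between transfer efficiency and inter-dye distance. A self-contained sketch (function names are ours, not from the paper):

```python
def fret_efficiency(r, r0):
    """FRET efficiency for donor-acceptor distance r and Foerster radius r0."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(e, r0):
    """Invert the efficiency to recover the inter-dye distance."""
    return r0 * (1.0 / e - 1.0) ** (1.0 / 6.0)

# Example: with a 5 nm Foerster radius, E = 0.5 exactly at r = 5 nm.
assert abs(fret_distance(fret_efficiency(5.0, 5.0), 5.0) - 5.0) < 1e-9
```

The steep sixth-power dependence is what makes the measured efficiency such a sensitive molecular ruler near the Förster radius, and also why inter-lab calibration of that radius matters for quantitative structural work.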
ScribFormer: Transformer Makes CNN Work Better for Scribble-based Medical Image Segmentation
Most recent scribble-supervised segmentation methods adopt a CNN framework with an encoder-decoder architecture. Despite its multiple benefits, this framework generally captures only short-range feature dependencies, because its convolutional layers have local receptive fields, which makes it difficult to learn global shape information from the limited supervision provided by scribble annotations. To address this issue, this paper proposes a new CNN-Transformer hybrid solution for scribble-supervised medical image segmentation called ScribFormer. The proposed ScribFormer model has a triple-branch structure, i.e., a hybrid of a CNN branch, a Transformer branch, and an attention-guided class activation map (ACAM) branch. Specifically, the CNN branch collaborates with the Transformer branch to fuse the local features learned by the CNN with the global representations obtained from the Transformer, which effectively overcomes the limitations of existing scribble-supervised segmentation methods. Furthermore, the ACAM branch helps unify the shallow and deep convolutional features to further improve the model's performance. Extensive experiments on two public datasets and one private dataset show that ScribFormer outperforms state-of-the-art scribble-supervised segmentation methods and even achieves better results than fully-supervised segmentation methods. The code is released at https://github.com/HUANGLIZI/ScribFormer.
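As a rough illustration of how such a CNN-Transformer hybrid can fuse local and global branches, here is a schematic PyTorch sketch under our own simplifications; it is not the released ScribFormer code (see the repository above for the real implementation), and the layer choices are illustrative.

```python
import torch
import torch.nn as nn

class TripleBranch(nn.Module):
    """Toy triple-branch segmenter: CNN (local) + Transformer (global) + CAM."""
    def __init__(self, in_ch=3, dim=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        self.proj = nn.Conv2d(in_ch, dim, 1)  # pixel tokens for the Transformer
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)
        self.cam = nn.Conv2d(dim, n_classes, 1)   # class activation maps
        self.head = nn.Conv2d(dim, n_classes, 1)  # segmentation logits

    def forward(self, x):
        b, _, h, w = x.shape
        local = self.cnn(x)                               # (B, D, H, W)
        tokens = self.proj(x).flatten(2).transpose(1, 2)  # (B, H*W, D)
        globl = self.attn(tokens).transpose(1, 2).reshape(b, -1, h, w)
        fused = local + globl                             # fuse the two branches
        return self.head(fused), self.cam(fused)
```

The key design point the abstract describes is the fusion step: scribble supervision only constrains a few pixels, so the globally-attending branch supplies the shape context the local CNN cannot recover on its own.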
Urban building energy performance prediction and retrofit analysis using data-driven machine learning approach
Stakeholders such as urban planners and energy policymakers use building energy performance modeling and analysis to develop strategic sustainable energy plans aimed at reducing energy consumption and emissions from the built environment. However, inconsistent energy data and the lack of scalable building models create a gap between building energy modeling and traditional planning practices. One alternative, a large-scale energy usage survey, is time-consuming, and existing studies rely on traditional machine learning or statistical approaches to estimate large-scale energy performance. This paper proposes a solution that employs a data-driven machine learning approach to predict the energy performance of urban residential buildings, combining ensemble-based machine learning with end-use demand segregation. The proposed methodology consists of five steps: data collection, archetype development, physics-based parametric modeling, machine learning modeling, and urban building energy performance analysis. The methodology is tested on the Irish residential building stock, generating a synthetic dataset of one million buildings through parametric modeling of 19 identified vital variables for four residential building archetypes. As part of the machine learning modeling, the study implements an end-use demand segregation method covering heating, lighting, equipment, photovoltaics, and hot water to predict the energy performance of buildings at an urban scale. Furthermore, the model's performance is enhanced by an ensemble-based machine learning approach, achieving 91% accuracy compared to the traditional approach's 76%. Accurate prediction of building energy performance enables stakeholders, including energy policymakers and urban planners, to make informed decisions when planning large-scale retrofit measures.
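The end-use demand segregation step can be illustrated with a short scikit-learn sketch. This is a minimal sketch under assumed data shapes; the model choice, the `END_USES` list, and the sign convention for photovoltaics are our illustration, not the paper's exact configuration.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Fit one ensemble model per end use and sum the predictions,
# rather than regressing total consumption directly.
END_USES = ["heating", "lighting", "equipment", "hot_water", "photovoltaic"]

def fit_segregated(X, y_by_end_use):
    """X: (n_buildings, n_features); y_by_end_use: dict of per-use targets."""
    return {use: GradientBoostingRegressor().fit(X, y_by_end_use[use])
            for use in END_USES}

def predict_total(models, X):
    # Assumed convention: PV generation offsets demand, so it enters negatively.
    parts = {use: models[use].predict(X) for use in END_USES}
    demand = sum(parts[u] for u in END_USES if u != "photovoltaic")
    return demand - parts["photovoltaic"]
```

Segregating by end use lets each model specialize on the variables that drive that load (e.g., envelope parameters for heating), which is one plausible reason an ensemble of per-use predictors can outperform a single total-consumption regressor.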
Self-supervised learning for transferable representations
Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process, and it is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable recent shift toward approaches that leverage raw data alone. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. We then focus on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models against many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks move beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalises to real-world transformations. This begins to explain the differing empirical performance of self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection that aligns pre-training with the downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
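For context, the contrastive learners examined in the augmentation study typically optimize an NT-Xent (normalized temperature-scaled cross entropy) objective over two augmented views of each image. A minimal PyTorch version, our own sketch rather than code from the thesis:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss; z1, z2: (N, D) embeddings of two views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / tau                         # temperature-scaled cosine sims
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # positives: the i-th view matches the (i+n)-th and vice versa
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```

Because the loss pulls together whatever the augmentation pipeline treats as "the same image," the choice of augmentations directly dictates which invariances (spatial vs. appearance-based) the representation acquires, which is the trade-off the thesis investigates.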
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Integrated Computer-Aided Design, Experimentation, and Optimization Approach for Perovskites and Petroleum Packaging Processes
According to a World Economic Forum report, the U.S. currently has an energy efficiency of just 30%, illustrating the scope of and need for efficiency enhancement and waste minimization. In the U.S. energy sector, petroleum and solar energy are two key pillars with the potential to create research opportunities for the transition to a cleaner, greener, and more sustainable future. This research focuses on two pivotal areas: (i) computer-aided perovskite solar cell synthesis; and (ii) optimization of flow processes through multiproduct petroleum pipelines. In the area of perovskite synthesis, the emphasis is on enhancing structural stability, lowering costs, and improving sustainability. Using modeling and optimization methods for computer-aided molecular design (CAMD), efficient, sustainable, less toxic, and economically viable alternatives to conventional lead-based perovskites are obtained. In the second area, an actual industrial-scale operation for packaging multiple lube-oil blends is studied. Through an integrated approach of experimental characterization, process design, procedural improvements, testing protocols, control mechanisms, mathematical modeling, and optimization, the limitations of traditional packaging operations are identified, and innovative operational paradigms and strategies are developed by incorporating methods from process systems engineering and data-driven approaches.
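To give a flavour of the CAMD step, here is a toy screening loop in the group-contribution spirit; the groups, coefficients, and thresholds are invented purely for illustration, and real CAMD formulations would pose this as a constrained optimization rather than exhaustive enumeration.

```python
from itertools import product

# Hypothetical structural groups with made-up property contributions.
GROUPS = {"A": {"stability": 1.2, "toxicity": 0.1},
          "B": {"stability": 0.8, "toxicity": 0.4},
          "C": {"stability": 1.5, "toxicity": 0.9}}

def estimate(counts):
    """Group-contribution estimate: properties are sums of group contributions."""
    return {p: sum(n * GROUPS[g][p] for g, n in counts.items())
            for p in ("stability", "toxicity")}

# Enumerate candidate compositions and keep those meeting the constraints.
candidates = []
for counts in product(range(4), repeat=len(GROUPS)):
    c = dict(zip(GROUPS, counts))
    props = estimate(c)
    if props["stability"] >= 3.0 and props["toxicity"] <= 1.0:
        candidates.append((c, props))
```

The same keep-if-feasible logic, expressed with integer decision variables and a mixed-integer solver, is what lets CAMD search very large candidate spaces for lead-free perovskite alternatives.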
Time-based self-supervised learning for Wireless Capsule Endoscopy
State-of-the-art machine learning models, and especially deep learning ones, are significantly data-hungry: they require vast amounts of manually labeled samples to function correctly. However, in most medical imaging fields, obtaining such data can be challenging. Not only is the volume of data a problem, but so is the imbalance within its classes; it is common to have many more images of healthy patients than of those with pathology. Computer-aided diagnostic systems suffer from these issues, and their models are usually over-designed to perform accurately. This work proposes using self-supervised learning for wireless capsule endoscopy videos, introducing a custom-tailored method that does not initially need labels or class balance. We show that the inherent temporal structure inferred by our method improves the detection rate on several domain-specific applications, even under severe imbalance. State-of-the-art results are achieved in polyp detection, with 95.00 ± 2.09% Area Under the Curve, and 92.77 ± 1.20% accuracy on the CAD-CAP dataset.
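A common way to extract supervision from the temporal axis of a video, in the spirit of this work, is temporal-order verification. The sketch below is our illustration, not necessarily the paper's exact pretext task; it assumes an arbitrary CNN `backbone` that returns `feat_dim`-dimensional embeddings.

```python
import torch
import torch.nn as nn

class OrderVerifier(nn.Module):
    """Pretext task: predict whether two frames appear in original order."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone               # any CNN frame encoder
        self.head = nn.Linear(2 * feat_dim, 2) # logits: ordered / swapped

    def forward(self, frame_a, frame_b):
        fa, fb = self.backbone(frame_a), self.backbone(frame_b)
        return self.head(torch.cat([fa, fb], dim=1))

# Training pairs: sample frames (t, t+k) from a capsule video; with
# probability 0.5 swap them and label the pair accordingly. No manual
# annotation is required, so class imbalance in the labels never enters.
```

Because the labels come for free from the recording order, the encoder can be pre-trained on the full unlabeled video corpus before the small labeled set is used for the downstream pathology detector.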
Inverse Design of Metamaterials for Tailored Linear and Nonlinear Optical Responses Using Deep Learning
The conventional process for developing an optimal design for nonlinear optical responses is based on a trial-and-error approach that is largely inefficient and does not necessarily lead to an ideal result. Deep learning can automate this process and widen the realm of nonlinear geometries and devices. This research illustrates a deep learning framework used to create optimal plasmonic designs for metamaterials with specific desired optical responses, both linear and nonlinear. The algorithm can produce plasmonic patterns that maximize the second-harmonic nonlinear effects of a nonlinear metamaterial. A nanolaminate metamaterial is used as the nonlinear material, and plasmonic patterns are fabricated on the prepared nanolaminate to demonstrate the validity and efficacy of the deep learning algorithm for second-harmonic generation. Photonic upconversion from the infrared regime to the visible spectrum can occur through sum-frequency generation, and the deep learning algorithm was extended to optimize a nonlinear plasmonic metamaterial for sum-frequency generation. The framework was then further expanded using transfer learning to reduce the computational resources required to optimize metamaterials for new design parameters. The deep learning architecture applied in this research can be extended to other optical responses and drive the innovation of novel optical applications.
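The surrogate-plus-optimization pattern behind such inverse design can be sketched compactly. This is illustrative PyTorch only, not the dissertation's framework; in practice the surrogate network would first be trained on simulated or measured optical responses of candidate patterns before the design loop runs.

```python
import torch
import torch.nn as nn

# Differentiable surrogate: maps a flattened plasmonic pattern to a predicted
# second-harmonic generation (SHG) strength. Sizes are arbitrary placeholders.
surrogate = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                          nn.Linear(128, 1))

# Inverse design: gradient ascent on the predicted SHG through the surrogate,
# optimizing a relaxed (continuous) pattern that is binarized at the end.
pattern = torch.rand(1, 256, requires_grad=True)
opt = torch.optim.Adam([pattern], lr=0.01)
for _ in range(500):
    opt.zero_grad()
    loss = -surrogate(torch.sigmoid(pattern)).mean()  # maximize predicted SHG
    loss.backward()
    opt.step()
design = (torch.sigmoid(pattern) > 0.5).float()  # binary mask for fabrication
```

Transfer learning enters naturally in this pattern: a surrogate trained for one set of design parameters can be fine-tuned on a much smaller simulation budget for new wavelengths or processes, which is the resource saving the abstract describes.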