9,891 research outputs found

    An investigation of entorhinal spatial representations in self-localisation behaviours

    Spatially modulated cells of the medial entorhinal cortex (MEC) and neighbouring cortices are thought to provide the neural substrate for self-localisation behaviours. These cells include grid cells of the MEC, which are thought to perform path integration operations to update self-location estimates. In order to read this grid code, downstream cells are thought to reconstruct a positional estimate as a simple rate-coded representation of space. Here, I characterise the coding schemes of grid cells and putative readout cells recorded from mice performing a virtual reality (VR) linear location task that engaged them in both beaconing and path integration behaviours. I found that grid cells can express two distinct coding schemes on the linear track: a position code, in which periodic grid fields are anchored to salient features of the track, and a distance code, in which periodic grid fields lack this anchoring. Grid cells were found to switch between these coding schemes within sessions. When grid cells were encoding position, mice performed better on trials that required path integration, but not on trials that required beaconing. This result provides the first mechanistic evidence linking grid cell activity to path integration-dependent behaviour. Putative readout cells were found in the form of ramp cells, whose firing rate varies proportionally with location in defined regions of the linear track. This ramping activity was primarily explained by track position rather than by other kinematic variables such as speed and acceleration. These representations were maintained across both trial types and outcomes, indicating that they likely result from recall of the track structure. Together, these results support the functional importance of grid and ramp cells for self-localisation behaviours. Future investigations will examine the coherence between these two neural populations, which may together form a complete neural system for coding and decoding self-location in the brain.
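    As an illustration of the distinction the abstract draws (a toy sketch, not the thesis's actual analysis), the code below simulates a periodic grid cell whose fields are either anchored to the track (position code) or drift with cumulative distance run (distance code), and scores trial-to-trial rate-map stability; the field spacing, track length, and trial structure are hypothetical values chosen purely for illustration.

```python
# Toy sketch: position code vs distance code for a grid cell on a
# linear track. All parameter values are illustrative assumptions.
import numpy as np

track_len, spacing, n_trials = 200.0, 90.0, 40   # cm; hypothetical

def firing_rate(x, phase):
    """Periodic grid-like tuning curve along the track."""
    return np.exp(3.0 * np.cos(2 * np.pi * (x - phase) / spacing))

pos = np.linspace(0, track_len, 400)

# Position code: fields anchored to the track, same phase every trial.
pos_code = np.array([firing_rate(pos, 20.0) for _ in range(n_trials)])

# Distance code: field phase drifts with cumulative distance run, so
# fields occur at different track locations on each trial.
dist_code = np.array([firing_rate(pos, 20.0 + (t * track_len) % spacing)
                      for t in range(n_trials)])

def anchoring_score(rate_maps):
    """Mean correlation of each trial's map with the session average."""
    mean_map = rate_maps.mean(axis=0)
    return np.mean([np.corrcoef(m, mean_map)[0, 1] for m in rate_maps])

print("position-code stability:", anchoring_score(pos_code))   # near 1
print("distance-code stability:", anchoring_score(dist_code))  # much lower
```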

    Optofluidic Force Induction as a Process Analytical Technology

    Manufacturers of nanoparticle-based products rely on detailed information about critical process parameters, such as particle size and size distributions, concentration, and material composition, which directly reflect the quality of the final product. These process parameters are often obtained using offline characterization techniques that cannot provide the temporal resolution to detect dynamic changes in particle ensembles during a production process. To overcome this deficiency, we have recently introduced Optofluidic Force Induction (OF2i) for optical real-time counting with single-particle sensitivity and high throughput. In this paper, we apply OF2i to highly polydisperse and multimodal particle systems, where we also monitor evolutionary processes over large time scales. For oil-in-water emulsions, we detect in real time the transition between high-pressure homogenization states. For silicon carbide nanoparticles, we exploit the dynamic OF2i measurement capabilities to introduce a novel process feedback parameter based on the dissociation of particle agglomerates. Our results demonstrate that OF2i provides a versatile workbench for process feedback in a wide range of applications.
    Comment: 10 pages, 5 figures

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
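    As background for readers unfamiliar with leaky-wave scanning (a textbook relation, not taken from this paper), the fixed-frequency mechanism can be summarised as follows: the bias voltage reorients the LC molecules and thereby tunes the guided-mode phase constant, which steers the beam.

```latex
% Standard leaky-wave antenna pointing relation (textbook background,
% not from the paper): the main-beam angle \theta from broadside obeys
\sin\theta \;\approx\; \frac{\beta(\varepsilon_r)}{k_0},
% where k_0 is the free-space wavenumber and \beta the guided-mode
% phase constant. The DC bias tunes the LC relative permittivity
% \varepsilon_r between its extreme values, changing \beta and hence
% scanning the beam at a fixed operating frequency.
```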

    Irish Ocean Climate and Ecosystem Status Report

    Summary report for the Irish Ocean Climate & Ecosystem Status Report, which is published separately. This Irish Ocean Climate & Ecosystem Status Summary for Policymakers brings together the latest evidence of ocean change in Irish waters, summarising current trends in atmospheric patterns, ocean warming, sea level rise, ocean acidification, plankton and fish distributions and abundance, and seabird populations. The report represents a collaboration between marine researchers within the Marine Institute and others based in Ireland’s higher education institutes and public bodies, with authors from Met Éireann, Maynooth University, the University of Galway, the Atlantic Technological University, National Parks and Wildlife, Birdwatch Ireland, Trinity College Dublin, University College Dublin, Inland Fisheries Ireland, The National Water Forum, the Environmental Protection Agency, and the Dundalk Institute of Technology. Use has been made of archived marine data held by a range of organisations to elucidate key observed trends, and the report summarises the key findings and recommendations in each area as a guide to climate adaptation policy and for the public. It builds on the previous Ocean Climate & Ecosystem Status Report, published in 2010. The report examines the recently published literature in each topic area and, in many cases, combines this with analysis of new data sets, including long-term time series, to identify trends in essential ocean variables in Irish waters. In some cases, model projections of the likely future state of the atmosphere and ocean are presented under different climate emission scenarios. (Marine Institute)

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussion on future research directions to conclude this survey.
    Comment: Under Review
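    As a concrete illustration of one family the survey names, feature alignment, here is a minimal sketch of domain-adversarial training with a gradient-reversal layer (DANN-style); the network sizes and input shape are placeholder assumptions, not drawn from any surveyed method.

```python
# Illustrative DANN-style feature alignment sketch (PyTorch). Layer
# sizes and inputs are hypothetical placeholders.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feature_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
label_head  = nn.Linear(128, 10)   # trained on labeled source images only
domain_head = nn.Linear(128, 2)    # source-vs-target discriminator

def dann_loss(x_src, y_src, x_tgt, lam=1.0):
    ce = nn.CrossEntropyLoss()
    f_src, f_tgt = feature_net(x_src), feature_net(x_tgt)
    # Task loss: supervised only on the labeled source domain.
    task = ce(label_head(f_src), y_src)
    # Domain loss: the reversal layer makes the feature extractor
    # maximize domain confusion while the discriminator minimizes it,
    # aligning source and target feature distributions.
    feats = torch.cat([f_src, f_tgt])
    dom_y = torch.cat([torch.zeros(len(f_src)),
                       torch.ones(len(f_tgt))]).long()
    dom = ce(domain_head(GradReverse.apply(feats, lam)), dom_y)
    return task + dom
```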

    DATA AUGMENTATION FOR SYNTHETIC APERTURE RADAR USING ALPHA BLENDING AND DEEP LAYER TRAINING

    Human-based object detection in synthetic aperture radar (SAR) imagery is complex and technical, laboriously slow yet time critical: the perfect application for machine learning (ML). Training an ML network for object detection requires very large image datasets with embedded objects that are accurately and precisely labeled. Unfortunately, no such SAR datasets exist. Therefore, this paper proposes a method to synthesize wide field of view (FOV) SAR images by combining two existing datasets: SAMPLE, which is composed of both real and synthetic single-object chips, and MSTAR Clutter, which is composed of real wide-FOV SAR images. Synthetic objects are extracted from SAMPLE using threshold-based segmentation before being alpha-blended onto patches from MSTAR Clutter. To validate the novel synthesis method, individual object chips are created and classified using a simple convolutional neural network (CNN); testing is performed against the measured SAMPLE subset. A novel technique is also developed to investigate training activity in deep layers. The proposed data augmentation technique produces a 17% increase in the accuracy of measured SAR image classification. This improvement shows that any residual artifacts from segmentation and blending do not negatively affect ML, which is promising for future use in wide-area SAR synthesis.
    Outstanding Thesis. Major, United States Air Force. Approved for public release; distribution is unlimited.
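    The two-step synthesis described above (threshold-based segmentation followed by alpha blending) can be sketched as follows; the threshold and alpha values, array shapes, and function names are illustrative assumptions, not the thesis's actual parameters.

```python
# Hedged sketch of threshold-based segmentation plus alpha blending of
# an object chip onto a clutter patch. Values are illustrative only.
import numpy as np

def blend_object_onto_clutter(chip, clutter, thresh=0.35, alpha=0.8):
    """chip, clutter: 2-D float arrays in [0, 1] of the same shape."""
    # Threshold-based segmentation: bright pixels are treated as object.
    mask = (chip > thresh).astype(float)
    # Per-pixel alpha blend; background pixels keep the clutter values.
    a = alpha * mask
    return a * chip + (1.0 - a) * clutter

rng = np.random.default_rng(1)
chip = rng.random((64, 64)) ** 3          # stand-in for a SAMPLE chip
clutter = 0.3 * rng.random((64, 64))      # stand-in for an MSTAR patch
synthetic = blend_object_onto_clutter(chip, clutter)
```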

    Exploring QCD matter in extreme conditions with Machine Learning

    In recent years, machine learning has emerged as a powerful computational tool and novel problem-solving perspective for physics, offering new avenues for studying the properties of strongly interacting QCD matter under extreme conditions. This review article aims to provide an overview of the current state of this intersection of fields, focusing on the application of machine learning to theoretical studies in high energy nuclear physics. It covers diverse aspects, including heavy ion collisions, lattice field theory, and neutron stars, and discusses how machine learning can be used to explore and facilitate the physics goals of understanding QCD matter. The review also offers a methodological overview of common approaches, ranging from data-driven to physics-driven perspectives. We conclude by discussing the challenges and future prospects of machine learning applications in high energy nuclear physics, underscoring the importance of incorporating physics priors into the purely data-driven learning toolbox. This review highlights the critical role of machine learning as a valuable computational paradigm for advancing physics exploration in high energy nuclear physics.
    Comment: 146 pages, 53 figures

    DeepHTLV: a Deep Learning Framework for Detecting Human T-Lymphotrophic Virus 1 Integration Sites

    In the 1980s, researchers discovered the first human oncogenic retrovirus, human T-lymphotrophic virus type 1 (HTLV-1). Since then, HTLV-1 has been identified as the causative agent behind several diseases, such as adult T-cell leukemia/lymphoma (ATL) and HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). As part of its normal replication cycle, the viral RNA genome is reverse-transcribed into DNA and integrated into the host genome. With several hundreds to thousands of unique viral integration sites (VISs) distributed with indeterminate preference throughout the genome, detection of HTLV-1 VISs is a challenging task. Experimental studies typically detect VISs using molecular biology techniques such as fluorescence in-situ hybridization (FISH) or reverse transcriptase quantitative PCR (RT-qPCR). While these methods are accurate, they cannot be applied in a high-throughput manner. Next generation sequencing (NGS) has generated vast amounts of data, resulting in the development of several computational methods for rapid, genome-wide VIS detection, such as VERSE, VirusFinder, and DeepVISP. However, no such model exists for predicting HTLV-1 VISs. In this study, we developed DeepHTLV, the first deep neural network for accurate detection of HTLV-1 insertion sites. We focused on 1) accurately predicting HTLV-1 VISs by extracting and generating superior feature representations and 2) uncovering the cis-regulatory features surrounding the insertion sites. DeepHTLV was implemented as a deep convolutional neural network (CNN) with a self-attention architecture, chosen after comparison with several other deep neural network structures. To improve model accuracy, we trained the model using a bootstrap balanced sampling method with 10-fold cross-validation. Furthermore, we demonstrated that this model has higher accuracy than several traditional machine learning models, with a modest improvement in area under the curve (AUC) values of 3-10%. To study the cis-regulatory features around HTLV-1 insertion sites, we extracted informative motifs from the convolutional layer. Clustering of these motifs yielded eight unique consensus sequence motifs that represented potential integration sites in humans. The informative motif sequences were matched against a known transcription factor (TF) binding profile database, JASPAR2020, with the sequence matching tool TOMTOM. Seventy-nine TF associations were enriched in regions surrounding HTLV-1 VISs, and literature screening of HTLV-1, ATL, and HAM/TSP validated nearly half (34) of the predicted TF interactions. This work demonstrates that DeepHTLV can accurately identify HTLV-1 VISs, elucidate surrounding features regulating these insertion sites, and make biologically meaningful predictions about the cis-regulatory elements surrounding them.
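    As a rough sketch of the architecture family the abstract describes (a CNN with self-attention over one-hot-encoded DNA windows), the following is a minimal, assumed implementation; the layer sizes, kernel width, and window length are illustrative guesses, not DeepHTLV's published parameters.

```python
# Hedged sketch of a CNN + self-attention classifier for DNA windows.
# All hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA window as a (4, L) one-hot tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.eye(4)[idx].T

class VISClassifier(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        # The convolution acts as a learned motif scanner; its filters
        # are what a motif-extraction step would later inspect.
        self.conv = nn.Conv1d(4, channels, kernel_size=8, padding=4)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.head = nn.Linear(channels, 1)     # logit for P(insertion site)

    def forward(self, x):                      # x: (batch, 4, L)
        h = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, L', C)
        h, _ = self.attn(h, h, h)              # self-attention over positions
        return self.head(h.mean(dim=1))        # pooled logit per window

model = VISClassifier()
window = one_hot("ACGT" * 25).unsqueeze(0)     # one 100-bp toy window
logit = model(window)
```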

    Reformulating aircraft routing algorithms to reduce fuel burn and thus CO2 emissions

    During the UN Climate Change Conference (COP26) in November 2021, the international aviation community agreed to advance actions to reduce CO2 emissions. Adopting more fuel-efficient routes, now that full global satellite coverage is available, could achieve this quickly and economically. Here, flights between New York and London from 1 December 2019 to 29 February 2020 are considered. Trajectories through wind fields from a global atmospheric re-analysis dataset are found using optimal control theory. Initially, time-minimal routes are obtained by applying Pontryagin’s Minimum Principle, and minimum-time air distances are compared with actual Air Traffic Management tracks. Potential air distance savings range from 0.7 to 16.4%, depending on direction and track efficiency. To gauge the potential for longer-duration time-minimal round trips in the future, due to climate change, trajectories are considered for historic and future time periods using an ensemble of climate models. Next, fixed-time, fuel-minimal routes are sought. Fuel consumption is modelled with a new physics-driven fuel burn function, which is aircraft-model specific. The control variables are position-dependent aircraft headings and airspeeds, or headings alone. The importance of airspeed in finding trajectories is established by comparing the fuel burn found from a global search of optimised results for the discretised approximation of each formulation. Finally, dynamic programming is applied to find free-time, fuel-optimal routes. Results show that significant fuel reductions are possible, compared with estimates of fuel use from actual flights, without significant changes to flight duration. Fuel use for winter 2019-2020 could have been reduced by 4.6% eastbound and 3.9% westbound on flights between Heathrow and John F. Kennedy airports, equating to a 16.6 million kg reduction in CO2 emissions. Thus, large reductions in fuel consumption and emissions are possible immediately, without waiting decades for incremental improvements in fuel efficiency through technological advances.
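    To convey the dynamic-programming idea in miniature (a toy stand-in, not the paper's model), the sketch below routes an aircraft eastward through a gridded wind field, choosing row transitions that minimise crossing time at fixed airspeed; the paper's version additionally optimises airspeed and minimises an aircraft-specific fuel-burn function rather than time.

```python
# Toy dynamic program for wind-optimal routing across a wind-field grid.
# Wind statistics, grid size, and speeds are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_lat, n_lon = 9, 30
tailwind = rng.normal(20.0, 15.0, (n_lat, n_lon))  # m/s, illustrative
airspeed, dx = 240.0, 50_000.0                     # m/s; m per column

def step_time(i, j):
    """Time to cross one column at row i, column j (tailwind helps)."""
    return dx / max(airspeed + tailwind[i, j], 50.0)

# cost[i] = best time to reach row i; each step may hold the row or
# move one row north/south, mimicking discretised heading choices.
cost = np.full(n_lat, np.inf)
cost[n_lat // 2] = 0.0                             # departure row
back = np.zeros((n_lat, n_lon), dtype=int)         # for route recovery
for j in range(n_lon):
    new = np.full(n_lat, np.inf)
    for i in range(n_lat):
        for di in (-1, 0, 1):                      # predecessor rows
            k = i + di
            if 0 <= k < n_lat and cost[k] + step_time(i, j) < new[i]:
                new[i], back[i, j] = cost[k] + step_time(i, j), k
    cost = new

print(f"best crossing time: {cost.min() / 3600:.2f} h")
```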