
    Graduate Catalog of Studies, 2023-2024


    An investigation of entorhinal spatial representations in self-localisation behaviours

    Spatially modulated cells of the medial entorhinal cortex (MEC) and neighbouring cortices are thought to provide the neural substrate for self-localisation behaviours. These cells include grid cells of the MEC, which are thought to perform path integration operations to update self-location estimates. To read this grid code, downstream cells are thought to reconstruct a positional estimate as a simple rate-coded representation of space. Here, I show the coding schemes of grid cells and putative readout cells recorded from mice performing a virtual reality (VR) linear location task that engaged mice in both beaconing and path integration behaviours. I found that grid cells can encode two distinct coding schemes on the linear track: a position code, which reflects periodic grid fields anchored to salient features of the track, and a distance code, which reflects periodic grid fields without this anchoring. Grid cells were found to switch between these coding schemes within sessions. When grid cells were encoding position, mice performed better on trials that required path integration but not on trials that required beaconing. This result provides the first mechanistic evidence linking grid cell activity to path integration-dependent behaviour. Putative readout cells were found in the form of ramp cells, which fire at rates proportional to location within defined regions of the linear track. This ramping activity was primarily explained by track position rather than by other kinematic variables such as speed and acceleration. These representations were maintained across both trial types and outcomes, indicating that they likely result from recall of the track structure. Together, these results support the functional importance of grid and ramp cells for self-localisation behaviours.
Future investigations will look into the coherence between these two neural populations, which may together form a complete neural system for coding and decoding self-location in the brain.
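The position/distance distinction above can be caricatured in a few lines: a position code keeps its periodic firing fields at the same track locations on every trial, while a distance code keeps the spatial period but drifts in phase. The following toy sketch (illustrative only; the period, cell model, and classifier threshold are assumptions, not the analysis used in the thesis) classifies simulated trials by the across-trial coherence of the field phase.

```python
# Toy illustration: distinguishing a "position" grid code (fields anchored
# to the track) from a "distance" code (fields drift in phase per trial).
import math
import random

random.seed(1)
PERIOD = 40.0  # cm between grid fields (illustrative value)

def firing_rate(x, phase):
    # Periodic firing field along the linear track, rectified at zero.
    return max(0.0, math.cos(2 * math.pi * (x - phase) / PERIOD))

def trial_phases(n_trials, anchored):
    # Position code: phase locked to the track start on every trial.
    # Distance code: phase re-drawn at random each trial.
    return [0.0 if anchored else random.uniform(0, PERIOD)
            for _ in range(n_trials)]

def classify(phases):
    # Circular coherence of field phases across trials: near 1 when
    # fields repeat at the same locations, near 0 when they drift.
    mean_vec = (sum(math.cos(2 * math.pi * p / PERIOD) for p in phases),
                sum(math.sin(2 * math.pi * p / PERIOD) for p in phases))
    coherence = math.hypot(*mean_vec) / len(phases)
    return "position" if coherence > 0.5 else "distance"

print(classify(trial_phases(50, anchored=True)))   # position
print(classify(trial_phases(50, anchored=False)))  # distance
```

The same circular-coherence idea underlies standard phase-locking measures; the 0.5 threshold here is arbitrary and chosen only to separate the two simulated regimes cleanly.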

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, over the last 15 years the semiconductor field has elevated power to a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
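A concrete taste of the software-level techniques the survey covers is loop perforation, which trades accuracy for speed by processing only a fraction of a loop's iterations. The sketch below is a generic illustration, not an example taken from the surveyed works; the function names and skip factor are made up for the demo.

```python
# Loop perforation: approximate a reduction by sampling every `skip`-th
# element instead of iterating over the whole input.

def mean_exact(xs):
    # Baseline: exact mean over all elements.
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    # Approximation: use only a 1/skip subsample of the elements.
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

data = list(range(1000))
exact = mean_exact(data)           # 499.5
approx = mean_perforated(data, 4)  # uses 250 of 1000 elements -> 498.0
rel_error = abs(approx - exact) / exact
print(f"exact={exact}, approx={approx}, rel. error={rel_error:.4f}")
```

The appeal of such techniques is that the quality loss (here about 0.3% relative error) is often negligible for the application while the work saved (here 75% of the iterations) is substantial.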

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment of the model. Here, I present a generalized workflow for this process. It makes it simple to configure how information is converted into model features, enabling rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation, and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method for training variant effect scores.
All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
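The workflow stages named above (annotate, select a model by validation performance, deploy for scoring) can be sketched end to end in miniature. Everything below is illustrative: the two "annotations" (a conservation score and an allele frequency), the linear scoring rule, and the single tuned weight are stand-ins, not the actual CADD features or model.

```python
# Minimal sketch of a variant-effect-scoring workflow:
# annotate -> tune a hyperparameter -> score new variants.
import random

random.seed(0)

def annotate(variant):
    # Toy "annotations": conservation score and allele frequency.
    return (variant["conservation"], variant["allele_freq"])

def score(features, w):
    # Linear score: higher = more likely pathogenic.
    cons, freq = features
    return w * cons - (1 - w) * freq

# Toy training set: pathogenic variants tend to be conserved and rare.
train = [{"conservation": random.random(), "allele_freq": random.random()}
         for _ in range(100)]
labels = [int(v["conservation"] - v["allele_freq"] > 0) for v in train]

def accuracy(w):
    preds = [int(score(annotate(v), w) > 0) for v in train]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# "Hyperparameter optimization": grid search over the feature weight.
best_w = max([0.1, 0.3, 0.5, 0.7, 0.9], key=accuracy)

# "Deployment": rank an unseen variant with the selected model.
new_variant = {"conservation": 0.9, "allele_freq": 0.01}
print(best_w, score(annotate(new_variant), best_w))
```

A real pipeline replaces each toy piece with heavy machinery (millions of variants, dozens of annotation tracks, cross-validated model selection), but the control flow is the same, which is what makes a generalized workflow practical.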

    An empirical investigation of the relationship between integration, dynamic capabilities and performance in supply chains

    This research aimed to develop an empirical understanding of the relationships between integration, dynamic capabilities and performance in the supply chain domain, on the basis of which two conceptual frameworks were constructed to advance the field. The core motivation for the research was that, at the time of writing, the combined relationship between the three concepts had not yet been examined, although their interrelationships had been studied individually. To achieve this aim, deductive and inductive reasoning were used to guide the qualitative study, which was undertaken via multiple case studies to investigate lines of enquiry addressing the research questions. This is consistent with the author's philosophical adoption of the ontology of relativism and the epistemology of constructionism, which was considered appropriate for the research questions. Empirical data and evidence were collected, and various triangulation techniques were employed to ensure their credibility. Key features of grounded theory coding techniques were drawn upon for data coding and analysis, generating two levels of findings. These revealed that whilst integration and dynamic capabilities were crucial in improving performance, performance in turn informed the former, reflecting a cyclical and iterative relationship rather than a purely linear one. Adopting a holistic approach towards the relationship was key to producing complementary strategies that can deliver sustainable supply chain performance. The research makes theoretical, methodological and practical contributions to the field of supply chain management. The theoretical contribution includes the development of two emerging conceptual frameworks at the micro and macro levels. The former provides greater specificity, as it allows meta-analytic evaluation of the three concepts and their dimensions, providing detailed insight into their correlations.
The latter gives a holistic view of their relationships and how they are connected, reflecting a middle-range theory that bridges theory and practice. The methodological contribution lies in presenting models that address gaps associated with the inconsistent use of terminology in philosophical assumptions and a lack of rigour in deploying case study research methods. In terms of its practical contribution, this research offers insights that practitioners could adopt to enhance their performance. They can do so, without necessarily having to forgo certain desired outcomes, by using targeted integrative strategies and drawing on their dynamic capabilities.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications combining synthetic aperture radar and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present innovative, cutting-edge research results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Making Connections: A Handbook for Effective Formal Mentoring Programs in Academia

    This book, Making Connections: A Handbook for Effective Formal Mentoring Programs in Academia, makes a unique and needed contribution to the mentoring field, as it focuses solely on mentoring in academia. The handbook is a collaborative institutional effort between Utah State University's (USU) Empowering Teaching Open Access Book Series and the Mentoring Institute at the University of New Mexico (UNM). It is available as (a) an e-book through Pressbooks, (b) a downloadable PDF on USU's Open Access Book Series website, and (c) a print version available for purchase on the USU Empower Teaching Open Access page and on Amazon.

    Voicing Kinship with Machines: Diffractive Empathetic Listening to Synthetic Voices in Performance.

    This thesis contributes to the field of voice studies by analyzing the design and production of synthetic voices in performance. The work explores six case studies, consisting of different performative experiences of the last decade (2010-2020) that featured synthetic voice design. It focuses on the political and social impact of synthetic voices, starting from, yet also challenging, the concepts of the voice in the machine and the voice of the machine. The synthetic voices explored often play the role of simulated artificial intelligences, so the thesis extends its questions to technology at large. The analysis of the case studies follows new materialist and posthumanist premises, while seeking to contest the patriarchal and neoliberal approach to technological development through feminist and decolonial approaches, and develops a taxonomy for synthetic voices in performance. Chapter 1 introduces terms and explains the taxonomy. Chapter 2 looks at familiar representations of fictional AI. Chapter 3 introduces headphone theatre, exploring immersive practices. Chapters 4 and 5 engage with chatbots. Chapter 6 explores in depth the interaction between human and artificial intelligence, whereas Chapter 7 moves towards music production and live art. The body of the thesis includes the work of Pipeline Theatre, Rimini Protokoll, Annie Dorsen, Begüm Erciyas, and Holly Herndon. The analysis is informed by posthumanism, feminism, and performance studies, starting from my own practice as a sound designer and singer, and looking at aesthetics of reproduction, audience engagement, and voice composition. This thesis has been designed to inspire and provoke practitioners and scholars to explore synthetic voices further, question predominant biases of binarism, and acknowledge their importance in redefining technology.

    2023-2024 Boise State University Undergraduate Catalog

    This catalog is primarily for and directed at students. However, it serves many audiences, such as high school counselors, academic advisors, and the public. In this catalog you will find an overview of Boise State University and information on admission, registration, grades, tuition and fees, financial aid, housing, student services, and other important policies and procedures. Most of this catalog, however, is devoted to describing the various programs and courses offered at Boise State.