Single-cell time-series analysis of metabolic rhythms in yeast
The yeast metabolic cycle (YMC) is a biological rhythm in budding yeast (Saccharomyces cerevisiae). It entails oscillations in the concentrations and redox states of intracellular metabolites, oscillations in transcript levels, temporal partitioning of biosynthesis, and, in chemostats, oscillations in oxygen consumption. Most studies of the YMC have been based on chemostat experiments, and it is unclear whether YMCs arise from interactions between cells or are generated independently by each cell. This thesis aims to characterise the YMC in single cells and its response to nutrient and genetic perturbations. Specifically, I use microfluidics to trap and separate yeast cells, then record the time-dependent intensity of flavin autofluorescence, which is a component of the YMC.
Single-cell microfluidics produces a large amount of time series data. Noisy and short time series produced from biological experiments restrict the computational tools that are useful for analysis. I developed a method to filter time series, a machine learning model to classify whether time series are oscillatory, and an autocorrelation method to examine the periodicity of time series data.
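As an illustration of the autocorrelation idea, the sketch below estimates a period by locating the first zero crossing of the autocorrelation function and then its subsequent maximum; this is a generic heuristic on synthetic data, not the thesis's exact pipeline.

```python
import numpy as np

def autocorrelation(x):
    """Normalised autocorrelation of a 1-D signal for non-negative lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def estimate_period(x):
    """Estimate the dominant period: skip the zero-lag peak by finding the
    first negative autocorrelation value, then return the lag where the
    autocorrelation is largest after that point."""
    acf = autocorrelation(x)[: len(x) // 2]
    negative = np.where(acf < 0)[0]
    if negative.size == 0:
        return None  # no oscillatory structure detected
    start = negative[0]
    return start + int(np.argmax(acf[start:]))

# Synthetic single-cell trace: a 20-sample period plus measurement noise
rng = np.random.default_rng(0)
t = np.arange(200)
trace = np.sin(2 * np.pi * t / 20) + 0.1 * rng.normal(size=t.size)
period = estimate_period(trace)
```

Short, noisy traces are exactly the hard case: the zero-crossing step guards against the noise spike at lag zero being mistaken for periodicity.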
My experimental results show that yeast cells exhibit oscillations in flavin fluorescence. Specifically, I show that in high-glucose conditions, cells generate flavin oscillations asynchronously within a population, and that these flavin oscillations couple with the cell division cycle. I show that cells can individually reset the phase of their flavin oscillations in response to abrupt nutrient changes, independently of the cell division cycle. I also show that deletion strains generate flavin oscillations that behave differently from the dissolved oxygen oscillations observed in chemostat conditions.
Finally, I use flux balance analysis to address whether proteomic constraints in cellular metabolism mean that temporal partitioning of biosynthesis is advantageous for the yeast cell, and whether such partitioning explains the timing of the metabolic cycle. My results show that under proteomic constraints, it is advantageous for the cell to sequentially synthesise biomass components because doing so shortens the timescale of biomass synthesis. However, the degree of advantage of sequential over parallel biosynthesis is lower when both carbon and nitrogen sources are limiting.
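The flux balance analysis setting can be illustrated with a toy linear programme: maximise a biomass flux subject to steady-state mass balance and a crude "proteome budget" on enzyme-weighted flux. The network, costs, and bounds below are invented for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
# Steady state requires S @ v = 0 for internal metabolites A and B.
S = np.array([[1, -1, 0],    # metabolite A: produced by R1, consumed by R2
              [0, 1, -1]])   # metabolite B: produced by R2, consumed by R3
b_eq = [0, 0]

# Crude proteomic constraint: enzyme-cost-weighted fluxes within a budget.
enzyme_cost = [[1, 2, 1]]    # assumed costs per unit flux
budget = [8]

c = [0, 0, -1]               # linprog minimises, so maximise v3 via -v3
res = linprog(c, A_ub=enzyme_cost, b_ub=budget, A_eq=S, b_eq=b_eq,
              bounds=[(0, 10)] * 3, method="highs")
# Mass balance forces v1 = v2 = v3, and the budget 4*v <= 8 binds at v = 2.
```

Under such a budget, splitting the proteome across all pathways at once lowers every flux, which is the intuition behind sequential biosynthesis being faster.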
This thesis thus confirms autonomous generation of flavin oscillations, and suggests a model in which the YMC responds to nutrient conditions and subsequently entrains the cell division cycle. It also emphasises the possibility that subpopulations in the culture explain chemostat-based observations of the YMC. Furthermore, this thesis paves the way for using computational methods to analyse large datasets of oscillatory time series, which is useful for various fields of study beyond the YMC.
Serial Biasing Technique for Rapid Single Flux Quantum Circuits
Superconductor electronics based on Single Flux Quantum (SFQ) technology are considered a strong contender for the ‘beyond CMOS’ future of digital circuits because of their high speed and low power dissipation. In fact, digital operations beyond tens of GHz have been routinely demonstrated in SFQ technology. These circuits have widespread applications, including high-speed analog-to-digital conversion, digital signal processing, high-speed computing, and emerging topics such as control circuitry for superconducting quantum computing.
Rapid Single Flux Quantum (RSFQ) circuits have emerged as a promising candidate within SFQ technology, with information encoded in picosecond-wide, millivolt-level voltage pulses. As with any integrated circuit technology, scalability of RSFQ circuits is essential to realizing their applications. These circuits, based on the Josephson junction, require a DC bias current for correct operation. The bias current requirement grows with circuit complexity, and this has multiple implications for circuit operation. Large currents produce magnetic fields that can interfere with logic operation. Furthermore, the heat load delivered to the superconducting chip also increases with current, which could drive the circuit ‘normal’, i.e. no longer superconducting. These problems make reduction of the bias current necessary.
Serial Biasing (SB) is a previously proposed bias-current reduction technique in which a digital circuit is partitioned into multiple identical islands and bias current is provided to each island serially. While this scheme is promising, it presents multiple challenges, such as designing driver–receiver pair circuits with robust, wide operating bias margins and managing currents on the floating islands.
This thesis investigates SB systematically, focusing on the design and measurement of the fundamental components of this technique with an emphasis on reliability and scalability. It presents circuit techniques that achieve high-speed serially biased RSFQ circuits with robust operating margins, along with the experimental evidence to support them. It develops a framework for serial biasing that could be used by electronic design tools to automate the design and synthesis of complex RSFQ circuits. It also investigates Passive Transmission Lines (PTLs) for use as passive interconnects between library cells in a complex design, reducing the DC bias current required by the active circuitry.
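The current-reduction principle behind serial biasing can be seen with back-of-the-envelope numbers (the island count, current, and voltage below are assumed, not taken from the thesis): one bias current threads all islands in series, so the supply current drops by the number of islands while the delivered power is unchanged.

```python
# Illustrative comparison of parallel vs serial biasing of N identical islands.
N = 16             # number of identical islands (assumed)
I_island = 50e-3   # DC bias current needed per island, in amperes (assumed)
V_island = 2.6e-3  # bias voltage dropped across one island, in volts (assumed)

# Parallel biasing: every island draws its current from one rail.
I_parallel = N * I_island
P_parallel = I_parallel * V_island

# Serial biasing: one current threads all islands; island voltages stack.
I_serial = I_island
P_serial = I_serial * (N * V_island)

print(I_parallel, I_serial)  # supply current falls by a factor of N
```

The N-fold smaller supply current is what relaxes the magnetic-field and heat-load problems described above.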
2023-2024 Boise State University Undergraduate Catalog
This catalog is primarily directed at students. However, it serves many audiences, such as high school counselors, academic advisors, and the public. In this catalog you will find an overview of Boise State University and information on admission, registration, grades, tuition and fees, financial aid, housing, student services, and other important policies and procedures. Most of this catalog, however, is devoted to describing the various programs and courses offered at Boise State.
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: to consider what is computed by many of our brain systems, and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book: Rolls (2021) Brain Computations: What and How, Oxford University Press.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
Performance modelling for scalable deep learning
Performance modelling for scalable deep learning is essential for quantifying the efficiency of large parallel workloads. Performance models provide run-time estimates by modelling various aspects of an application on a target system, and building accurate models requires comprehensive analysis. Limitations of current performance models include poor explainability of the computation time of a neural network's internal processes and limited applicability to particular architectures.
Existing performance models for deep learning fall broadly into two methodologies: analytical modelling and empirical modelling. Analytical modelling takes a transparent approach, converting the internal mechanisms of the model or application into a mathematical model that corresponds to the goals of the system. Empirical modelling predicts outcomes from observation and experimentation, characterising algorithm performance using sample data, and is a good alternative to analytical modelling. However, both approaches have limitations, such as poor explainability of the computation time of a neural network's internal processes and poor generalisation. To address these issues, this thesis hybridises the analytical and empirical approaches, developing a novel generic performance model that provides a general expression for a deep neural network framework in a distributed environment and allows accurate performance analysis and prediction.
The contributions can be summarized as follows:
In the initial study, a comprehensive literature review led to the development of a performance model based on synchronous stochastic gradient descent (S-SGD) for analysing the execution-time performance of deep learning frameworks in a multi-GPU environment. This model’s evaluation involved three deep learning models (Convolutional Neural Networks (CNN), Autoencoder (AE), and Multilayer Perceptron (MLP)), implemented in three popular deep learning frameworks (MXNet, Chainer, and TensorFlow) respectively, following an analytical approach. Additionally, a generic expression for the performance model was formulated, considering intrinsic parameters and extrinsic scaling factors that affect computing time in a distributed environment. This formulation posed a global optimization problem with a cost function dependent on unknown constants within the generic expression; differential evolution was used to identify the values best fitting the experimentally determined computation times. Furthermore, regularization techniques were applied to enhance the accuracy and stability of the performance model. Lastly, the proposed generic performance model underwent experimental evaluation in a real-world application. The results provided valuable insights into the influence of hyperparameters on performance, demonstrating the robustness and applicability of the performance model in understanding and optimizing model behaviour.
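The fitting step can be sketched as follows: assume a hypothetical generic expression for per-iteration time on p workers, then let differential evolution recover its unknown constants from (here synthetic) timing measurements. The expression and numbers are illustrative, not the thesis's actual model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical generic expression: serial, compute, and communication terms.
def runtime(params, p):
    a, b, c = params
    return a + b / p + c * np.log2(p)

# Synthetic "profiled" times standing in for real measurements.
true_params = (0.5, 8.0, 0.3)
workers = np.array([1, 2, 4, 8, 16, 32])
rng = np.random.default_rng(1)
measured = runtime(true_params, workers) + rng.normal(scale=0.01, size=workers.size)

# Global optimisation of the squared-error cost over the unknown constants.
def cost(params):
    return np.sum((runtime(params, workers) - measured) ** 2)

result = differential_evolution(cost, bounds=[(0, 10), (0, 50), (0, 5)], seed=2)
a, b, c = result.x
```

Differential evolution needs only bound constraints and no gradients, which suits cost functions assembled from heterogeneous profiled measurements.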
Robust, Energy-Efficient, and Scalable Indoor Localization with Ultra-Wideband Technology
Ultra-wideband (UWB) technology has been rediscovered in recent years for its potential to provide centimeter-level accuracy in GNSS-denied environments. The large-scale adoption of UWB chipsets in smartphones places demanding requirements on the energy efficiency, robustness, scalability, and cross-device compatibility of UWB localization systems. This thesis investigates, characterizes, and proposes several solutions for these pressing concerns. First, we investigate the impact of different UWB device architectures on the energy efficiency, accuracy, and cross-platform compatibility of UWB localization systems. The thesis provides the first comprehensive comparison between the two types of physical interfaces (PHYs) defined in the IEEE 802.15.4 standard: with low and high pulse repetition frequency (LRP and HRP, respectively). In the comparison, we focus not only on the ranging/localization accuracy but also on the energy efficiency of the PHYs. We found that the LRP PHY consumes 6.4–100 times less energy than the HRP PHY in the evaluated devices. On the other hand, distance measurements acquired with the HRP devices had 1.23–2 times lower standard deviation than those acquired with the LRP devices. Therefore, the HRP PHY might be more suitable for applications with high-accuracy constraints than the LRP PHY.
The impact of different UWB PHYs also extends to the application layer. We found that ranging or localization error-mitigation techniques are frequently trained and tested on only one device and would likely not generalize to different platforms. To this end, we identified four challenges in developing platform-independent error-mitigation techniques in UWB localization, which can guide future research in this direction.
Besides the cross-platform compatibility, localization error-mitigation techniques raise another concern: most of them rely on extensive data sets for training and testing. Such data sets are difficult and expensive to collect and often representative only of the precise environment they were collected in. We propose a method to detect and mitigate non-line-of-sight (NLOS) measurements that does not require any manually-collected data sets. Instead, the proposed method automatically labels incoming distance measurements based on their distance residuals during the localization process. The proposed detection and mitigation method reduces, on average, the mean and standard deviation of localization errors by 2.2 and 5.8 times, respectively.
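The residual-based idea can be sketched as follows: fit a position to all range measurements, label the worst-fitting measurement as suspected NLOS, and re-localize without it. The anchor layout, bias, and leave-one-out policy are illustrative, not the thesis's exact algorithm.

```python
import numpy as np

# Illustrative setup: four anchors, one range lengthened by an NLOS bias.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([2.0, 3.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges[3] += 1.5  # NLOS propagation only ever lengthens the measured path

def localize(anchors, ranges, iters=20):
    """Gauss-Newton least-squares fit of a 2-D position to ranges."""
    x = anchors.mean(axis=0)  # initial guess: centroid of the anchors
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]  # Jacobian of the range model
        r = d - ranges                  # per-anchor range residuals
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

est = localize(anchors, ranges)
residuals = np.linalg.norm(anchors - est, axis=1) - ranges
suspect = int(np.argmax(np.abs(residuals)))      # label worst-fitting range
keep = np.arange(len(ranges)) != suspect
refined = localize(anchors[keep], ranges[keep])  # mitigate: drop and re-fit
```

Because labels come from the residuals themselves, no manually collected training set is needed, which is the point of the proposed method.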
UWB and Bluetooth Low Energy (BLE) are frequently integrated in localization solutions since they can provide complementary functionalities: BLE is more energy-efficient than UWB but it can provide location estimates with only meter-level accuracy. On the other hand, UWB can localize targets with centimeter-level accuracy albeit with higher energy consumption than BLE. In this thesis, we provide a comprehensive study of the sources of instabilities in received signal strength (RSS) measurements acquired with BLE devices. The study can be used as a starting point for future research into BLE-based ranging techniques, as well as a benchmark for hybrid UWB–BLE localization systems.
Finally, we propose a flexible scheduling scheme for time-difference-of-arrival (TDOA) localization with UWB devices. Unlike in previous approaches, the reference anchor and the order of the responding anchors change every time slot. The flexible anchor allocation makes the system more robust to NLOS propagation than traditional approaches. In the proposed setup, the user device is a passive listener that localizes itself using messages received from the anchors. Therefore, the system can scale to an unlimited number of devices and can preserve the location privacy of the user. The proposed method is implemented on custom hardware using a commercial UWB chipset. We evaluated the proposed method against the standard TDOA algorithm and range-based localization. In line of sight (LOS), the proposed TDOA method has a localization accuracy similar to the standard TDOA algorithm, down to a 95% localization error of 15.9 cm. In NLOS, the proposed TDOA method outperforms the classic TDOA method in all scenarios, with a reduction of up to 16.4 cm in the localization error.
Cotutelle joint doctoral dissertation.
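The rotating-schedule idea can be sketched as follows; the slot structure and rotation policy are illustrative stand-ins, not the thesis's actual protocol.

```python
def tdoa_schedule(anchors, num_slots):
    """Toy flexible TDOA schedule: in each time slot a different anchor
    serves as the reference, and the remaining anchors respond in a
    rotated order (illustrative policy only)."""
    schedule = []
    n = len(anchors)
    for slot in range(num_slots):
        reference = anchors[slot % n]
        responders = [anchors[(slot + k) % n] for k in range(1, n)]
        schedule.append((reference, responders))
    return schedule

sched = tdoa_schedule(["A0", "A1", "A2", "A3"], 4)
```

Rotating the reference means no single anchor's NLOS condition corrupts every slot, which is the robustness argument made above.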
Core interface optimization for multi-core neuromorphic processors
Hardware implementations of Spiking Neural Networks (SNNs) represent a promising approach to edge computing for applications that require low power and low latency and that cannot resort to external cloud-based computing services. However, most solutions proposed so far either support only relatively small networks or take up significant hardware resources to implement large networks. Realizing large-scale and scalable SNNs requires an efficient asynchronous communication and routing fabric that enables the design of multi-core architectures. In particular, the core interface that manages inter-core spike communication is a crucial component, as it represents the Power-Performance-Area (PPA) bottleneck, especially in the arbitration architecture and the routing memory. In this paper we present an arbitration mechanism, with the corresponding asynchronous encoding pipeline circuits, based on hierarchical arbiter trees. The proposed scheme reduces latency by more than 70% in sparse-event mode compared to state-of-the-art arbitration architectures, at lower area cost. The routing memory makes use of asynchronous Content Addressable Memory (CAM) with Current Sensing Completion Detection (CSCD), which saves approximately 46% energy and achieves a 40% increase in throughput against conventional asynchronous CAM using configurable delay lines, at the cost of only a slight increase in area. In addition, because they radically reduce the core-interface resources in multi-core neuromorphic processors, the arbitration and CAM architectures we propose can also be applied to a wide range of general asynchronous circuits and systems.
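A behavioural sketch of the tree idea (pure software, not the asynchronous circuit itself): each node of a binary arbiter tree chooses between its two subtrees, so a grant crosses only ceil(log2 N) stages rather than scanning all N request lines. The leftmost-wins policy is a simplification; fair arbiters alternate grants.

```python
import math

def arbitrate(requests, lo=0):
    """Grant one active request line via a binary arbiter tree
    (leftmost-wins at every node, for simplicity)."""
    n = len(requests)
    if n == 1:
        return lo if requests[0] else None
    mid = n // 2
    winner = arbitrate(requests[:mid], lo)        # left subtree first
    if winner is not None:
        return winner
    return arbitrate(requests[mid:], lo + mid)    # else right subtree

requests = [False, False, True, False, False, True, False, False]
granted = arbitrate(requests)
stages = math.ceil(math.log2(len(requests)))  # tree depth: 3 for 8 lines
```

In sparse-event mode only a few requests are active at once, so the logarithmic depth, rather than the request count, sets the grant latency.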
Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), has begun to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic, and the basic concepts of EVA (e.g., definition, architectures) have not been fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Comment: 31 pages, 13 figures
Consistency vs. Availability in Distributed Real-Time Systems
In distributed applications, Brewer's CAP theorem tells us that when networks become partitioned (P), one must give up either consistency (C) or availability (A). Consistency is agreement on the values of shared variables; availability is the ability to respond to reads and writes accessing those shared variables. Availability is a real-time property, whereas consistency is a logical property. We have extended the CAP theorem to relate quantitative measures of these two properties to quantitative measures of communication and computation latency (L), obtaining a relation called the CAL theorem that is linear in a max-plus algebra. This paper shows how to use the CAL theorem in various ways to help design real-time systems. We develop a methodology for systematically trading off availability and consistency in application-specific ways and for guiding the system designer when placing functionality in end devices, in edge computers, or in the cloud. We build on the Lingua Franca coordination language to provide system designers with concrete analysis and design tools to make the required tradeoffs in deployable software.
Comment: 12 pages. arXiv admin note: text overlap with arXiv:2109.0777
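Max-plus linearity, the algebraic setting of the CAL theorem, can be illustrated with a toy max-plus matrix-vector product; the matrices of the actual CAL theorem depend on the system's communication topology, and the numbers here are invented.

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus product: (A ⊗ x)_i = max_j (A[i, j] + x[j]).
    Ordinary addition plays the role of multiplication, max that of addition."""
    return np.max(A + x[None, :], axis=1)

# Toy two-node example: A[i, j] is a communication latency from node j to
# node i, and x[j] a local processing latency (illustrative numbers only).
A = np.array([[0.0, 3.0],
              [2.0, 0.0]])
x = np.array([5.0, 4.0])
bound = maxplus_matvec(A, x)  # worst-case end-to-end latency bounds
print(bound)  # [max(0+5, 3+4), max(2+5, 0+4)] = [7. 7.]
```

Because max and + distribute the way + and × do, such latency relations stay "linear" and can be analysed with linear-algebra-style tools.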