High speed event-based visual processing in the presence of noise
Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel, application-focused event-based algorithms developed here are designed primarily for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals, and processing speed in embedded systems.
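The abstract does not specify the algorithms used; as an illustration only, a common baseline for noise robustness in event-based vision is a background-activity filter that keeps an event only if a neighbouring pixel produced an event recently. A minimal sketch, with all names and parameter values our own assumptions:

```python
# Hypothetical sketch of a background-activity noise filter for event-based
# vision. Events are (x, y, t) tuples; an event is kept only if one of its
# 8 neighbouring pixels fired within the last `tau` time units.
def filter_events(events, width, height, tau=10000):
    last = [[-float("inf")] * width for _ in range(height)]  # last event time per pixel
    kept = []
    for x, y, t in events:
        # check the 8-neighbourhood for a sufficiently recent event
        recent = any(
            t - last[ny][nx] <= tau
            for ny in range(max(0, y - 1), min(height, y + 2))
            for nx in range(max(0, x - 1), min(width, x + 2))
            if (nx, ny) != (x, y)
        )
        if recent:
            kept.append((x, y, t))
        last[y][x] = t
    return kept

# Two spatiotemporally correlated events: the second passes.
# An isolated event far away is rejected as noise.
events = [(5, 5, 100), (6, 5, 150), (50, 20, 160)]
print(filter_events(events, 64, 64, tau=1000))  # → [(6, 5, 150)]
```

The filter is attractive for embedded neuromorphic systems because it needs only one timestamp per pixel and a constant-time neighbourhood check per event.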
Neuromorphic Engineering Editors' Pick 2021
This collection showcases well-received spontaneous articles from the past couple of years, specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.
Forecasting: theory and practice
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts, including operations, economics, finance, energy, environment, and social good.
We do not claim that this review is an exhaustive list of methods and applications; the list was compiled based on the expertise and interests of the authors. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
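As a concrete illustration of the kind of classical method such a review surveys (this is our own minimal sketch, not code from the article), simple exponential smoothing produces a one-step-ahead forecast as an exponentially weighted average of past observations:

```python
# Minimal sketch of simple exponential smoothing (SES), a classical
# baseline forecasting method. Not taken from the article itself.
def ses_forecast(series, alpha=0.3):
    """Return the one-step-ahead forecast after smoothing the series.

    alpha in (0, 1] controls how quickly old observations are discounted.
    """
    level = series[0]                            # initialise level at first value
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level  # exponentially weighted update
    return level

data = [10.0, 12.0, 11.0, 13.0]
print(ses_forecast(data, alpha=0.5))  # → 12.0
```

Production use would normally go through an established library rather than a hand-rolled loop, but the update rule itself is this one line.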
Evolutionary games on graphs
Game theory is one of the key paradigms behind many scientific disciplines
from biology to behavioral sciences to economics. In its evolutionary form and
especially when the interacting agents are linked in a specific social network
the underlying solution concepts and methods are very similar to those applied
in non-equilibrium statistical physics. This review gives a tutorial-type
overview of the field for physicists. The first three sections introduce the
necessary background in classical and evolutionary game theory from the basic
definitions to the most important results. The fourth section surveys the
topological complications implied by non-mean-field-type social network
structures in general. The last three sections discuss in detail the dynamic
behavior of three prominent classes of models: the Prisoner's Dilemma, the
Rock-Scissors-Paper game, and Competing Associations. The major theme of the
review is in what sense and how the graph structure of interactions can modify
and enrich the picture of long term behavioral patterns emerging in
evolutionary games.
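To make the setting concrete, here is an illustrative sketch (our own construction, not code from the review) of a spatial Prisoner's Dilemma on a ring with the imitate-the-best update rule common in this literature, using the weak-dilemma payoffs with a single temptation parameter b:

```python
# Spatial Prisoner's Dilemma on a ring, synchronous imitate-the-best update.
# Illustrative sketch only; parametrisation: CC -> 1, D exploiting C -> b,
# C exploited by D -> 0, DD -> 0 (the "weak" dilemma often used on graphs).
def step(strategies, b=1.5):
    """One synchronous update; 'C' cooperates, 'D' defects."""
    n = len(strategies)

    def payoff(i):
        total = 0.0
        for j in (i - 1, i + 1):               # two ring neighbours
            s, o = strategies[i], strategies[j % n]
            if s == "C":
                total += 1.0 if o == "C" else 0.0
            else:
                total += b if o == "C" else 0.0
        return total

    scores = [payoff(i) for i in range(n)]
    new = []
    for i in range(n):
        # adopt the strategy of the best scorer in the neighbourhood (incl. self)
        best = max((i - 1) % n, i, (i + 1) % n, key=lambda k: scores[k])
        new.append(strategies[best])
    return new

# A single defector among cooperators spreads in one step.
print("".join(step(list("CCCDCCC"))))  # → CCDDDCC
```

Even this toy example shows the review's central theme: the graph of interactions, not just the payoff matrix, determines how cooperation and defection propagate.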
The Utility of Adaptive Designs in Publicly Funded Confirmatory Trials
Introduction: Adaptive designs (ADs) are underused, particularly in publicly funded confirmatory trials, despite their promising benefits and the methodological prominence they are given in the statistical literature.
Research Question: This thesis investigates why ADs are underused in the publicly funded setting, explores facilitators, and proposes recommendations to improve their appropriate use.
Methods: Confirmatory ADs are reviewed from a statistical and practical perspective. Cross-disciplinary key stakeholders are then interviewed to explore roadblocks to the use of ADs. Based on the interview findings, follow-up quantitative surveys are undertaken to explore wider perceptions of barriers, concerns, and facilitators, with the aim of generalising the findings. The surveys targeted CTUs (Clinical Trials Units), private sector organisations, and Public Funders in the UK. In view of some of the findings, case studies of applied confirmatory ADs are reviewed to highlight their scope and characteristics, and to investigate the state of reporting of the most common AD. The design and implementation of selected ADs is demonstrated using retrospectively and prospectively planned case studies. Lessons learned are highlighted to enhance the design of future trials with similar characteristics.
Results: The main barriers to the use of ADs include the lack of funding support accessible to UK CTUs to aid their design; limited practical knowledge; preference for traditional mainstream designs; difficulties in marketing ADs to key stakeholders; limited time to support ADs relative to other competing priorities; lack of applied training; and insufficient access to case studies of undertaken ADs, which would facilitate practical learning and successful implementation. Researchers’ inadequate description of AD-related aspects (such as rationale, scope, and decision-making criteria to guide the planned AD) in grant proposals was viewed as among the major obstacles by Public Funders. Suboptimal reporting of the design and conduct of undertaken ADs appears to influence concerns about their robustness in decision-making and credibility to change practice.
Conclusions: Most obstacles appear connected to a lack of practical implementation knowledge and applied training, and limited access to adequately reported case studies to facilitate practical learning. Assurance of scientific rigour through transparent, adequate reporting is paramount to the credibility of findings from adaptive trials. There is a need for a consensus guidance document on ADs and an AD-tailored CONSORT statement to enhance their reporting and conduct. This thesis provides detailed recommendations to improve the appropriate use of ADs and identifies areas for future related research.
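For readers unfamiliar with confirmatory ADs, the decision rule of a common design class (group sequential with early stopping for efficacy) can be sketched in a few lines. This is a generic illustration, not a design from the thesis; the Pocock critical value of about 2.178 for two equally spaced looks at overall two-sided alpha = 0.05 is a standard textbook figure:

```python
# Generic sketch of the decision rule in a two-look group sequential design
# with a Pocock-style constant boundary. Illustration only, not thesis code.
def interim_decision(z_interim, z_final=None, boundary=2.178):
    """Pocock two-look boundary; ~2.178 preserves overall alpha = 0.05."""
    if z_interim >= boundary:
        return "stop early: efficacy"
    if z_final is None:
        return "continue to final analysis"
    return "reject H0" if z_final >= boundary else "fail to reject H0"

print(interim_decision(2.5))               # interim statistic crosses boundary
print(interim_decision(1.2, z_final=2.3))  # continues, then rejects at final look
```

The point the thesis makes about reporting applies directly here: without the boundary, the number of looks, and the stopping rationale written down in advance, readers cannot judge whether such a trial's claimed error rates hold.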
Image processing methods to segment speech spectrograms for word level recognition
The ultimate goal of automatic speech recognition (ASR) research is to allow a computer to recognize speech in real time, with full accuracy, independent of vocabulary size, noise, speaker characteristics or accent. Today, systems are trained to learn an individual speaker's voice and larger vocabularies statistically, but accuracy is not ideal. A small gap between actual speech and its acoustic representation in the statistical mapping causes Hidden Markov Model (HMM) methods to fail to match the acoustic speech signals, and consequently leads to classification errors. These errors in the low level recognition stage of ASR inevitably produce errors at the higher levels. Therefore, additional research ideas need to be incorporated within current speech recognition systems. This study seeks a new perspective on speech recognition. It incorporates a new approach for speech recognition, supporting it with wider previous research, validating it with a lexicon of 533 words and integrating it with a current speech recognition method to overcome the existing limitations. The study focusses on applying image processing to speech spectrogram images (SSIs). We thus develop a new writing system, which we call the Speech-Image Recogniser Code (SIR-CODE). The SIR-CODE refers to the transposition of the speech signal to an artificial domain (the SSI) that allows the classification of the speech signal into segments. The SIR-CODE allows the matching of all speech features (formants, power spectrum, duration, cues of articulation places, etc.) in one process. This was made possible by adding a Realization Layer (RL) on top of the traditional speech recognition layer (based on HMMs) to check all sequential phones of a word in a single-step matching process. The study shows that the method gives better recognition results than HMMs alone, leading to accurate and reliable ASR in noisy environments.
Therefore, the addition of the RL for SSI matching is a highly promising solution to compensate for the failure of HMMs in low level recognition. In addition, the same concept of employing SSIs can be used for whole sentences to reduce classification errors in HMM-based high level recognition. The SIR-CODE bridges the gap between theory and practice of phoneme recognition by matching SSI patterns at the word level. Thus, it can be adapted for dynamic time warping on the SIR-CODE segments, which can help to achieve ASR based on SSI matching alone.
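The dynamic time warping step suggested above can be sketched generically. The one-number-per-segment feature representation below is a placeholder assumption for illustration, not the SIR-CODE's actual segment features:

```python
# Classic dynamic time warping (DTW) with absolute-difference local cost,
# sketched for matching per-segment feature sequences of unequal length.
# Placeholder features (one scalar per segment); not the thesis's code.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of diagonal match / insertion / deletion
            d[i][j] = cost + min(d[i - 1][j - 1], d[i - 1][j], d[i][j - 1])
    return d[n][m]

# A time-stretched copy of a pattern stays at distance zero,
# while a genuinely different pattern does not.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
print(dtw_distance([1, 2, 3], [3, 2, 1]))
```

This tolerance to local stretching is exactly what makes DTW attractive for word-level matching, where speaking rate varies between utterances.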
Neuromorphic Models of the Amygdala with Applications to Spike Based Computing and Robotics
Computational neural simulations do not match the functionality and operation of the brain processes they attempt to model. This gap exists due to both our incomplete understanding of brain function and the technological limitations of computers. Moreover, given that the shrinking of transistors has reached its physical limit, fundamentally different computer paradigms are needed to help bridge this gap. Neuromorphic hardware technologies attempt to abstract the form of brain function to provide a computational solution post-Moore’s Law, and neuromorphic algorithms provide software frameworks to increase biological plausibility within neural models. This dissertation focuses on utilizing neuromorphic frameworks to better understand how the brain processes social and emotional stimuli. It describes the creation of a spiking-neuron computational model of the amygdala, the brain region behind our social interactions, and the simulation of the model using brain-inspired computer hardware, as well as the implementation of other spike-based computations on such hardware. Although scientists agree that the amygdala is the main component of the social brain, few models exist to explain amygdala function beyond “fight or flight”. This model incorporates neuroscientists’ more nuanced understanding of the amygdala, and is validated by comparing the neural responses measured from the model to responses measured in primate amygdalae under the same experimental conditions. This model will inform future physiological experiments, which will generate deeper neuroscientific insights, which will in turn allow for better neural models. Repeated iteratively, this positive feedback loop in which better models beget better understanding of biology and vice versa will help close the gap between the computer and the brain.
The computer networks and hardware that emerge from this process have the potential to achieve higher computing efficiency, approaching or perhaps surpassing the efficiency of the human brain; provide the foundation for new approaches to artificial intelligence and machine learning within a spike-based computing paradigm; and widen our understanding of brain function.
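As background, spiking models such as this are typically built from leaky integrate-and-fire (LIF) neurons. The following is a standard textbook sketch with Euler integration, not the dissertation's code; all parameter values are illustrative:

```python
# Textbook leaky integrate-and-fire neuron, Euler-integrated.
# dV/dt = (-(V - v_rest) + I) / tau; spike and reset at threshold.
# Illustrative parameters only; not from the dissertation.
def simulate_lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Return spike times for an input-current sequence sampled every dt."""
    v, spikes = v_rest, []
    for step_idx, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau  # leaky integration
        if v >= v_thresh:                       # threshold crossing -> spike
            spikes.append(step_idx * dt)
            v = v_rest                          # reset membrane potential
    return spikes

# A constant suprathreshold input produces regular, periodic spiking.
print(simulate_lif([2.0] * 50))
```

Even this simplest spiking unit already computes with event times rather than continuous activations, which is the property neuromorphic hardware exploits for efficiency.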