INDEMICS: An Interactive High-Performance Computing Framework for Data Intensive Epidemic Modeling
We describe the design and prototype implementation of Indemics (Interactive Epidemic Simulation), a modeling environment that uses high-performance computing technologies to support complex epidemic simulations. Indemics can support policy analysts and epidemiologists interested in planning for and controlling pandemics. Indemics goes beyond traditional epidemic simulations by providing a simple and powerful way to represent and analyze policy-based as well as individual-based adaptive interventions. Users can also stop the simulation at any point, assess the state of the simulated system, and add further interventions. Indemics is available to end-users via a web-based interface.
Detailed performance analysis shows that Indemics greatly enhances the capability and productivity of simulating complex intervention strategies with only a marginal decrease in performance. We also demonstrate how Indemics was applied in real case studies in which complex interventions were implemented.
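As a rough illustration of the interactive pattern described above, the sketch below runs a toy individual-based SIR simulation that an analyst can pause, inspect, and steer with a new intervention before resuming. Everything here (the SIRSimulation class, vaccinate_fraction, the parameter values, the trigger threshold) is hypothetical and merely stands in for the HPC- and database-backed machinery of the real system.

    import random

    class SIRSimulation:
        """Toy individual-based SIR model; a stand-in for the HPC engine."""
        def __init__(self, n, seed_infections=5, beta=0.3, gamma=0.1):
            self.state = ["S"] * n
            for i in random.sample(range(n), seed_infections):
                self.state[i] = "I"
            self.beta, self.gamma = beta, gamma
            self.day = 0

        def step(self):
            # Homogeneous-mixing infection pressure computed once per day.
            p_inf = self.beta * self.state.count("I") / len(self.state)
            for i, s in enumerate(self.state):
                if s == "S" and random.random() < p_inf:
                    self.state[i] = "I"
                elif s == "I" and random.random() < self.gamma:
                    self.state[i] = "R"
            self.day += 1

        def summary(self):
            return {s: self.state.count(s) for s in "SIR"}

    def vaccinate_fraction(sim, fraction):
        """Adaptive intervention: immunize a fraction of the susceptibles."""
        susceptible = [i for i, s in enumerate(sim.state) if s == "S"]
        for i in random.sample(susceptible, int(len(susceptible) * fraction)):
            sim.state[i] = "R"

    sim = SIRSimulation(n=10_000)
    for _ in range(30):
        sim.step()
    print(sim.day, sim.summary())      # pause and assess the epidemic state
    if sim.summary()["I"] > 500:       # policy trigger chosen by the analyst
        vaccinate_fraction(sim, 0.4)   # add an intervention, then resume
    for _ in range(60):
        sim.step()
    print(sim.day, sim.summary())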
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenges have not been recognized, and a methodology of its own has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What are the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
Usage of Network Simulators in Machine-Learning-Assisted 5G/6G Networks
Without any doubt, Machine Learning (ML) will be an important driver of future communications due to its foreseen performance when applied to complex problems. However, the application of ML to networking systems raises concerns among network operators and other stakeholders, especially regarding trustworthiness and reliability. In this paper, we examine the role of network simulators in bridging the gap between ML and communications systems. In particular, we present an architectural integration of simulators into ML-aware networks for training, testing, and validating ML models before they are applied to the operative network. Moreover, we provide insights on the main challenges arising from this integration and discuss how they can be overcome. Finally, we illustrate the integration of network simulators into ML-assisted communications through a proof-of-concept testbed implementation of a residential Wi-Fi network.
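The simulator-in-the-loop workflow the abstract outlines can be sketched as follows: train a model on simulator-generated data, validate it on fresh simulated scenarios, and only then admit it to the live network. The wifi_simulator function, the linear model, and the 5 Mbps acceptance threshold are all illustrative assumptions, not part of the paper; a real setup would use a packet-level simulator such as ns-3.

    import random
    from sklearn.linear_model import LinearRegression

    def wifi_simulator(load, interferers):
        """Stand-in for a packet-level network simulator: maps channel load
        and interferer count to an observed throughput in Mbps."""
        base = 120.0 / (1 + interferers)
        return max(0.0, base * (1.0 - load) + random.gauss(0.0, 2.0))

    def sample(n):
        X = [[random.random(), random.randint(0, 4)] for _ in range(n)]
        y = [wifi_simulator(load, k) for load, k in X]
        return X, y

    # 1. Train the ML model entirely on simulator-generated data.
    X_train, y_train = sample(5000)
    model = LinearRegression().fit(X_train, y_train)

    # 2. Validate on fresh simulated scenarios before touching the live network.
    X_val, y_val = sample(1000)
    errors = [abs(model.predict([x])[0] - y) for x, y in zip(X_val, y_val)]
    mean_error = sum(errors) / len(errors)

    # 3. Gate deployment on a trustworthiness criterion.
    if mean_error < 5.0:   # threshold the operator deems acceptable
        print(f"validated (mean error {mean_error:.2f} Mbps); deploy")
    else:
        print("rejected; refine the model or the simulator")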
Bit error performance of diffuse indoor optical wireless channel pulse position modulation system employing artificial neural networks for channel equalisation
The bit-error rate (BER) performance of a pulse position modulation (PPM) scheme for non-line-of-sight indoor optical links employing channel equalisation based on an artificial neural network (ANN) is reported. Channel equalisation is achieved by training a multilayer perceptron ANN. A comparative study of unequalised `soft' decision decoding and `hard' decision decoding, along with neural-equalised `soft' decision decoding, is presented for different bit resolutions and for optical channels with different delay spreads. We show that unequalised `hard' decision decoding performs the worst for all values of normalised delay spread, becoming impractical beyond a normalised delay spread of 0.6. However, `soft' decision decoding with or without equalisation displays relatively improved performance for all values of the delay spread. The study shows that for a highly diffuse channel, the signal-to-noise ratio required to achieve a BER of 10^-5 with the ANN-based equaliser is ~10 dB lower than with unequalised `soft' decoding for 16-PPM at a data rate of 155 Mbps. Our results indicate that, across the whole range of delay spreads, neural network equalisation is an effective tool for mitigating inter-symbol interference.
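A minimal sketch of the neural equalisation idea, scaled down to 4-PPM with a short exponential channel response so it runs quickly; the pulse model, noise level, and MLP configuration are assumptions for illustration, not the paper's experimental setup.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    L, n_sym = 4, 4000                  # 4-PPM toy version of the 16-PPM link
    symbols = rng.integers(0, L, n_sym)

    # One pulse per L-slot frame, then a dispersive channel (exponential
    # decay loosely mimicking a diffuse indoor response) plus Gaussian noise.
    tx = np.zeros(n_sym * L)
    tx[np.arange(n_sym) * L + symbols] = 1.0
    h = np.exp(-np.arange(6) / 2.0)
    h /= h.sum()                        # channel impulse response
    rx = np.convolve(tx, h)[: n_sym * L] + rng.normal(0, 0.1, n_sym * L)

    # Each training example is one received frame plus one trailing slot of
    # context, so the network can learn to undo inter-symbol interference.
    rx_pad = np.concatenate([rx, np.zeros(L)])
    X = np.stack([rx_pad[i * L : i * L + 2 * L] for i in range(n_sym)])
    split = 3000
    eq = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    eq.fit(X[:split], symbols[:split])
    ser = 1 - eq.score(X[split:], symbols[split:])
    print(f"symbol error rate with MLP equaliser: {ser:.4f}")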
GeNN: a code generation framework for accelerated brain simulations
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models, however, is computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks in order to address this challenge. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface that does not require in-depth technical knowledge from its users. We present performance benchmarks showing that a 200-fold speedup over a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, although the speedup differs for other models.
GeNN is available for Linux, Mac OS X and Windows. The source code, user manual, tutorials, wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
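To make concrete what GeNN parallelises, the sketch below integrates a population of standard Hodgkin-Huxley neurons in plain numpy. This is not GeNN's API; GeNN instead generates equivalent per-neuron CUDA code from a model description and runs one GPU thread per neuron, which is where the reported speedups come from.

    import numpy as np

    # Standard Hodgkin-Huxley constants (mV, mS/cm^2, uF/cm^2).
    C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.4

    def hh_step(V, m, h, n, I_ext, dt=0.01):
        """One forward-Euler update for the whole population at once."""
        a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        b_m = 4.0 * np.exp(-(V + 65) / 18)
        a_h = 0.07 * np.exp(-(V + 65) / 20)
        b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
        a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        b_n = 0.125 * np.exp(-(V + 65) / 80)
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V = V + dt * (I_ext - I_ion) / C
        m = m + dt * (a_m * (1 - m) - b_m * m)
        h = h + dt * (a_h * (1 - h) - b_h * h)
        n = n + dt * (a_n * (1 - n) - b_n * n)
        return V, m, h, n

    N = 100_000                          # population size
    V = np.full(N, -65.0)                # resting potential
    m, h, n = np.full(N, 0.05), np.full(N, 0.6), np.full(N, 0.32)
    for _ in range(1000):                # 10 ms of simulated time
        V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
    print(V[:5])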