Clifford analysis between continuous and discrete
Some decades ago, D. Knuth et al. coined the term concrete mathematics for the blending of CONtinuous and
disCRETE mathematics, reflecting the fact that problems of standard discrete mathematics can often be solved by
methods based on continuous mathematics together with a controlled manipulation of mathematical formulas.
Of course, the idea was not new, but against the background of the computer-aided algebraic manipulation
tools emerging at that time, it emphasized their use for elegant solutions of old problems or even the detection of new
important relationships. Our aim is to show that the same philosophy can be successfully applied to Clifford
Analysis by taking advantage of its inherent non-commutative algebra to obtain results or develop methods
that differ from those of other approaches. In particular, we determine new binomial sums by using a hypercomplex
generating function for a special type of monogenic polynomials and develop an algorithm for the determination
of their scalar and vector parts, which illustrates well the differences from the corresponding complex case.

The research of the first author was partially supported by the R&D Unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT).
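The abstract's split of a hypercomplex quantity into scalar and vector parts can be illustrated in miniature with quaternions, which form the Clifford algebra Cl(0,2). The sketch below is ours, not the authors' algorithm, and uses only the standard quaternion product to show the non-commutativity the abstract relies on.

```python
# Illustrative sketch only: the paper works in full Clifford analysis, while
# here we use quaternions (the Clifford algebra Cl(0,2)) to show how a
# hypercomplex product splits into scalar and vector parts. All names below
# are our own illustrative choices.

def qmul(p, q):
    """Multiply two quaternions given as (scalar, (x, y, z)) pairs.

    Scalar part: a0*b0 - a.b ; vector part: a0*b + b0*a + a x b.
    """
    a0, (a1, a2, a3) = p
    b0, (b1, b2, b3) = q
    scalar = a0 * b0 - (a1 * b1 + a2 * b2 + a3 * b3)
    vector = (a0 * b1 + b0 * a1 + a2 * b3 - a3 * b2,
              a0 * b2 + b0 * a2 + a3 * b1 - a1 * b3,
              a0 * b3 + b0 * a3 + a1 * b2 - a2 * b1)
    return scalar, vector

# Non-commutativity of the algebra: i*j = k, but j*i = -k.
i = (0.0, (1.0, 0.0, 0.0))
j = (0.0, (0.0, 1.0, 0.0))
print(qmul(i, j))  # (0.0, (0.0, 0.0, 1.0))
print(qmul(j, i))  # (0.0, (0.0, 0.0, -1.0))
```

Unlike the complex case, the order of the factors changes the vector part, which is exactly why the scalar/vector decomposition of monogenic polynomials requires its own algorithm.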
Simulation
Welcome to this graduate course on Discrete-Event Simulation, a hybrid
discipline that combines knowledge and techniques from Operations
Research (OR) and Computer Science (CS) (Figure 1). Owing to fast and
continuous improvements in computer hardware and software, Simulation
has become an emerging research area with practical applications in industry
and services. Today, most real-world systems are too complex to be modeled
and studied by using analytical methods. Instead, numerical methods such as
simulation must be employed in order to study the performance of those
systems, to gain insight into their internal behavior and to consider
alternative (“what-if”) scenarios. Applications of Simulation are widespread
across different knowledge areas, including the performance analysis
of computer and telecommunication systems or the optimization of
manufacturing and logistics processes. This course introduces concepts and
methods for designing, performing and analyzing experiments conducted
using a Simulation approach. Among other concepts, this course discusses the
proper collection and modeling of input data and system randomness,
the generation of random variables to emulate the behavior of the real
system, the verification and validation of models, and the analysis of the
experimental outputs.
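The course topics above (random-variate generation, emulating system randomness, output analysis) can be sketched with a minimal discrete-event model: an M/M/1 queue. The parameter values and function names below are our own illustrative assumptions, not material from the course.

```python
# Minimal discrete-event sketch: an M/M/1 queue with exponential interarrival
# and service times, illustrating random-variate generation and the analysis
# of a simulation output (mean waiting time in queue).
import random

def mm1_mean_wait(lam, mu, n_customers, seed=42):
    """Simulate n_customers through an M/M/1 queue; return mean wait in queue."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current customer
    server_free = 0.0    # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)             # next arrival event
        start = max(arrival, server_free)           # waits if the server is busy
        total_wait += start - arrival
        server_free = start + rng.expovariate(mu)   # service-completion event
    return total_wait / n_customers

# Queueing theory predicts Wq = lam / (mu * (mu - lam)); for lam=0.5, mu=1.0
# that is 1.0, so the estimate below should be close to 1 for long runs.
print(mm1_mean_wait(0.5, 1.0, 200_000))
```

Comparing the simulated estimate against the known analytical result is a simple instance of model verification, one of the course topics; for systems without closed-form solutions, only the simulated estimate is available.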
Mathematical Modeling of Metabolic-Genetic Networks
Systems biology deals with the computational and mathematical modeling of complex
biological systems. The aim is to understand the big picture of the system’s dynamics
rather than the individual parts by integrating different sciences, e.g., mathematics, physics,
biology, computer science, and engineering. In biological systems, mathematical models
of biochemical networks are necessary for predicting and optimizing the behavior of cells
in culture. Different mathematical models have been discussed, such as discrete models,
continuous models, and hybrid models. In a discrete model, the biological system assumes
discrete values. A continuous model uses a system of differential equations to describe the
change of concentrations of substances in a cell over time. A hybrid model combines both
discrete and continuous models. The main challenge in continuous models is to find the
kinetic parameter values. In this thesis, we build a kinetic model of a metabolic-genetic
network introduced in Covert et al., 2001 that mimics a discrete model of regulatory flux
balance analysis (rFBA), which is based on steady-state assumptions. The kinetic model
we introduce has unknown parameters, so parameter estimation techniques are required.
We estimate the parameters using data sets generated from a simulation of the rFBA
model. In nature, many phenomena of interest are high-dimensional
and complex. Thus, model reduction is considered a vital topic in systems biology. Model
reduction methods are mathematical techniques that aim to represent a high-dimensional,
dynamical system by a low-dimensional system that roughly preserves the main features
and characteristics of the original system. The idea of model order reduction is to use the
reduced-order model instead of the full-order model in the simulation or optimization of the
system to reduce the computational effort and the runtime of the simulations. In this thesis,
we discuss two different model reduction methods. The first method assumes a time scale
separation, i.e., it assumes two time scales, a fast time scale and a slow time scale, where
the fast time scale dynamics converge to a quasi-steady state. The second approach, proper
orthogonal decomposition, aims at obtaining low-dimensional approximate descriptions of
high-dimensional processes while retaining the most important features of the dynamics. We
apply these approaches to different biological system models from the BioModels database.
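The second reduction method mentioned above, proper orthogonal decomposition, can be sketched directly via the singular value decomposition of a snapshot matrix. The toy dynamics below are our own illustration, not a model from the BioModels database or the thesis.

```python
# POD sketch: collect snapshots of a high-dimensional trajectory, compute an
# SVD, and keep the leading left singular vectors as a reduced basis.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

# Synthetic "high-dimensional" data: 50 state variables driven by two
# underlying modes plus small noise; one column per time point.
modes = rng.standard_normal((50, 2))
coeffs = np.vstack([np.sin(t), np.exp(-0.3 * t)])
snapshots = modes @ coeffs + 1e-3 * rng.standard_normal((50, t.size))

# The POD basis consists of the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of the energy
basis = U[:, :r]

# Reduced-order reconstruction: project snapshots onto the r-dim subspace.
reconstruction = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(r, rel_err)   # a handful of modes suffices for this low-rank trajectory
```

In a model-reduction workflow, the reduced basis would then be used to project the full dynamical system onto `r` coordinates, so that simulation and optimization run on the low-dimensional system instead of the full one.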
The Statistical Physics of Learning Revisited: Typical Learning Curves in Model Scenarios
The exchange of ideas between computer science and statistical physics has advanced the understanding of machine learning and inference significantly. This interdisciplinary approach is currently regaining momentum due to the revived interest in neural networks and deep learning. Methods borrowed from statistical mechanics complement other approaches to the theory of computational and statistical learning. In this brief review, we outline and illustrate some of the basic concepts. We exemplify the role of the statistical physics approach in terms of a particularly important contribution: the computation of typical learning curves in student-teacher scenarios of supervised learning. Two by-now-classical examples from the literature illustrate the approach: the learning of a linearly separable rule by a perceptron with continuous and with discrete weights, respectively. We address these prototypical problems in the simplifying limit of stochastic training at high formal temperature and obtain the corresponding learning curves.
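The student-teacher scenario described above can be sketched numerically: a fixed teacher perceptron defines a linearly separable rule, and a student perceptron learns from teacher-labeled examples. The dimension, sample size, and training rule below are our own illustrative choices, not the high-temperature analysis of the review, but the generalization-error formula for two perceptrons is the standard one.

```python
# Toy student-teacher setup: a teacher perceptron labels random inputs, and a
# student perceptron is trained on those labels with the classic
# mistake-driven perceptron update.
import numpy as np

rng = np.random.default_rng(1)
N = 50                                   # input dimension
teacher = rng.standard_normal(N)         # defines the rule y = sign(teacher . x)

def generalization_error(student, teacher):
    """For Gaussian inputs, eps = arccos(overlap) / pi for two perceptrons."""
    overlap = student @ teacher / (np.linalg.norm(student) * np.linalg.norm(teacher))
    return np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi

student = np.zeros(N)
for _ in range(2000):                    # present teacher-labeled examples
    x = rng.standard_normal(N)
    y = np.sign(teacher @ x)
    if np.sign(student @ x) != y:        # update only on mistakes
        student += y * x

print(generalization_error(student, teacher))  # small after many examples
```

Recording this error as a function of the number of examples per weight traces out an empirical learning curve, the quantity that the statistical-physics analysis computes analytically in the thermodynamic limit.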