Linearized Asymptotic Stability for Fractional Differential Equations
We prove the theorem of linearized asymptotic stability for fractional
differential equations. More precisely, we show that an equilibrium of a
nonlinear Caputo fractional differential equation is asymptotically stable if
its linearization at the equilibrium is asymptotically stable. As a consequence
we extend Lyapunov's first method to fractional differential equations by
proving that if the spectrum of the linearization is contained in the sector
\{\lambda \in \mathbb{C} : |\arg \lambda| > \frac{\alpha \pi}{2}\}, where \alpha
denotes the order of the fractional differential equation, then the equilibrium
of the nonlinear fractional differential equation is asymptotically stable.
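The sector condition above is easy to check numerically. A minimal sketch, assuming a toy two-dimensional Jacobian of a hypothetical linearization (the matrix below is illustrative, not from the paper):

```python
import numpy as np

def spectrum_in_stability_sector(J, alpha):
    """Check the linearized stability condition for a Caputo fractional
    differential equation of order alpha in (0, 1): every eigenvalue
    lambda of the Jacobian J must satisfy |arg(lambda)| > alpha*pi/2."""
    eigvals = np.linalg.eigvals(J)
    return bool(np.all(np.abs(np.angle(eigvals)) > alpha * np.pi / 2))

# Hypothetical linearization with eigenvalues -1 +/- 2i (|arg| ~ 2.03 rad),
# which lie outside the sector boundary 0.8*pi/2 ~ 1.26 rad:
J = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])
print(spectrum_in_stability_sector(J, alpha=0.8))  # True
```

Note that for alpha -> 1 the sector tends to the open left half-plane, recovering the classical Lyapunov first-method condition.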
Using Bad Learners to find Good Configurations
Finding the optimally performing configuration of a software system for a
given setting is often challenging. Recent approaches address this challenge by
learning performance models based on a sample set of configurations. However,
building an accurate performance model can be very expensive (and is often
infeasible in practice). The central insight of this paper is that exact
performance values (e.g. the response time of a software system) are not
required to rank configurations and to identify the optimal one. As shown by
our experiments, models that are cheap to learn but inaccurate (with respect to
the difference between actual and predicted performance) can still be used to rank
configurations and hence find the optimal configuration. This novel
\emph{rank-based approach} allows us to significantly reduce the cost (in terms
of the number of measurements of sample configurations) as well as the time required
to build models. We evaluate our approach with 21 scenarios based on 9 software
systems and demonstrate that our approach is beneficial in 16 scenarios; for
the remaining 5 scenarios, an accurate model can be built by using very few
samples anyway, without the need for a rank-based approach.
Comment: 11 pages, 11 figures
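The core insight, that an inaccurate model can still induce a useful ranking, can be sketched in a few lines. This is an illustrative toy, not the paper's method: the configuration space, the true performance function, and the use of a plain least-squares learner are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: 5 binary options; true performance is nonlinear
# in the options (the interaction term is unknown to the learner).
configs = np.array([[int(b) for b in f"{i:05b}"] for i in range(32)], float)
true_perf = (configs @ np.array([3.0, -2.0, 1.5, 0.5, 4.0])
             + 2.0 * configs[:, 0] * configs[:, 4])

# Cheap, inaccurate model: linear least squares on a small sample.
sample = rng.choice(32, size=10, replace=False)
coef, *_ = np.linalg.lstsq(configs[sample], true_perf[sample], rcond=None)
pred = configs @ coef

# The predicted values are off, yet the induced ranking still points
# at a (near-)optimal configuration.
best_pred = int(np.argmin(pred))
print(best_pred, float(true_perf[best_pred]), float(true_perf.min()))
```

The point of the sketch is that only the ordering of `pred` matters, so model error that preserves ranks is harmless.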
Fiber-diffraction Interferometer using Coherent Fiber Optic Taper
We present a fiber-diffraction interferometer using a coherent fiber optic
taper for optical testing in an uncontrolled environment. We use a coherent
fiber optic taper and a single-mode fiber having thermally-expanded core. Part
of the measurement wave coming from a test target is condensed through a fiber
optic taper and spatially filtered by a single-mode fiber to form the reference
wave. Vibration of the cavity between the target and the interferometer probe
is common to both the reference and measurement waves, so the interference
fringe is optically stabilized. Generation of the reference wave is stable
even with the target movement. Focus shift of the input measurement wave is
desensitized by the coherent fiber optic taper.
The ARGUS Vertex Trigger
A fast second level trigger has been developed for the ARGUS experiment which
recognizes tracks originating from the interaction region. The processor
compares the hits in the ARGUS Micro Vertex Drift Chamber to 245760 masks
stored in random access memories. The masks, which are fully defined in three
dimensions, are able to reject tracks originating in the wall of the narrow
beampipe of 10.5 mm radius.
Comment: gzipped Postscript, 27 pages
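The trigger's principle, comparing an observed hit pattern against a large bank of precomputed track masks, can be sketched in software. This is a toy illustration only: the bit encodings and masks below are invented, not the 245760 three-dimensional ARGUS masks.

```python
# Illustrative sketch of mask-based track finding (not the ARGUS hardware):
# each mask is a bit pattern of drift-chamber wires that a valid track from
# the interaction region would fire. Here: two toy 7-wire "roads".
valid_masks = {
    0b0001111,  # hypothetical road 1
    0b0111100,  # hypothetical road 2
}

def track_found(hits: int) -> bool:
    """A track candidate fires if the hit pattern contains every wire
    of at least one stored mask (extra hits are allowed)."""
    return any(hits & mask == mask for mask in valid_masks)

print(track_found(0b0011111))  # True: contains all wires of road 1
print(track_found(0b1000001))  # False: matches no stored road
```

In hardware the same lookup is done in parallel against RAM-resident masks, which is what makes the trigger fast.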
Stochastic Model of Protein–Protein Interaction: Why Signaling Proteins Need to Be Colocalized
Colocalization of proteins that are part of the same signal transduction pathway via compartmentalization, scaffold, or anchor proteins is an essential aspect of the signal transduction system in eukaryotic cells. If interaction must occur via free diffusion, then the spatial separation between the sources of the two interacting proteins and their degradation rates become primary determinants of the time required for interaction. To understand the role of such colocalization, we create a mathematical model of the diffusion-based protein–protein interaction process. We assume that mRNAs, which serve as the sources of these proteins, are located at different positions in the cytoplasm. For large cells such as Drosophila oocytes we show that if the source mRNAs were at random locations in the cell rather than colocalized, the average rate of interactions would be extremely small, which suggests that localization is needed to facilitate protein interactions and not just to prevent cross-talk between different signaling modules.
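The qualitative effect, that source separation plus degradation sharply suppresses the interaction rate, can be reproduced with a deliberately crude Monte Carlo. This is a one-dimensional lattice toy with invented parameters, not the paper's model:

```python
import random

def interaction_fraction(separation, lifetime, trials=2000, seed=1):
    """Toy 1D model: two proteins start `separation` lattice sites apart
    (use an even value, since the gap changes by 0 or 2 per tick), random-walk
    one step per tick, and are degraded after `lifetime` ticks. Returns the
    fraction of trials in which they meet before degradation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = 0, separation
        for _ in range(lifetime):
            a += rng.choice((-1, 1))
            b += rng.choice((-1, 1))
            if a == b:
                hits += 1
                break
    return hits / trials

# "Colocalized" vs widely separated sources, same protein lifetime:
print(interaction_fraction(separation=2, lifetime=50),
      interaction_fraction(separation=20, lifetime=50))
```

Even in this crude setting, the meeting fraction collapses as the separation grows relative to the diffusion length set by the lifetime, which is the abstract's point.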
Optimal sequential fingerprinting: Wald vs. Tardos
We study sequential collusion-resistant fingerprinting, where the
fingerprinting code is generated in advance but accusations may be made between
rounds, and show that in this setting both the dynamic Tardos scheme and
schemes building upon Wald's sequential probability ratio test (SPRT) are
asymptotically optimal. We further compare these two approaches to sequential
fingerprinting, highlighting differences between the two schemes. Based on
these differences, we argue that Wald's scheme should in general be preferred
over the dynamic Tardos scheme, even though both schemes have their merits. As
a side result, we derive an optimal sequential group testing method for the
classical model, which can easily be generalized to different group testing
models.
Comment: 12 pages, 10 figures
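For readers unfamiliar with the building block behind the Wald-based schemes, here is a generic Wald SPRT for Bernoulli observations. This is a textbook sketch, not the fingerprinting scheme itself; in the fingerprinting setting "accept H1" would correspond to accusing a user, and the thresholds alpha, beta bound the error probabilities.

```python
import math

def sprt(samples, p1, p0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: H0 (success prob p0)
    vs H1 (success prob p1 > p0). Accumulates the log-likelihood ratio
    and stops as soon as it crosses either threshold."""
    upper = math.log((1 - beta) / alpha)   # accept H1
    lower = math.log(beta / (1 - alpha))   # accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", len(samples)

print(sprt([1] * 10, p1=0.7, p0=0.3))  # ('accept H1', 6)
```

The appeal in the sequential setting is exactly what the abstract exploits: the test stops as early as the evidence permits, rather than after a fixed code length.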
Transcriptomic profiling of primary alveolar epithelial cell differentiation in human and rat
AbstractCell-type specific gene regulation is a key to gaining a full understanding of how the distinct phenotypes of differentiated cells are achieved and maintained. Here we examined how changes in transcriptional activation during alveolar epithelial cell (AEC) differentiation determine phenotype. We performed transcriptomic profiling using in vitro differentiation of human and rat primary AEC. This model recapitulates in vitro an in vivo process in which AEC transition from alveolar type 2 (AT2) cells to alveolar type 1 (AT1) cells during normal maintenance and regeneration following lung injury. Here we describe in detail the quality control, preprocessing, and normalization of microarray data presented within the associated study (Marconett et al., 2013). We also include R code for reproducibility of the referenced data and easily accessible processed data tables
Hybrid Algorithms Based on Integer Programming for the Search of Prioritized Test Data in Software Product Lines
In Software Product Lines (SPLs) it is not possible, in general, to test all products of the family. The number of products denoted by an SPL is very high due to the combinatorial explosion of features. For this reason, some coverage criteria have been proposed which try to test at least all feature interactions without the necessity of testing all products, e.g., all pairs of features (pairwise coverage). In addition, it is desirable to first test products composed of a set of priority features. This problem is known as the Prioritized Pairwise Test Data Generation Problem. In this work we propose two hybrid algorithms using Integer Programming (IP) to generate a prioritized test suite. The first one is based on an integer linear formulation and the second one on an integer quadratic (nonlinear) formulation. We compare these techniques with two state-of-the-art algorithms, the Parallel Prioritized Genetic Solver (PPGS) and a greedy algorithm called prioritized-ICPL. Our study reveals that our hybrid nonlinear approach is clearly the best in both solution quality and computation time. Moreover, the nonlinear variant (the fastest one) is 27 and 42 times faster than PPGS in the two groups of instances analyzed in this work.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Partially funded by the Spanish Ministry of Economy and Competitiveness and FEDER under contract TIN2014-57341-R, the University of Málaga, Andalucía Tech and the Spanish Network TIN2015-71841-REDT (SEBASENet).
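To make the problem concrete, here is a sketch of prioritized pairwise coverage solved greedily (the greedy flavor of prioritized-ICPL, not the paper's IP formulations). The feature model, the weights, and the product-of-weights priority rule are all invented for illustration:

```python
from itertools import combinations, product as iproduct

# Hypothetical SPL: 4 independent optional features, weighted by priority.
features = ["A", "B", "C", "D"]
weight = {"A": 4, "B": 3, "C": 2, "D": 1}

# A pair is (feature_i = v_i, feature_j = v_j) with v in {0, 1}; one of many
# possible priority rules: pair weight = product of the feature weights.
pairs = {(f, vf, g, vg): weight[f] * weight[g]
         for f, g in combinations(features, 2)
         for vf in (0, 1) for vg in (0, 1)}

all_products = [dict(zip(features, bits)) for bits in iproduct((0, 1), repeat=4)]

# Greedy: repeatedly add the product covering the most uncovered weight,
# so high-priority pairs tend to be covered by the earliest test products.
suite, uncovered = [], dict(pairs)
while uncovered:
    best = max(all_products,
               key=lambda p: sum(w for (f, vf, g, vg), w in uncovered.items()
                                 if p[f] == vf and p[g] == vg))
    suite.append(best)
    uncovered = {pair: w for pair, w in uncovered.items()
                 if not (best[pair[0]] == pair[1] and best[pair[2]] == pair[3])}

print(len(suite))  # full pairwise coverage of 4 binary features in a few products
```

The IP formulations in the paper replace this myopic choice with an exact optimization of the same covering objective, which is where the reported quality and speed gains come from.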
Various Sizes of Sliding Event Bursts in the Plastic Flow of Metallic Glasses Based on a Spatiotemporal Dynamic Model
Decision and function problems based on boson sampling
Boson sampling is a mathematical problem that is strongly believed to be
intractable for classical computers, whereas passive linear interferometers can
produce samples efficiently. So far, the problem remains a computational
curiosity, and the possible usefulness of boson-sampling devices is mainly
limited to the proof of quantum supremacy. The purpose of this work is to
investigate whether boson sampling can be used as a resource for decision and
function problems that are computationally hard, and may thus have
cryptographic applications. After the definition of a rather general
theoretical framework for the design of such problems, we discuss their
solution by means of a brute-force numerical approach, as well as by means of
non-boson samplers. Moreover, we estimate the sample sizes required for their
solution by passive linear interferometers, and show that they are independent
of the size of the Hilbert space.
Comment: Close to the version published in PR