Measures of scalability
Scalable frames are frames whose vectors can be rescaled to yield a tight
frame. When a frame is not scalable, however, one has to settle for an
approximate procedure. To this end, in this paper we introduce three novel
quantitative measures of closeness to scalability for frames in
finite-dimensional real Euclidean spaces. Besides the natural measure of
scalability given by the distance of a frame to the set of scalable frames,
a second measure is obtained by optimizing a quadratic functional, while the
third is given by the volume of the ellipsoid of minimal volume containing
the symmetrized frame. After proving that these measures are equivalent in a
certain sense, we establish bounds on the probability that a randomly
selected frame is scalable. In the process, we also derive new necessary and
sufficient conditions for a frame to be scalable.
Comment: 27 pages, 5 figures
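As an illustrative sketch (not from the paper itself): a frame {f_i} in R^n is scalable exactly when nonnegative weights s_i exist with sum_i s_i f_i f_i^T = I. This feasibility question can be checked numerically as a nonnegative least-squares problem, and the residual then acts as a crude closeness-to-scalability measure in the spirit of the abstract. The function and example below are assumptions for illustration only.

```python
# Illustrative check of frame scalability: find s_i >= 0 with
#   sum_i s_i * f_i f_i^T = I   (identity in R^n),
# solved as a nonnegative least-squares problem over vectorized outer products.
import numpy as np
from scipy.optimize import nnls

def scalability_residual(F):
    """F: (n, m) matrix whose columns are the frame vectors.
    Returns the NNLS residual of sum_i s_i f_i f_i^T = I;
    a residual near 0 indicates the frame is scalable."""
    n, m = F.shape
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
    b = np.eye(n).ravel()
    _, residual = nnls(A, b)
    return residual

# Mercedes-Benz frame in R^2: three unit vectors at 120 degrees.
# It is tight (sum f_i f_i^T = (3/2) I), hence scalable with s_i = 2/3.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
print(scalability_residual(F))  # ~ 0
```

Two non-orthogonal vectors in R^2, by contrast, span only a two-dimensional slice of the symmetric matrices and produce a strictly positive residual.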
Scalability using effects
This note is about using computational effects for scalability. With this
method, the specification becomes progressively more complex while its
semantics becomes progressively more correct. We show, through two
fundamental examples, that it is possible to design a deduction system for a
specification involving an effect without making that effect explicit.
PKI Scalability Issues
This report surveys different PKI technologies, such as PKIX and SPKI, and
the issues of PKI that affect scalability. Much of the focus is on
certificate revocation methodologies and status verification systems such as
CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate
Revocation, OCSP, SCVP, and DVCS.
Comment: 23 pages, 2 figures
Scalability of Hydrodynamic Simulations
Many hydrodynamic processes can be studied in a way that is scalable over a
vast range of the physically relevant parameter space. We systematically
examine this scalability, which has so far been only briefly discussed in
the astrophysical literature. We show how the scalability is limited by
various constraints imposed by physical processes and initial conditions.
imposed by physical processes and initial conditions. Using supernova remnants
in different environments and evolutionary phases as application examples, we
demonstrate the use of the scaling as a powerful tool to explore the
interdependence among relevant parameters, based on a minimum set of
simulations. In particular, we devise a scaling scheme that can be used to
adaptively generate numerous seed remnants and plant them into 3D hydrodynamic
simulations of the supernova-dominated interstellar medium.
Comment: 12 pages, 1 figure, submitted to MNRAS; comments are welcome
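The kind of rescaling the abstract refers to can be illustrated with a standard textbook example, the Sedov-Taylor point-blast solution (used here purely as an illustration, not as the paper's actual scaling scheme): the shock radius R = xi * (E t^2 / rho)^(1/5) lets one simulation stand in for a whole family of remnants with different energies and ambient densities.

```python
# Illustrative sketch: Sedov-Taylor blast-wave scaling.
# R = xi * (E * t**2 / rho) ** (1/5), with xi ~ 1.15 for an
# adiabatic index gamma = 5/3 (standard value, not from the paper).
def sedov_radius(E, rho, t, xi=1.15):
    """Shock radius of a point explosion with energy E in a uniform
    medium of density rho, evaluated at time t (consistent units)."""
    return xi * (E * t**2 / rho) ** 0.2

# Rescaling property: multiplying E by 32 doubles the radius
# (since 32 ** (1/5) == 2), so one seed solution covers many remnants.
```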
Scalability and Performance of Microservices Architectures
Annotation: The inevitability of continuous evolution and seamless
integration of dynamic alterations remains a paramount consideration in the
realm of software engineering. This concern is particularly pronounced
within the context of contemporary microservices architectures embedded in
heterogeneous and decentralized systems composed of numerous interdependent
components. A pivotal focal point within such a software design paradigm is
to sustain optimal performance quality by ensuring harmonious collaboration
among autonomous facets within an intricate framework. The challenge of
microservices evolution has predominantly revolved around upholding the
harmonization of diverse microservices versions during updates, all while
curbing the computational overhead associated with such validation. This
study leverages previous research outcomes and tackles the evolution
predicament by introducing an innovative formal model coupled with a fresh
exposition of microservices RESTful APIs. The amalgamation of Formal Concept
Analysis and the Liskov Substitution Principle plays a pivotal role in this
proposed solution. A series of compatibility constraints is delineated and
subjected to validation through a controlled experiment employing a
representative microservices system.
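A minimal sketch of the kind of compatibility constraint the abstract alludes to (the function and schemas below are hypothetical illustrations, not the paper's formal model): under the Liskov Substitution Principle, a newer API version must be usable wherever the older one was, which for response schemas means every existing field must survive with its type unchanged.

```python
# Illustrative backward-compatibility check for REST response schemas,
# in the spirit of the Liskov Substitution Principle: a new version is
# compatible if it preserves every field of the old one, same type.
def backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    return all(field in new_schema and new_schema[field] == ftype
               for field, ftype in old_schema.items())

v1 = {"id": "int", "name": "str"}
v2 = {"id": "int", "name": "str", "email": "str"}  # purely additive change

print(backward_compatible(v1, v2))  # -> True  (additions are safe)
print(backward_compatible(v2, v1))  # -> False (a field was dropped)
```

Real compatibility checking must also cover request parameters, status codes, and subtyping of field types; this sketch only shows the field-preservation constraint.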
On statistics, computation and scalability
How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
Comment: Published at http://dx.doi.org/10.3150/12-BEJSP17 in the Bernoulli
journal (http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
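A minimal sketch of the divide-and-conquer idea mentioned above (illustrative only; the paper treats far more general estimators and their time-data tradeoffs): split the data into blocks, compute a cheap local estimate per block, and combine the results.

```python
# Illustrative divide-and-conquer estimation: average per-block estimates
# instead of computing one estimate over the full dataset, trading a
# little statistical efficiency for embarrassingly parallel computation.
import numpy as np

def divide_and_conquer_mean(x, blocks):
    parts = np.array_split(x, blocks)          # one chunk per worker
    return np.mean([p.mean() for p in parts])  # combine local estimates

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=100_000)
est = divide_and_conquer_mean(data, 10)
```

For the sample mean with equal-sized blocks the combined estimate coincides with the full-data estimate; for nonlinear estimators the combination step is where the statistical cost of the tradeoff appears.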
Scalable Persistent Storage for Erlang
The many-core revolution makes scalability a key property. The RELEASE project aims to improve the scalability of Erlang on emergent commodity architectures with 100,000 cores. Such architectures require scalable and available persistent storage on up to 100 hosts. We enumerate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DBMSs against these requirements. This analysis shows that Mnesia and CouchDB are not suitable for persistent storage at our target scale, but that Dynamo-like NoSQL Database Management Systems (DBMSs) such as Cassandra and Riak potentially are. We investigate the current scalability limits of the Riak 1.1.1 NoSQL DBMS in practice on a 100-node cluster. We establish scientifically, for the first time, the scalability limit of Riak as 60 nodes on the Kalkyl cluster, thereby confirming developer folklore. We show that resources like memory, disk, and network do not limit the scalability of Riak. By instrumenting Erlang/OTP and Riak libraries, we identify a specific Riak functionality that limits scalability. We outline how later releases of Riak were refactored to eliminate the scalability bottlenecks. We conclude that Dynamo-style NoSQL DBMSs provide scalable and available persistent storage for Erlang in general, and for our RELEASE target architecture in particular.