A New Approach to Manage QoS in Distributed Multimedia Systems
Dealing with network congestion is one criterion used to enhance quality of service (QoS) in distributed multimedia systems. Existing solutions to the problem of network congestion ignore scalability considerations because they maintain a separate classification for each video stream. In this paper, we propose a new method for controlling the QoS provided to clients according to the network congestion, by discarding some frames when needed. The proposed technique, called (m,k)-frame, is scalable and causes little degradation in application performance. The (m,k)-frame method derives from the notion of (m,k)-firm real-time constraints, which require that among any k invocations of a task, at least m meet their deadline. Our simulation studies show the usefulness of the (m,k)-frame method for adapting the QoS to the actual conditions of a multimedia application, according to the current system load. Notably, the system must adjust the QoS provided to active clients when their number varies, i.e. upon the dynamic arrival of clients.
Comment: 10 pages, International Journal of Computer Science and Information Security (IJCSIS)
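To make the (m,k)-firm idea concrete, the Python sketch below shows one plausible frame-dropping policy: in each window of k frames, m evenly spaced frames are marked mandatory and always delivered, while the remaining k - m may be discarded under congestion. This is an illustrative reading of (m,k)-firm constraints, not the paper's exact (m,k)-frame algorithm.

    # Hypothetical (m,k)-firm frame filter; the even "spin" pattern and the
    # congestion flag are illustrative assumptions, not the paper's design.
    def mandatory(frame_index: int, m: int, k: int) -> bool:
        """Spread the m mandatory frames evenly over each window of k frames."""
        return (frame_index * m) % k < m

    def filter_frames(frames, m, k, congested):
        """Drop optional frames only while the network is congested."""
        for i, frame in enumerate(frames):
            if not congested or mandatory(i, m, k):
                yield frame

    # With (m, k) = (2, 3), two frames out of every three are kept under congestion.
    print(list(filter_frames(range(9), m=2, k=3, congested=True)))  # [0, 2, 3, 5, 6, 8]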
A Method for Evaluating the Quality of Information Services in a Corporate Network
The paper considers the evaluation of service quality in a corporate network. The relationship between quality-evaluation methods and service state indicators is shown, and a generalization of the evaluation method to the cases of homogeneous and heterogeneous services is proposed. The correctness of the obtained solutions is confirmed by examples.
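As a rough illustration of what such an evaluation might look like, the Python sketch below normalises per-service state indicators to [0, 1] and aggregates them: a plain average for homogeneous services and a weighted sum for heterogeneous ones. The indicator names and weights are assumptions made here for illustration, not taken from the paper.

    # Hypothetical aggregation of service state indicators into a quality score.
    def service_score(indicators):
        """indicators: dict of state metrics already normalised to [0, 1]."""
        return sum(indicators.values()) / len(indicators)

    def homogeneous_quality(services):
        """Homogeneous case: all services count equally."""
        return sum(map(service_score, services)) / len(services)

    def heterogeneous_quality(services, weights):
        """Heterogeneous case: weights reflect the importance of each service."""
        assert abs(sum(weights) - 1.0) < 1e-9
        return sum(w * service_score(s) for s, w in zip(services, weights))

    mail = {'availability': 0.99, 'latency': 0.90}   # invented example values
    web  = {'availability': 0.97, 'latency': 0.95}
    print(homogeneous_quality([mail, web]))
    print(heterogeneous_quality([mail, web], weights=[0.7, 0.3]))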
Interval Neutrosophic Sets and Logic: Theory and Applications in Computing
A neutrosophic set is a part of neutrosophy, which studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. The neutrosophic set is a powerful general formal framework that has recently been proposed. However, the neutrosophic set needs to be specified from a technical point of view. Here, we define the set-theoretic operators on an instance of a neutrosophic set and call it an Interval Neutrosophic Set (INS). We prove various properties of INS, which are connected to operations and relations over INS. We also introduce a new logic system based on interval neutrosophic sets. We study the interval neutrosophic propositional calculus and the interval neutrosophic predicate calculus, and we create a neutrosophic logic inference system based on interval neutrosophic logic. Under the framework of the interval neutrosophic set, we propose a data model based on a special case of interval neutrosophic sets, called the Neutrosophic Data Model. This data model is an extension of the fuzzy data model and the paraconsistent data model. We generalize the set-theoretic and relation-theoretic operators of fuzzy relations and paraconsistent relations to neutrosophic relations. We propose generalized SQL query constructs and a tuple-relational calculus for the Neutrosophic Data Model. We also design an architecture for a Semantic Web Services agent based on interval neutrosophic logic and conduct a simulation study.
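As a small illustration of interval-valued set-theoretic operators, the Python sketch below implements one commonly cited definition of INS union and intersection, where each element carries truth, indeterminacy, and falsity intervals in [0, 1] and the operators take interval-wise max/min; the exact operator definitions in the paper may differ in detail.

    # Minimal INS element with interval-valued membership degrees (assumed form).
    from dataclasses import dataclass

    Interval = tuple[float, float]  # (lower, upper), both in [0, 1]

    @dataclass
    class INSElement:
        truth: Interval
        indeterminacy: Interval
        falsity: Interval

    def _imax(a: Interval, b: Interval) -> Interval:
        return (max(a[0], b[0]), max(a[1], b[1]))

    def _imin(a: Interval, b: Interval) -> Interval:
        return (min(a[0], b[0]), min(a[1], b[1]))

    def union(x: INSElement, y: INSElement) -> INSElement:
        # Union: widen truth, narrow indeterminacy and falsity (one common convention).
        return INSElement(_imax(x.truth, y.truth),
                          _imin(x.indeterminacy, y.indeterminacy),
                          _imin(x.falsity, y.falsity))

    def intersection(x: INSElement, y: INSElement) -> INSElement:
        return INSElement(_imin(x.truth, y.truth),
                          _imax(x.indeterminacy, y.indeterminacy),
                          _imax(x.falsity, y.falsity))

    a = INSElement((0.6, 0.8), (0.1, 0.3), (0.1, 0.2))
    b = INSElement((0.4, 0.7), (0.2, 0.4), (0.2, 0.3))
    print(union(a, b))
    print(intersection(a, b))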
Dagstuhl News January - December 2000
"Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic
Data Security and Privacy in the Cloud
Relying on the cloud for storing data and performing computations has become a popular solution in today's society, which demands large data collections and/or analysis over them to be readily available, for example, to make knowledge-based decisions. While bringing undeniable benefits to both data owners and end users accessing the outsourced data, moving to the cloud raises a number of issues, ranging from choosing the most suitable cloud provider for outsourcing to effectively protecting data and computation results. In this paper, we discuss the main issues related to data protection arising when data and/or computations over them are moved to the cloud. We also illustrate possible solutions and approaches for addressing such issues.
Investigations into Elasticity in Cloud Computing
The pay-as-you-go model supported by existing cloud infrastructure providers makes it appealing for most application service providers to deliver their applications in the cloud. Within this context, elasticity of applications has
become one of the most important features in cloud computing. This elasticity
enables real-time acquisition/release of compute resources to meet application
performance demands. In this thesis we investigate the problem of delivering
cost-effective elasticity services for cloud applications.
Traditionally, application-level elasticity addresses the question of how to scale applications up and down to meet their performance requirements, but it does not adequately address the issue of minimising the cost of using the service. With this limitation in mind, we propose a scaling
approach that makes use of cost-aware criteria to detect the bottlenecks within
multi-tier cloud applications, and scale these applications only at bottleneck
tiers to reduce the costs incurred by consuming cloud infrastructure resources.
Our approach is generic for a wide class of multi-tier applications, and we
demonstrate its effectiveness by studying the behaviour of an example
electronic commerce site application.
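As an illustration of the kind of cost-aware bottleneck selection described above (not the thesis's actual algorithm), the Python sketch below picks the overloaded tier that gives the most relief per unit of instance cost and scales out only that tier; the tier names, utilisation figures, and hourly prices are invented.

    # Hypothetical cost-aware bottleneck detection for a multi-tier application.
    def pick_bottleneck(tiers, threshold=0.8):
        """tiers: dict name -> {'util': 0..1, 'instances': count, 'price': $/hour}."""
        overloaded = {n: t for n, t in tiers.items() if t['util'] > threshold}
        if not overloaded:
            return None
        # Prefer the tier offering the most utilisation relief per dollar added.
        return max(overloaded, key=lambda n: overloaded[n]['util'] / overloaded[n]['price'])

    tiers = {
        'web': {'util': 0.55, 'instances': 2, 'price': 0.10},
        'app': {'util': 0.92, 'instances': 3, 'price': 0.20},
        'db':  {'util': 0.85, 'instances': 1, 'price': 0.50},
    }
    target = pick_bottleneck(tiers)
    if target:
        tiers[target]['instances'] += 1   # scale out only the bottleneck tier
    print(target, tiers[target]['instances'])  # -> app 4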
Furthermore, we consider the characteristics of the algorithms that implement the business logic of cloud applications, and investigate elasticity at the algorithm level: when dealing with large-scale data under resource and time constraints, the algorithm's output should be elastic with respect to the resources consumed. We propose a novel framework to guide the development of elastic algorithms that adapt to the available budget while guaranteeing that the quality of the output, e.g. prediction accuracy for classification tasks, improves monotonically with the budget used.
Comment: 211 pages, 27 tables, 75 figures
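A minimal sketch of the elastic-algorithm idea in Python, under the assumption that it behaves like an anytime algorithm: each unit of budget processes one more chunk of data, so the estimate improves (in expectation) as the budget grows. The running-mean task and chunking scheme are illustrative assumptions, not the framework proposed in the thesis.

    import random

    def elastic_estimate(data, budget_units):
        """Refine a running mean over `data`, one chunk per budget unit;
        more budget -> more samples processed -> a better estimate in expectation."""
        chunk = max(1, len(data) // 10)
        used, total, count = 0, 0.0, 0
        best = None
        while used < budget_units and count < len(data):
            for x in data[count:count + chunk]:
                total += x
                count += 1
            best = total / count
            used += 1
        return best, used

    data = [random.gauss(5.0, 1.0) for _ in range(1000)]
    for b in (1, 3, 10):
        est, used = elastic_estimate(data, b)
        print(f"budget={b:2d} used={used:2d} estimate={est:.3f}")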
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently, the
landscape of the semiconductor field over the last 15 years has made power a first-class design concern. As a result, the computing-systems community has been forced to find alternative design approaches that facilitate
high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
Comment: Under review at ACM Computing Surveys
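As one concrete instance of the software approximation techniques such surveys cover, the Python sketch below applies loop perforation: it visits only every fourth loop iteration, trading a small accuracy loss for roughly a fourfold reduction in work. The workload and perforation rate are chosen here purely for illustration.

    # Loop perforation: skip a fraction of iterations to trade accuracy for speed.
    def mean_exact(xs):
        return sum(xs) / len(xs)

    def mean_perforated(xs, skip=4):
        """Visit only every `skip`-th iteration (perforation rate 1 - 1/skip)."""
        total, n = 0.0, 0
        for i in range(0, len(xs), skip):
            total += xs[i]
            n += 1
        return total / n

    xs = [float(i % 97) for i in range(1_000_000)]
    print(mean_exact(xs), mean_perforated(xs, skip=4))  # nearly identical means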