204 research outputs found
Recursive circulants and their embeddings among hypercubes
Abstract: We propose an interconnection structure for multicomputer networks, called the recursive circulant. The recursive circulant G(N, d) is defined to be a circulant graph with N nodes and jumps of powers of d. G(N, d) is node symmetric and has some strong hamiltonian properties. G(N, d) has a recursive structure when N = c·d^m, 1 ≤ c < d. We develop a shortest-path routing algorithm in G(c·d^m, d), and analyze various network metrics of G(c·d^m, d) such as connectivity, diameter, mean internode distance, and visit ratio. G(2^m, 4), whose degree is m, compares favorably to the hypercube Q_m. G(2^m, 4) has the maximum possible connectivity, and its diameter is ⌈(3m-1)/4⌉. Recursive circulants have an interesting relationship with hypercubes in terms of embedding. We present expansion-one embeddings among recursive circulants and hypercubes, and analyze the costs associated with each embedding. An earlier version of this paper appeared in Park and Chwa (Proc. Internat. Symp. Parallel Architectures, Algorithms and Networks ISPAN '94, Kanazawa, Japan, December 1994, pp. 73-80)
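The construction described above admits a compact sketch. Assuming only the description given in the abstract (nodes 0..N-1, jumps of powers of d), the following Python builds the neighbor set and checks the stated diameter of G(2^m, 4) for m = 4 by breadth-first search; the function names are mine:

```python
from collections import deque

def neighbors(v, N, d):
    """Neighbors of node v in the recursive circulant G(N, d):
    v +/- d^i (mod N) for every power d^i < N."""
    out = set()
    jump = 1
    while jump < N:
        out.add((v + jump) % N)
        out.add((v - jump) % N)
        jump *= d
    out.discard(v)
    return out

def diameter(N, d):
    """Breadth-first search from node 0; since G(N, d) is node
    symmetric, the eccentricity of one node is the diameter."""
    dist = {0: 0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in neighbors(v, N, d):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

# G(2^4, 4): jumps are 1 and 4, degree m = 4,
# and the diameter matches ceil((3*4 - 1)/4) = 3
print(diameter(16, 4))  # -> 3
```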
DeMMon Decentralized Management and Monitoring Framework
The centralized model proposed by the Cloud computing paradigm is mismatched with the decentralized
nature of mobile and IoT applications, given that most data
production and consumption is performed by end-user devices outside of the Data Center
(DC). As the number of these devices grows, and given the need to transport data to and
from DCs for computation, application providers incur additional infrastructure costs,
and end-users incur delays when performing operations.
These reasons have led us into a post-cloud era, where a new computing paradigm
arose: Edge Computing. Edge Computing takes into account the broad spectrum of
devices residing outside of the DC, closer to the clients, as potential targets for
computation, which can reduce infrastructure costs, improve the quality of service (QoS)
for end-users, and allow new interaction paradigms between users and applications.
Managing and monitoring the execution of these devices raises new challenges previously
unaddressed by Cloud computing, given the scale of these systems and the devicesā
(potentially) unreliable data connections and heterogeneous computational power. The
study of the state of the art has revealed that existing resource monitoring and management
solutions require manual configuration and have centralized components, which
we believe do not scale to larger systems.
In this work, we address these limitations by presenting a novel Decentralized Management and Monitoring ("DeMMon") system targeted at edge settings. DeMMon provides primitives that ease the development of tools for managing the computational resources supporting edge-enabled applications, decomposed into components, through decentralized actions that take advantage of partial knowledge of the system. We evaluated our solution to assess its benefits for information dissemination and monitoring across a set of realistic emulated scenarios of up to 750 nodes with variable failure rates. The results show the validity of our approach and that it can outperform state-of-the-art solutions in scalability and reliability.

The centralized computing model used by the Cloud computing paradigm has limitations in the context of Internet of Things and mobile applications. In this kind of application, data is produced and consumed mostly by devices at the edge of the network. Transporting this data to and from the data centers therefore imposes an excessive load on the network infrastructure connecting the devices to the data centers, increasing response latency and degrading the quality of service for users.
To address these limitations, the Edge Computing paradigm emerged. This paradigm proposes executing computations, and potentially storing data, on devices outside the data centers, closer to the clients, reducing costs and opening up a new range of possibilities for performing distributed computations closer to the devices that produce and consume the data.
However, managing and supervising the execution of these devices raises obstacles not addressed by Cloud computing, such as the scale of these systems or the variability in the connectivity and computational capacity of the devices that compose them. A study of the literature reveals that popular tools for managing and monitoring applications and devices have scalability limitations, such as centralized points of failure, or require manual configuration of every device.
This dissertation proposes a new decentralized solution for monitoring and information dissemination. This solution offers operations for collecting information about the state of the system, to be used by (also decentralized) solutions that manage applications specialized to run at the edge of the network. Our solution was evaluated on emulated networks of various sizes with up to 750 nodes, in the context of information dissemination and monitoring. Our results show that our system is more robust while being more scalable than the state of the art.
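The dissemination side of the evaluation described above (epidemic-style spreading across up to 750 nodes, with each node holding only partial knowledge of the membership) can be illustrated with a minimal push-gossip sketch. This is not DeMMon's actual protocol; the fanout value, the seed, and the function names are illustrative assumptions:

```python
import random

def gossip_rounds(n, fanout, seed=1):
    """Push-gossip dissemination sketch: every informed node forwards
    the message to `fanout` randomly sampled peers per round, standing
    in for a node's partial view of the membership. Returns the number
    of rounds until all n nodes are informed."""
    rng = random.Random(seed)
    informed = {0}            # node 0 originates the message
    rounds = 0
    while len(informed) < n:
        rounds += 1
        new = set()
        for _ in informed:
            for peer in rng.sample(range(n), fanout):  # random partial view
                new.add(peer)
        informed |= new
    return rounds

print(gossip_rounds(750, fanout=3))
```

With a small constant fanout the informed set grows multiplicatively, so coverage of 750 nodes takes only on the order of log(n) rounds, which is what makes this style of dissemination attractive at the edge.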
Reliability and fault tolerance modelling of multiprocessor systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Reliability evaluation by analytic modelling constitutes an important issue in designing a reliable multiprocessor system. In this thesis, a model for reliability and fault tolerance analysis of the interconnection network is presented, based on graph theory. Reliability and fault tolerance are considered as deterministic and probabilistic measures of connectivity.
Exact techniques for reliability evaluation fail for large multiprocessor systems because of the enormous computational resources required. Therefore, approximation techniques have to be used. Three approaches are proposed: the first simplifies the symbolic expression of reliability; the
other two apply a hierarchical decomposition to the system. All these
methods give results close to those obtained by exact techniques. Supported by the Consejo Nacional de Ciencia y Tecnologia (National Council for Science and Technology of Mexico) and the Instituto de Investigaciones Electricas (Institute for Electrical Research).
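The scaling problem named above is easy to see concretely: exact all-terminal reliability requires enumerating all 2^|E| edge states, so the cost doubles with every added link. A minimal sketch, with function names of my own choosing:

```python
from itertools import product

def all_terminal_reliability(nodes, edges, p):
    """Exact all-terminal reliability: the probability that the surviving
    edges keep every node connected, with each edge up independently with
    probability p. Enumerates all 2^|E| edge states, which is exactly why
    exact evaluation becomes infeasible for large systems."""
    def connected(up_edges):
        seen = {nodes[0]}
        frontier = [nodes[0]]
        while frontier:
            v = frontier.pop()
            for a, b in up_edges:
                for w in ((b,) if a == v else (a,) if b == v else ()):
                    if w not in seen:
                        seen.add(w)
                        frontier.append(w)
        return len(seen) == len(nodes)

    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        up = [e for e, s in zip(edges, states) if s]
        prob = 1.0
        for s in states:
            prob *= p if s else (1 - p)
        if connected(up):
            total += prob
    return total

# Triangle network: reliability = p^3 + 3 p^2 (1 - p) = 0.972 at p = 0.9
print(all_terminal_reliability([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.9))
```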
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed
Properties and algorithms of the hyper-star graph and its related graphs
The hyper-star interconnection network was proposed in 2002 to overcome the
drawbacks of the hypercube and its variations concerning the network cost, which is
defined by the product of the degree and the diameter. Some properties of the graph
such as connectivity, symmetry properties, and embedding properties have been studied
by other researchers; routing and broadcasting algorithms have also been designed.
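The network-cost measure mentioned above (degree times diameter) is easy to compute directly. As an illustration of the hypercube's cost, the sketch below measures both factors by BFS rather than using the closed form m·m; the function name is mine:

```python
from collections import deque

def hypercube_cost(m):
    """Network cost (degree x diameter) of the hypercube Q_m, with the
    diameter measured by BFS from node 0 (Q_m is node symmetric)."""
    def nbrs(v):
        # flipping any one of the m bits gives a neighbor
        return [v ^ (1 << i) for i in range(m)]
    dist = {0: 0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in nbrs(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    degree = m
    diam = max(dist.values())
    return degree * diam

print(hypercube_cost(4))  # degree 4, diameter 4 -> cost 16
```

Both the degree and the diameter of Q_m grow linearly in m, so its cost grows as m^2; variants such as the hyper-star were proposed precisely to lower this product.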
This thesis studies the hyper-star graph from both the topological and algorithmic
point of view. For the topological properties, we try to establish relationships between
hyper-star graphs with other known graphs. We also give a formal equation for the
surface area of the graph. Another topological property we are interested in is the
Hamiltonicity problem of this graph.
For the algorithms, we design an all-port broadcasting algorithm and a single-port
neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs.
Both algorithms are time-optimal.
Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is
maximally fault-tolerant
Properties and algorithms of the (n, k)-star graphs
The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative
to the n-star topology in parallel computation. The (n, k)-star has significant
advantages over the n-star, which itself was proposed as an attractive alternative to
the popular hypercube. The major advantage of the (n, k)-star network is its scalability,
which makes it more flexible than the n-star as an interconnection network. In
this thesis, we focus on finding graph-theoretical properties of the (n, k)-star as
well as developing parallel algorithms that run on this network.
The basic topological properties of the (n, k)-star are studied first. These are
useful since they can be used to develop efficient algorithms on this network. We then
study the (n, k)-star network from an algorithmic point of view. Specifically, we
investigate both fundamental and application algorithms for basic communication,
prefix computation, and sorting.
A literature review of the state of the art in relation to the (n, k)-star network, as
well as some open problems in this area, is also provided
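Since the abstract does not restate the construction, the sketch below assumes the commonly cited definition of the (n, k)-star: vertices are k-permutations of n symbols, and edges either swap the first symbol with the symbol at another position or substitute an unused symbol for the first one. It checks the expected vertex count n!/(n-k)! and degree n-1:

```python
from itertools import permutations

def nk_star_neighbors(v, n):
    """Neighbors of vertex v (a k-tuple of distinct symbols from 1..n)
    in the (n, k)-star graph, under the assumed definition above."""
    k = len(v)
    out = []
    for i in range(1, k):                # swap edges: position 1 <-> i
        w = list(v)
        w[0], w[i] = w[i], w[0]
        out.append(tuple(w))
    unused = set(range(1, n + 1)) - set(v)
    for s in sorted(unused):             # substitution edges
        out.append((s,) + v[1:])
    return out

n, k = 4, 2
vertices = list(permutations(range(1, n + 1), k))
print(len(vertices))                          # n!/(n-k)! = 12 vertices
print(len(nk_star_neighbors((1, 2), n)))      # degree n - 1 = 3
```

The (k-1) swap edges plus (n-k) substitution edges give every vertex degree n-1, independent of k, which is one source of the scalability the abstract highlights.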
Approximate logic circuits: Theory and applications
CMOS technology scaling, the process of shrinking transistor dimensions based
on Moore's law, has been the thrust behind increasingly powerful integrated circuits
for over half a century. As dimensions are scaled to a few tens of nanometers, process
and environmental variations can significantly alter transistor characteristics, thus
degrading reliability and reducing performance gains in CMOS designs with technology
scaling. Although design solutions proposed in recent years to improve reliability
of CMOS designs are power-efficient, the performance penalty associated with these
solutions further reduces performance gains with technology scaling, and hence these
solutions are not well-suited for high-performance designs.
This thesis proposes approximate logic circuits as a new logic synthesis paradigm
for reliable, high-performance computing systems. Given a specification, an approximate
logic circuit is functionally equivalent to the given specification for a "significant"
portion of the input space, but has a smaller delay and power as compared to a
circuit implementation of the original specification. The contributions of this thesis
include (i) a general theory of approximation and efficient algorithms for automated
synthesis of approximations for unrestricted random logic circuits, (ii) logic design solutions
based on approximate circuits to improve the reliability of designs with negligible
performance penalty, and (iii) efficient decomposition algorithms based on approximate
circuits to improve the performance of designs during logic synthesis. This thesis
concludes with other potential applications of approximate circuits and identifies open
problems in logic decomposition and approximate circuit synthesis
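The notion of being functionally equivalent for a "significant" portion of the input space can be made concrete with a toy example of my own (not taken from the thesis): approximating a 3-input majority, the carry function of a full adder, by a single AND gate, and measuring the agreement fraction over the whole input space:

```python
from itertools import product

def majority(a, b, c):
    """Exact 3-input majority (the carry-out of a full adder)."""
    return (a & b) | (a & c) | (b & c)

def majority_approx(a, b, c):
    """Hypothetical approximation: drop the terms involving c, trading
    accuracy on part of the input space for a shallower, cheaper circuit."""
    return a & b

agree = sum(majority(*x) == majority_approx(*x)
            for x in product([0, 1], repeat=3))
print(agree / 8)  # -> 0.75: correct on 6 of the 8 input combinations
```

Here the approximation disagrees only on inputs 011 and 101, so it is correct on 75% of the input space while removing two product terms and one input from the circuit.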
- …