13 research outputs found

    A Distributed Multilevel Force-directed Algorithm

    The wide availability of powerful and inexpensive cloud computing services naturally motivates the study of distributed graph layout algorithms able to scale to very large graphs. Nowadays, to process Big Data, companies increasingly rely on PaaS infrastructures rather than buying and maintaining complex and expensive hardware. So far, only a few examples of basic force-directed algorithms that work in a distributed environment have been described. In contrast, the design of a distributed multilevel force-directed algorithm is a much more challenging task that has not yet been addressed. We present the first multilevel force-directed algorithm based on a distributed vertex-centric paradigm, and its implementation on Giraph, a popular platform for distributed graph algorithms. Experiments show the effectiveness and the scalability of the approach. Using an inexpensive Amazon cloud computing service, we draw graphs with ten million edges in about 60 minutes. Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).
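    A minimal sketch, not the authors' Giraph implementation, of what a single vertex-centric force-directed superstep could look like: each vertex reads its neighbours' positions (the messages it would receive in a Pregel/Giraph model), accumulates attractive and repulsive forces, and moves a bounded step. The `superstep` function, the random repulsion sample, and the fixed temperature are illustrative assumptions.

```python
import math
import random

def superstep(positions, adjacency, k=1.0, temperature=0.1):
    """One simplified 'vertex-centric' force-directed superstep.

    Attractive forces come from adjacent vertices, repulsive forces from a
    small random sample of other vertices (to keep the step cheap), in the
    style of Fruchterman-Reingold.
    """
    new_positions = {}
    vertices = list(positions)
    for v in vertices:
        x, y = positions[v]
        dx = dy = 0.0
        # Attractive forces from neighbours.
        for u in adjacency[v]:
            ux, uy = positions[u]
            dist = math.hypot(ux - x, uy - y) or 1e-9
            f = dist * dist / k
            dx += (ux - x) / dist * f
            dy += (uy - y) / dist * f
        # Repulsive forces from a random sample of vertices.
        for u in random.sample(vertices, min(10, len(vertices))):
            if u == v:
                continue
            ux, uy = positions[u]
            dist = math.hypot(ux - x, uy - y) or 1e-9
            f = k * k / dist
            dx -= (ux - x) / dist * f
            dy -= (uy - y) / dist * f
        # Limit the displacement; a cooling schedule would shrink `temperature`
        # over successive supersteps.
        length = math.hypot(dx, dy) or 1e-9
        step = min(length, temperature)
        new_positions[v] = (x + dx / length * step, y + dy / length * step)
    return new_positions

# Tiny usage example on a 4-cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
pos = {v: (random.random(), random.random()) for v in adj}
for _ in range(50):
    pos = superstep(pos, adj)
```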

    Design of an Open-Source Repository with Open Archives Initiative (OAI) Support Using CodeIgniter and Node.js

    A repository is a good means of publishing research results to a wider audience, and it is expected to strengthen the reputation of authors whose work is referenced in the development of science. A repository is open-source software that can be used as an archive and can store images, research data, and audio in digital form. Repository systems such as EPrints, DSpace, Fedora, Greenstone Digital Library, Ganesha Digital Library (GDL), and SLiMS are widely available, but they are still rarely used by universities in Indonesia because of technical specification requirements as well as cost and resource constraints. This study attempts to build an open-source repository model that can be used for publication and publishing management, with Open Archives Initiative (OAI) support and metadata handling so that it can be properly indexed by international indexing services. Broadly, the research is divided into three stages: pre-development data collection, development and implementation, and post-development data collection. Pre-development data collection provides a preliminary study of the core problem being addressed, while the development and implementation stage focuses on modeling the software design in diagrams and writing the code that implements that design. Post-development data collection serves to refine the resulting application, draw conclusions, and suggest topics for further research. From the results it can be concluded that the repository is built with the CodeIgniter framework and Node.js, and uses supporting technologies such as HTML, CSS, jQuery, JavaScript, JSON, AJAX, and Bootstrap for the user interface, with PHP on the server side and MySQL as the database. The repository application is named "T-REPOSITORY" and can be downloaded from the social coding site GitHub. Keywords: Repository; Open Archives Initiative (OAI); open source; CodeIgniter; Node.js
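    The system described is built with CodeIgniter, Node.js, and PHP; as a language-neutral illustration of the OAI side, the following Python sketch shows the general shape of an OAI-PMH ListRecords response with Dublin Core metadata that a harvester expects. The record fields, identifiers, and URL are hypothetical, not taken from the paper.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

OAI_NS = "http://www.openarchives.org/OAI/2.0/"
DC_NS = "http://purl.org/dc/elements/1.1/"
OAI_DC_NS = "http://www.openarchives.org/OAI/2.0/oai_dc/"

def list_records(records, base_url):
    """Build a minimal OAI-PMH ListRecords response with Dublin Core metadata."""
    root = Element("OAI-PMH", xmlns=OAI_NS)
    SubElement(root, "request", verb="ListRecords", metadataPrefix="oai_dc").text = base_url
    container = SubElement(root, "ListRecords")
    for rec in records:
        record = SubElement(container, "record")
        header = SubElement(record, "header")
        SubElement(header, "identifier").text = rec["identifier"]
        SubElement(header, "datestamp").text = rec["datestamp"]
        metadata = SubElement(record, "metadata")
        dc = SubElement(metadata, "oai_dc:dc",
                        {"xmlns:oai_dc": OAI_DC_NS, "xmlns:dc": DC_NS})
        SubElement(dc, "dc:title").text = rec["title"]
        SubElement(dc, "dc:creator").text = rec["creator"]
    return tostring(root, encoding="unicode")

# Hypothetical record standing in for an entry in the repository database.
print(list_records(
    [{"identifier": "oai:t-repository:1", "datestamp": "2020-01-01",
      "title": "Example thesis", "creator": "A. Author"}],
    "https://example.org/oai"))
```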

    Opportunities and challenges in partitioning the graph measure space of real-world networks

    Based on a large dataset containing thousands of real-world networks, ranging from genetic, protein interaction, and metabolic networks to brain, language, ecology, and social networks, we search for defining structural measures of the different complex network domains (CNDs). We calculate 208 measures for all networks and, using a comprehensive and scrupulous workflow of statistical and machine learning methods, investigate the limitations and possibilities of identifying the key graph measures of CNDs. Our approach identifies well-distinguishable groups of network domains and reveals their relevant features. These features turn out to be CND specific and not unique even at the level of individual CNDs. The presented methodology may be applied to other similar scenarios involving highly unbalanced and skewed datasets.
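    A hedged sketch of the general workflow the abstract describes: compute structural measures for each network, then train a classifier to separate domains. The handful of measures, the synthetic graphs, and the random-forest model are illustrative assumptions, not the 208 measures or the exact pipeline used in the study.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def structural_measures(G):
    """A few illustrative graph measures (the study computes 208)."""
    degrees = [d for _, d in G.degree()]
    return [
        G.number_of_nodes(),
        G.number_of_edges(),
        float(np.mean(degrees)),
        nx.density(G),
        nx.transitivity(G),
        nx.degree_assortativity_coefficient(G),
    ]

# Hypothetical corpus: synthetic networks standing in for two "domains".
graphs = [nx.erdos_renyi_graph(100, 0.05) for _ in range(20)] + \
         [nx.barabasi_albert_graph(100, 3) for _ in range(20)]
labels = [0] * 20 + [1] * 20

# Feature matrix (NaN-safe, since some measures are undefined on edge cases).
X = np.nan_to_num(np.array([structural_measures(G) for G in graphs]))
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```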

    ELRUNA: Elimination Rule-based Network Alignment

    Networks model a variety of complex phenomena across different domains. In many applications, one of the most essential tasks is to align two or more networks to infer the similarities between cross-network vertices and discover potential node-level correspondence. In this thesis, we propose ELRUNA (Elimination rule-based network alignment), a novel network alignment algorithm that relies exclusively on the underlying graph structure. Under the guidance of the elimination rules that we define, ELRUNA computes the similarity between a pair of cross-network vertices iteratively by accumulating the similarities between their selected neighbors. The resulting cross-network similarity matrix is then used to infer a permutation matrix that encodes the final alignment of cross-network vertices. In addition to the novel alignment algorithm, we also improve the performance of local search, a commonly used post-processing step for solving the network alignment problem, by introducing a novel selection method, RAWSEM (Random-walk based selection method), based on the propagation of the levels of mismatching (defined in the thesis) of vertices across the networks. The key idea is to pass on the initial levels of mismatching of vertices throughout the entire network in a random-walk fashion. Through extensive numerical experiments on real networks, we demonstrate that ELRUNA significantly outperforms the state-of-the-art alignment methods in terms of alignment accuracy, with lower or comparable running time. Moreover, ELRUNA is robust to network perturbations, maintaining a close-to-optimal objective value under high levels of noise added to the original networks. Finally, the proposed RAWSEM can further improve the alignment quality with fewer iterations compared with the naive local search method.
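    The following is a deliberately simplified sketch of the generic iterative scheme the abstract alludes to: cross-network vertex similarities refined from neighbour similarities, then turned into a one-to-one alignment. It does not implement ELRUNA's elimination rules or its exact update; the normalisation and the greedy extraction step are assumptions made for illustration.

```python
import numpy as np

def iterative_similarity(A1, A2, iterations=20):
    """Iteratively refine a cross-network similarity matrix.

    S[i, j] is recomputed from the similarities between i's neighbours in
    the first network and j's neighbours in the second, normalised by the
    degree product -- a stand-in for a rule-guided accumulation.
    """
    n1, n2 = len(A1), len(A2)
    S = np.ones((n1, n2)) / (n1 * n2)
    for _ in range(iterations):
        S = (A1 @ S @ A2.T) / (np.outer(A1.sum(1), A2.sum(1)) + 1e-9)
        S /= S.max() or 1.0
    return S

def greedy_alignment(S):
    """Extract a one-to-one vertex mapping from the similarity matrix greedily."""
    S = S.copy()
    mapping = {}
    for _ in range(min(S.shape)):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        mapping[int(i)] = int(j)
        S[i, :] = -np.inf
        S[:, j] = -np.inf
    return mapping

# Toy example: align a triangle with itself.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(greedy_alignment(iterative_similarity(A, A)))
```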

    Virtualization techniques for memory resource exploitation

    Cloud infrastructures have become indispensable in our daily lives with the rise of cloud-based services offered by companies like Facebook, Google, Amazon, and many others. These cloud infrastructures use a large number of servers provisioned with their own computing resources. Each of these servers runs a piece of software, called the Hypervisor (HV), that allows it to create multiple virtual instances of the server's physical computing resources and abstract them into "Virtual Machines" (VMs). A VM runs an operating system, which in turn runs the applications. The VMs within the servers generate varying memory demand. When the demand increases, costly operations such as (virtual) disk accesses and/or VM migrations can occur. As a result, it is necessary to optimize the utilization of the local memory resources within a single computing server. However, pressure on the memory resources can still increase, making it necessary to migrate the VM to a different server with more memory or to add more memory to the same server. At this point, it is important to consider that some of the servers in the cloud infrastructure might have memory resources that they are not using. Considering the possibility of making this memory available to the servers that need it, new architectures have been introduced that provide hardware support enabling servers to share their memory capacity. This thesis presents multiple contributions to the memory management problem. First, it addresses the problem of optimizing memory resources in a virtualized server through different types of memory abstractions; two contributions, SmarTmem and CARLEMM, are presented for managing memory within a single server, together with a third contribution, CAVMem, that serves as the foundation for CARLEMM. Second, the thesis presents two contributions for memory capacity aggregation across multiple servers, offering two mechanisms called GV-Tmem and vMCA, the latter based on GV-Tmem but with significant enhancements. These mechanisms distribute the server's total memory both within a single server and globally across computing servers, using a user-space process with high-level memory management policies.
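    As a toy illustration of the kind of high-level, user-space policy the abstract mentions (not a reimplementation of SmarTmem, CARLEMM, or the other mechanisms), the sketch below splits a server's memory budget among VMs in proportion to their demand above their reservations. The VM names, sizes, and the proportional-share rule are assumptions.

```python
def redistribute_memory(vms, total_mb):
    """Split a server's memory budget among VMs proportionally to demand.

    `vms` maps a VM name to (reserved_mb, demanded_mb). Each VM keeps its
    reservation; the remaining pool is divided in proportion to demand
    above the reservation.
    """
    reserved = sum(r for r, _ in vms.values())
    pool = max(total_mb - reserved, 0)
    extra_demand = {name: max(d - r, 0) for name, (r, d) in vms.items()}
    total_extra = sum(extra_demand.values()) or 1
    allocation = {}
    for name, (r, _) in vms.items():
        # Proportional share of the leftover pool, on top of the reservation.
        allocation[name] = r + int(pool * extra_demand[name] / total_extra)
    return allocation

# Hypothetical server with 16 GB usable for VMs.
vms = {"vm-a": (2048, 6144), "vm-b": (2048, 2048), "vm-c": (4096, 8192)}
print(redistribute_memory(vms, 16384))
```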