
    ANALYSIS AND DESIGN OF A MYSQL DATABASE REPLICATION SYSTEM USING VMWARE ON AN OPEN SOURCE OPERATING SYSTEM

    Replication is a technique for copying and distributing data and database objects from one database to another and synchronizing the databases so that data consistency is guaranteed. Database replication can be used when an organization or company is supported by hardware and software applications in a distributed system connected through a local network or the Internet. Different applications have different requirements for autonomy and data consistency, and users can work on a copy of the data while disconnected and then apply their changes, creating an updated database, once reconnected. VMware Workstation and Ubuntu Server 14.0 provide facilities for simulating a network between PCs even when no physical network card, hub, or switch is installed. Using VMware Workstation and Ubuntu Server, a database replication system is built. With this replication system, every client or user can share the database replicated from the server, so that every change on either the server or the client side is immediately stored across the whole system
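    The abstract stops short of the actual setup; a minimal MySQL master-slave configuration sketch, with illustrative file paths and database names, might look like this:

```ini
# /etc/mysql/my.cnf on the master (values are illustrative)
[mysqld]
server-id    = 1
log_bin      = /var/log/mysql/mysql-bin.log
binlog_do_db = repldb

# /etc/mysql/my.cnf on the slave
[mysqld]
server-id       = 2
relay_log       = /var/log/mysql/mysql-relay-bin.log
replicate_do_db = repldb
```

    After restarting both servers, the slave is pointed at the master with `CHANGE MASTER TO MASTER_HOST=..., MASTER_LOG_FILE=..., MASTER_LOG_POS=...` followed by `START SLAVE`; the log file and position come from `SHOW MASTER STATUS` on the master.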

    Design and Implementation of Running Support System by Providing Common Routes of Runners

    This paper proposes the design and implementation of a running support system using mobile devices. As technology develops rapidly, new technologies bring more convenience, but their evolution also introduces problems. One growing concern is that people do not get enough exercise to maintain their health. Cellphones and smartphones have become intelligent and indispensable, and mobile applications (apps) are no longer novel. Given the integration, multi-functionality, personal customization, and other advantages of mobile applications, modern society needs an exercise application that combines interaction, competition, and quality communication to encourage people to exercise more. Such an application must make it easy for people to communicate with each other and discuss their daily exercise, resembling a new kind of fitness software that helps people become healthier. First, this paper surveys the demand for health in daily life and the kinds of smartphone exercise applications people might use to become healthier; the survey covers respondents both in Japan and from other countries. By examining specific samples from the survey and analyzing their characteristics and flaws, we refined our application design. Next, based on the survey results, we identified the features shared by existing exercise applications and explain our new design concept. The design rests on two algorithms, an automatic algorithm that generates a common path and a determination algorithm, which together drive the proposed fitness application. The common path they produce is the central concept of the proposed mobile sport application. 
Finally, based on our design concept and the above algorithms, we implement the fitness application on iOS, which we call Run-Map-APP. DOI: 10.17762/ijritcc2321-8169.15075
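    The common-path algorithm itself is not reproduced in the abstract. As a hedged sketch, one simple way to derive a "common path" from two recorded GPS routes is to keep the points of one route that lie within a small distance of the other; the function names and the 30 m tolerance below are assumptions, not the authors' implementation:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def common_path(route_a, route_b, tolerance_m=30.0):
    """Keep the points of route_a that lie within tolerance_m of route_b."""
    return [p for p in route_a
            if any(haversine_m(p, q) <= tolerance_m for q in route_b)]
```

    A production version would also have to interpolate between sparse GPS fixes and smooth out sensor noise before matching.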

    Improving efficiency and resilience in large-scale computing systems through analytics and data-driven management

    Applications running in large-scale computing systems such as high performance computing (HPC) or cloud data centers are essential to many aspects of modern society, from weather forecasting to financial services. As the number and size of data centers increase with the growing computing demand, scalable and efficient management becomes crucial. However, data center management is a challenging task due to the complex interactions between applications, middleware, and hardware layers such as processors, network, and cooling units. This thesis claims that to improve robustness and efficiency of large-scale computing systems, significantly higher levels of automated support than what is available in today's systems are needed, and this automation should leverage the data continuously collected from various system layers. Towards this claim, we propose novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management. We first propose a framework to diagnose software and hardware anomalies that cause undesired performance variations in large-scale computing systems. We show that by training machine learning models on resource usage and performance data collected from servers, our approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead. We then introduce an analytics framework to address another major source of performance anomalies in cloud data centers: software misconfigurations. Our framework discovers and extracts configuration information from cloud instances such as containers or virtual machines. This is the first framework to provide comprehensive visibility into software configurations in multi-tenant cloud platforms, enabling systematic analysis for validating the correctness of software configurations. 
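    The thesis's actual machine learning models are not given in the abstract. A toy nearest-centroid sketch conveys the underlying idea of labelling a server's resource-usage vector against profiles learned from labelled training runs; the labels and two-feature vectors here are invented:

```python
import statistics

def train_centroids(samples):
    """samples maps a label to a list of feature vectors (e.g. CPU and
    memory utilisation); returns the per-label mean vector."""
    return {label: [statistics.mean(col) for col in zip(*vecs)]
            for label, vecs in samples.items()}

def diagnose(centroids, observation):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2
                   for a, b in zip(observation, centroids[label])) ** 0.5
    return min(centroids, key=dist)
```

    The thesis reports far stronger models trained on real cluster telemetry; this sketch only shows how labelled resource-usage data turns into a runtime diagnosis.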
This thesis also contributes to the design of robust and efficient system management methods that leverage continuously monitored resource usage data. To improve performance under power constraints, we propose a workload- and cooling-aware power budgeting algorithm that distributes the available power among servers and cooling units in a data center, achieving up to 21% improvement in throughput per Watt compared to the state-of-the-art. Additionally, we design a network- and communication-aware HPC workload placement policy that reduces communication overhead by up to 30% in terms of hop-bytes compared to existing policies.
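    The workload- and cooling-aware algorithm itself is not described in the abstract; the following deliberately simplified sketch only illustrates the general shape of such a policy: split a total power budget across servers in proportion to demand, cap each server, and redistribute what the capped servers cannot use:

```python
def budget_power(total_w, demands, cap_w):
    """Split total_w (Watts) across servers proportionally to demand,
    capping each server at cap_w and redistributing the surplus."""
    alloc = {s: 0.0 for s in demands}
    remaining = dict(demands)   # servers still able to take more power
    budget = total_w
    while remaining and budget > 1e-9:
        total_d = sum(remaining.values())
        spill, capped = 0.0, []
        for s, d in remaining.items():
            share = budget * d / total_d      # proportional share
            grant = min(share, cap_w - alloc[s])
            alloc[s] += grant
            spill += share - grant            # power the cap refused
            if alloc[s] >= cap_w - 1e-9:
                capped.append(s)
        for s in capped:
            remaining.pop(s)
        budget = spill
        if not capped:
            break  # nobody hit the cap, so nothing left to redistribute
    return alloc
```

    The thesis's algorithm additionally accounts for cooling units and workload characteristics; this sketch covers only the budget-splitting skeleton.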

    Educational research in architecture: ICT tool for historical buildings evaluation

    Historical buildings are threatened by their vulnerability to degradation. This problem is no exception in Portugal, where the few remaining examples of vernacular wooden stilt-houses on river banks have been neglected, with almost all buildings disappearing or being abandoned. This research presents the results of educational research in architecture: the creation of software for evaluating the conservation status of historical buildings. As a case study, it presents the Portuguese wooden stilt-house village of Lezirão, where this ICT (information and communication technology) platform was tested. This is one of the five remaining vernacular villages of this kind on river banks in the country, near the Tagus river. The research was developed in a PhD thesis in Architecture at the University of Beira Interior in Covilhã. The ICT platform is a pioneering approach in educational research in architecture related to the evaluation of wooden stilt-houses in Portugal, and it can be used for similar buildings all over the world

    Study of the development of an IoT-based sensor platform for e-agriculture

    E-agriculture, sometimes referred to as 'ICT in agriculture' (information and communication technologies in agriculture) or simply 'smart agriculture', is a relatively recent and emerging field focused on enhancing agricultural and rural development through improved information and communication processes. The concept involves the design, development, evaluation, and application of innovative ways to use IoT technologies in the rural domain, with a primary focus on agriculture, in order to grow food for the masses more sustainably. In IoT-based agriculture, platforms are built to monitor the crop field with the help of sensors (light, humidity, temperature, soil moisture, etc.) and to automate the irrigation system. Farmers can monitor field conditions from anywhere, which is far more efficient than conventional approaches
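    As a hedged illustration of the automation loop such a platform runs (the thresholds and field names below are invented, not taken from the study), the irrigation decision can be as simple as comparing the latest soil-moisture reading against a set point and skipping watering when rain is forecast:

```python
def should_irrigate(readings, moisture_min_pct=30.0, rain_forecast_mm=0.0):
    """Decide whether to switch the irrigation system on."""
    if readings["soil_moisture_pct"] >= moisture_min_pct:
        return False  # soil is wet enough
    if rain_forecast_mm >= 5.0:
        return False  # rain expected soon; let nature water the field
    return True
```

    In a deployed platform this decision would run on a gateway or in the cloud, fed by periodic sensor uploads, and would drive a relay or valve actuator.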

    Full-text ETD retrieval in library discovery system: designing a framework

    This paper discusses the design of an open-source-software-based library discovery system for full-text ETD retrieval, built on a cataloguing framework developed from available global standards and best practices in the domain of theses cataloguing. The purpose of this prototype framework is to provide a single-window search and retrieval system that lets end users discover ETDs at both the metadata level and the full-text level. The prototype is based on a three-layer architecture with the Koha ILS as backend metadata provider, Apache Tika as full-text extractor, and VuFind as discovery system. A MARC-21 bibliographic format, specially designed to handle theses and dissertations, serves as the data-handling mechanism in the Koha ILS, and the VuFind harvester is tuned to fetch ETD bibliographic data in MARCXML format. The VuFind user interface is also configured to support access to ETDs from global-scale services such as NDLTD, OATD, IndCat, and ShodhGanga, apart from the local ETD collection, in order to provide an all-in-one search interface for users
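    The harvested records arrive as MARCXML, so the discovery layer ultimately works with fields such as 100 (author) and 245 (title). A small sketch of extracting those subfields from one record with the Python standard library follows; the framework's actual field mapping is not given in the paper, only the standard MARC 21 tags are assumed:

```python
import xml.etree.ElementTree as ET

MARC_NS = "{http://www.loc.gov/MARC21/slim}"

def title_and_author(marcxml):
    """Pull the title (245 $a) and author (100 $a) out of one MARCXML record."""
    record = ET.fromstring(marcxml)
    out = {}
    for field in record.iter(MARC_NS + "datafield"):
        tag = field.get("tag")
        if tag in ("100", "245"):
            for sub in field.iter(MARC_NS + "subfield"):
                if sub.get("code") == "a":
                    out["author" if tag == "100" else "title"] = sub.text
    return out
```

    A full harvester would iterate over every record in an OAI-PMH response and map many more fields, but the namespace-qualified traversal stays the same.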

    VISOR: virtual machine images management service for cloud infrastructures

    Cloud Computing is a relatively novel paradigm that aims to fulfill the computing-as-a-utility dream. It has emerged to provide computing resources (such as servers, storage, and networks) as a service, on demand, making them accessible through common Internet protocols. Through cloud offers, users pay only for the amount of resources they need and for the time they use them. Virtualization is the cloud's key technology, acting upon virtual machine images to deliver fully functional virtual machine instances. Therefore, virtual machine images play an important role in Cloud Computing, and their efficient management becomes a key concern that should be carefully addressed. To meet this requirement, most cloud offers provide their own image repository, where images are stored and retrieved from in order to instantiate new virtual machines. However, the rise of Cloud Computing has brought new problems in managing large collections of images. Existing image repositories are not able to efficiently manage, store, and catalogue virtual machine images from other clouds through the same centralized repository service. This becomes especially important when considering the management of multiple heterogeneous cloud offers. In fact, despite the hype around Cloud Computing, barriers to its widespread adoption remain, and cloud interoperability is one of the most notable issues. Interoperability limitations arise from the fact that current cloud offers provide proprietary interfaces and their services are tied to their own requirements. Therefore, when dealing with multiple heterogeneous clouds, users face hard-to-manage integration and compatibility issues. The management and delivery of virtual machine images across different clouds is an example of such interoperability constraints. This dissertation presents VISOR, a cloud-agnostic virtual machine image management service and repository. 
Our work on VISOR aims to provide a service designed not to fit a specific cloud offer but rather to overcome sharing and interoperability limitations among different clouds. With VISOR, the management of cloud interoperability can be seamlessly abstracted from the underlying procedural details. In this way, it aims to give users the ability to manage and expose virtual machine images across heterogeneous clouds through the same generic, centralized repository and management service. VISOR is open-source software with a community-driven development process, so it can be freely customized and further improved by anyone. The tests conducted to evaluate its performance and resource usage have shown VISOR to be a stable, high-performance service, even when compared with other services already in production. Lastly, placing clouds as the main target audience is not a limitation for other use cases. In fact, virtualization and virtual machine images are not exclusively linked to cloud environments. Therefore, given the service's agnostic design, it is possible to adapt it to other usage scenarios as well
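    VISOR's actual API is not spelled out in the abstract; the following hedged sketch only illustrates what a cloud-agnostic image service interface can look like, with one pluggable storage backend per cloud (all names here are invented):

```python
from abc import ABC, abstractmethod

class ImageStore(ABC):
    """Backend-agnostic storage interface; one subclass per cloud or backend."""
    @abstractmethod
    def put(self, image_id: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, image_id: str) -> bytes: ...

class InMemoryStore(ImageStore):
    """Stand-in backend for tests; a real one would wrap S3, Swift, etc."""
    def __init__(self):
        self._blobs = {}
    def put(self, image_id, data):
        self._blobs[image_id] = data
    def get(self, image_id):
        return self._blobs[image_id]

class ImageService:
    """Facade in the spirit of VISOR: clients use one API for any backend."""
    def __init__(self, store):
        self.store = store
        self.metadata = {}
    def register(self, image_id, data, **meta):
        self.store.put(image_id, data)
        self.metadata[image_id] = meta
    def fetch(self, image_id):
        return self.store.get(image_id), self.metadata[image_id]
```

    Swapping `InMemoryStore` for a backend that talks to a particular cloud's object store is the point of the abstraction: client code never changes.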

    Security comparison of ownCloud, Nextcloud, and Seafile in open source cloud storage solutions

    Cloud storage has become one of the most efficient and economical ways to store data over the web. Although most organizations have adopted cloud storage, there are numerous privacy and security concerns about cloud storage and collaboration, and adopting public cloud storage may be costly for many enterprises. An open-source cloud storage solution for file sharing is a possible alternative in this case. Despite widespread awareness, there is limited information on system architecture, security measures, and overall throughput consequences when selecting open-source cloud storage solutions, and no comprehensive comparisons are available that evaluate the open-source solutions ownCloud, Nextcloud, and Seafile and analyze the impact of platform selection. This thesis presents the concept of cloud storage and a detailed account of the three popular open-source solutions' features, architecture, security mechanisms, vulnerabilities, and other aspects. The goal of the study is to compare these cloud solutions so that users may better understand the various open-source cloud storage options and make more knowledgeable selections. The author focuses on four attributes of the three solutions ("ownCloud," "Nextcloud," and "Seafile"): features, architecture, security, and vulnerabilities, since most critical issues fall into one of these classifications. The findings show that, while the three services take slightly different approaches to confidentiality, integrity, and availability, they all achieve the same purpose. As a result of this research, users will have a better understanding of the relevant factors and will be able to make a more informed decision about cloud storage options

    The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    BACKGROUND: Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. RESULTS: The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. CONCLUSION: The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development
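    The Codegen step can be pictured with a toy model-to-source generator. The real SDK emits Java middleware from an annotated UML model, so the Python stand-in below only illustrates the idea of deriving classes from a declarative model description:

```python
def generate_class(name, attributes):
    """Emit source for a plain data class described by a UML-like model:
    a class name plus a list of attribute names."""
    lines = [f"class {name}:",
             f"    def __init__(self, {', '.join(attributes)}):"]
    lines += [f"        self.{a} = {a}" for a in attributes]
    return "\n".join(lines)
```

    Executing the generated source yields a usable class, mirroring how the SDK turns a registered UML model into running middleware with consistent accessors for every class in the model.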