420 research outputs found

    Development of business process management on system backups with Veeam backup and replications as solution for SMBs (Small and midsize business)

    Get PDF
    Project work presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. In a fast and volatile world, where countless data flow continuously and information multiplies instantly, small and large companies alike run a successful business by keeping, organizing, and managing the available data, which is not only extremely valuable in decision-making but must also be stored safely and recoverably. The value of this flow of information puts a premium on the proper care and management of backups and information storage, especially when unforeseen hazardous events hit the company. Nowadays, loss of long-stored data is frequent in big enterprises, but it also happens daily to individuals. It can result from an oversight, computer sabotage, flooding, malicious software, theft of devices, or even the actions of a company's own employees. Losing information weakens and compromises a business, causing considerable losses in both resources and disrupted processes. Every company needs means that guarantee the continuity of its services without putting its organizational structure at risk. Technology continues to develop in huge strides, providing solutions and processes to prevent such issues. The backup system suggested in this project work provides the tools needed to safeguard data and to implement Veeam Backup & Replication effectively, within the possibilities of the company.
However, the main purpose of this project is to conduct a thorough analysis not only from the point of view of implementation and solutions, but also from the perspective of business process management (BPM) of backups, applying it appropriately and aligning backup processes with future technological solutions, as well as other aspects to be explored further.

    Adaptive Dispatching of Tasks in the Cloud

    Full text link
    The increasingly wide application of Cloud Computing enables the consolidation of tens of thousands of applications in shared infrastructures. Thus, meeting the quality of service requirements of so many diverse applications in such shared resource environments has become a real challenge, especially since the characteristics and workload of applications differ widely and may change over time. This paper presents an experimental system that can exploit a variety of online quality of service aware adaptive task allocation schemes, and three such schemes are designed and compared. These are a measurement driven algorithm that uses reinforcement learning, secondly a "sensible" allocation algorithm that assigns jobs to sub-systems that are observed to provide a lower response time, and then an algorithm that splits the job arrival stream into sub-streams at rates computed from the hosts' processing capabilities. All of these schemes are compared via measurements among themselves and with a simple round-robin scheduler, on two experimental test-beds with homogeneous and heterogeneous hosts having different processing capacities.Comment: 10 pages, 9 figure

    On Evaluating Commercial Cloud Services: A Systematic Review

    Full text link
    Background: Cloud Computing is increasingly booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is tremendous confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve the aforementioned chaos, this work aims to synthesize the existing evaluation implementations to outline the state of the practice and to identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence to investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The overall data collected from these studies essentially represent the current practical landscape of implementing Cloud services evaluation, and in turn can be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some of the findings of this SLR identify several research gaps in the area of Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Methodologies for the Automatic Location of Academic and Educational Texts on the Internet

    Get PDF
    Traditionally, online databases of web resources have been compiled by a human editor, or through the submissions of authors or interested parties. Considerable resources are needed to maintain a constant level of input and relevance in the face of increasing material quantity and quality, and much of what is in databases is of an ephemeral nature. These pressures dictate that many databases stagnate after an initial period of enthusiastic data entry. The solution to this problem would seem to be the automatic harvesting of resources; however, this process necessitates the automatic classification of resources as ‘appropriate’ to a given database, a problem only solved by complex text content analysis. This paper outlines the component methodologies necessary to construct such an automated harvesting system, including a number of novel approaches. In particular, this paper looks at the specific problems of automatically identifying academic research work and Higher Education pedagogic materials. Where appropriate, experimental data is presented from searches in the field of Geography as well as the Earth and Environmental Sciences. In addition, appropriate software is reviewed where it exists, and future directions are outlined.
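    A toy version of such content-based classification — scoring a document by cue phrases typical of academic writing — might look like the sketch below. The cue list and threshold are purely illustrative assumptions, far simpler than the text-content analysis the paper describes:

```python
# Hypothetical cue phrases that tend to mark academic research writing.
ACADEMIC_CUES = (
    "abstract", "methodology", "we present", "results show",
    "literature review", "et al", "references", "hypothesis",
)

def academic_score(text: str) -> float:
    """Fraction of cue phrases present in the (lower-cased) text."""
    lowered = text.lower()
    hits = sum(1 for cue in ACADEMIC_CUES if cue in lowered)
    return hits / len(ACADEMIC_CUES)

def is_academic(text: str, threshold: float = 0.25) -> bool:
    # A real harvester would combine many such signals in a trained model.
    return academic_score(text) >= threshold
```

    In practice such keyword heuristics would be one feature among many, combined with structural cues (citations, section headings) and link context.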

    Temporal Isolation Among LTE/5G Network Functions by Real-time Scheduling

    Get PDF
    Radio access networks for future LTE/5G scenarios need to be designed so as to satisfy increasingly stringent requirements in terms of overall capacity, individual user performance, flexibility, and power efficiency. This is triggering a major shift in the Telecom industry from statically sized, physically provisioned network appliances towards virtualized network functions that can be elastically deployed within a flexible private cloud of network operators. However, a major issue in delivering strong QoS levels is keeping in check the temporal interference among co-located services as they compete for shared physical resources. In this paper, this problem is tackled by proposing a solution making use of a real-time scheduler with strong temporal isolation guarantees at the OS/kernel level. This allows for the development of a mathematical model linking the major parameters of the system configuration and the input traffic characterization with the achieved performance and response-time probabilistic distribution. The model is verified through extensive experiments on Linux with a synthetic benchmark tuned according to data from a real LTE packet processing scenario.
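    As a hedged illustration of the kind of model involved (not the paper's actual derivation), treating a network function running under a fixed CPU reservation as an M/M/1 queue gives a closed-form response-time distribution; all parameter names below are our own:

```python
import math

def response_time_tail(arrival_rate, service_rate_full, cpu_share, t):
    """P(response time > t) for an M/M/1 queue whose server runs at a
    reserved fraction `cpu_share` of the full-speed service rate.

    Under the M/M/1 model the sojourn time is exponentially distributed
    with rate (mu_eff - lambda), where mu_eff = cpu_share * service_rate_full.
    """
    mu_eff = cpu_share * service_rate_full
    if arrival_rate >= mu_eff:
        raise ValueError("unstable: arrival rate >= effective service rate")
    return math.exp(-(mu_eff - arrival_rate) * t)
```

    For example, at an arrival rate of 50 jobs/s, a full-speed service rate of 200 jobs/s, and a 50% reservation, the effective rate is 100 jobs/s and the response-time tail decays at rate 50 per second.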

    Strong Temporal Isolation among Containers in OpenStack for NFV Services

    Get PDF
    In this paper, the problem of temporal isolation among containerized software components running in shared cloud infrastructures is tackled, proposing an approach based on hierarchical real-time CPU scheduling. This allows for reserving a precise share of the available computing power for each container deployed on a multi-core server, so as to provide it with stable performance, independently of the load of other co-located containers. The proposed technique enables the use of reliable modeling techniques for end-to-end service chains that are effective in controlling application-level performance. An implementation of the technique within the well-known OpenStack cloud orchestration software is presented, focusing on a use case framed in the context of network function virtualization. The modified OpenStack is capable of leveraging the special real-time scheduling features made available in the underlying Linux operating system through a patch to the in-kernel process scheduler. The effectiveness of the technique is validated by gathering performance data from two applications running in a real test-bed with the mentioned modifications to OpenStack and the Linux kernel. A performance model is developed that closely models the application behavior under a variety of conditions. Extensive experimentation shows that the proposed mechanism is successful in guaranteeing isolation of individual containerized activities on the platform.
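    In cgroup-style CPU bandwidth control — one common realization of such per-container reservations, though not necessarily the patched scheduler the paper uses — a share is expressed as a (runtime, period) budget: the container may run for at most `quota` microseconds in every `period` window. A minimal sketch of that conversion (names assumed):

```python
def cpu_reservation(cores, period_us=100_000):
    """Convert a reservation expressed in units of full cores (e.g. 0.25
    or 1.5) into a (quota_us, period_us) budget, analogous to the cgroup
    v2 cpu.max interface. quota_us may exceed period_us when the
    reservation spans more than one core of a multi-core server.
    """
    if cores <= 0:
        raise ValueError("reservation must be positive")
    quota_us = int(cores * period_us)
    return quota_us, period_us
```

    For instance, `cpu_reservation(0.25)` yields `(25000, 100000)`: a quarter of one core, enforced every 100 ms window.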

    Who is Really Undermining the Patent System – “Patent Trolls” or Congress?, 6 J. Marshall Rev. Intell. Prop. L. 185 (2007)

    Get PDF
    “Patent troll” has entered the legal lexicon, stirring up heated debates over fundamental issues of patent rights. This article discusses the etymology of the term “patent troll” — from its beginnings as a deliberately derogatory term thrust forward as a defense to weaken the enforcement of patents against large corporations, to its current manifestation as a call for patent reform. Interestingly, statistics show the “patent troll” problem is grossly overstated compared to the contentions of the corporate world. Moreover, enforcement of patents stimulates small business growth, innovation, and dissemination of knowledge to the public. This article suggests Congressional diversion of PTO funding as a more pressing issue burdening an already overworked system, increasing the duration of patent prosecution and diminishing overall patent quality. Resolution of these issues will better serve to “promote the useful arts” than a misguided effort against “patent trolls.”

    Performance Modeling of Softwarized Network Services Based on Queuing Theory with Experimental Validation

    Get PDF
    Network Functions Virtualization facilitates the automation of the scaling of softwarized network services (SNSs). However, the realization of such a scenario requires a way to determine the amount of resources needed so that the SNSs' performance requirements are met for a given workload. This problem is known as resource dimensioning, and it can be efficiently tackled by performance modeling. In this vein, this paper describes an analytical model based on an open queuing network of G/G/m queues to evaluate the response time of SNSs. We validate our model experimentally for a virtualized Mobility Management Entity (vMME) with a three-tiered architecture running on a testbed that resembles a typical data center virtualization environment. We detail the description of our experimental setup and procedures. We solve our resulting queueing network by using the Queueing Networks Analyzer (QNA), Jackson's networks, and Mean Value Analysis methodologies, and compare them in terms of estimation error. Results show that, for medium and high workloads, the QNA method achieves less than half the error of the standard techniques. For low workloads, the three methods produce an error lower than 10%. Finally, we show the usefulness of the model for performing the dynamic provisioning of the vMME experimentally. This work has been partially funded by the H2020 research and innovation project 5G-CLARITY (Grant No. 871428), the national research project 5G-City (TEC2016-76795-C6-4-R), and the Spanish Ministry of Education, Culture and Sport (FPU Grant 13/04833). We would also like to thank the reviewers for their valuable feedback to enhance the quality and contribution of this work.
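    The flavor of a QNA-style two-moment analysis can be illustrated with the Allen–Cunneen approximation for the mean waiting time of a single G/G/m queue — a textbook approximation in the same spirit, not the paper's exact method:

```python
import math

def erlang_c(m, rho):
    """Erlang C: probability that an arriving job must wait in an
    M/M/m queue with m servers at per-server utilization rho."""
    a = m * rho  # offered load
    s = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1 - rho))
    return last / (s + last)

def gg_m_wait(lam, mu, m, ca2, cs2):
    """Allen-Cunneen approximation of the mean waiting time in G/G/m.

    lam -- arrival rate; mu -- per-server service rate; m -- servers;
    ca2/cs2 -- squared coefficients of variation of interarrival and
    service times (ca2 = cs2 = 1 recovers the exact M/M/m result).
    """
    rho = lam / (m * mu)
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    wq_mmm = erlang_c(m, rho) / (m * mu - lam)  # M/M/m mean wait
    return ((ca2 + cs2) / 2) * wq_mmm

def gg_m_response(lam, mu, m, ca2, cs2):
    # Mean response time = mean waiting time + mean service time.
    return gg_m_wait(lam, mu, m, ca2, cs2) + 1 / mu
```

    QNA extends this kind of two-moment node analysis to a whole network by propagating the variability parameters between queues; chaining such per-tier estimates would approximate the end-to-end response time of a multi-tier vMME.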