
    The AliEn system, status and perspectives

    AliEn is a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse HEP data in a distributed way. The system is built around Open Source components, uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 10 pages, Word, 10 figures. PSN MOAT00

    Implementing SaaS Solution for CRM

    Recent innovations in virtualization and distributed computing have accelerated interest in cloud computing (IaaS, PaaS, SaaS, etc.). This paper presents a SaaS prototype for Customer Relationship Management (CRM) at a real estate company. Starting from several approaches to e-marketing and from SaaS features and architectures, we adopted a model for a CRM solution using a SaaS Level 2 architecture and a distributed database. Based on the system's objectives and functionality, we developed a modular solution that addresses CRM and e-marketing targets in real estate companies.
    Keywords: E-Marketing, SaaS Architecture, Modular Development

    Data Storage and Dissemination in Pervasive Edge Computing Environments

    Nowadays, smart mobile devices generate huge amounts of data in all sorts of gatherings. Much of that data has localized and ephemeral interest, but can be of great use if shared among co-located devices. However, mobile devices often experience poor connectivity, leading to availability issues if application storage and logic are fully delegated to a remote cloud infrastructure. In turn, the edge computing paradigm pushes computation and storage beyond the data center, closer to the end-user devices where data is generated and consumed, enabling certain components of edge-enabled systems to run directly and cooperatively on edge devices. This thesis focuses on the design and evaluation of resilient and efficient data storage and dissemination solutions for pervasive edge computing environments, operating with or without access to the network infrastructure. In line with this dichotomy, our goal can be divided into two specific scenarios: the first concerns the absence of network infrastructure and the provision of a transient data storage and dissemination system for networks of co-located mobile devices; the second concerns the existence of network infrastructure access and the corresponding edge computing capabilities. First, the thesis presents time-aware reactive storage (TARS), a reactive data storage and dissemination model with intrinsic time-awareness that exploits synergies between the storage substrate and the publish/subscribe paradigm and allows queries within a specific time scope. Next, it describes in more detail: i) Thyme, a data storage and dissemination system for wireless edge environments, implementing TARS; ii) Parsley, a flexible and resilient group-based distributed hash table with preemptive peer relocation and a dynamic data sharding mechanism; and iii) Thyme GardenBed, a framework for data storage and dissemination across multi-region edge networks that makes use of both device-to-device and edge interactions. The developed solutions present low overheads while providing adequate response times for interactive usage and low energy consumption, proving practical in a variety of situations. They also display good load balancing and fault tolerance properties.
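    The TARS model described above couples a storage substrate with publish/subscribe and time-scoped queries: a subscription covers a time window and matches both already-stored and future items. A minimal in-memory sketch of that idea in Python (class and method names are invented for illustration, not Thyme's actual API):

```python
import time


class TimeAwareStore:
    """Sketch of a time-aware reactive store (TARS-like): every item
    carries a timestamp, and a subscription carries a time scope
    [start, end] matched against both past and future publications."""

    def __init__(self):
        self._items = []   # list of (timestamp, key, value)
        self._subs = []    # list of (key, start, end, callback)

    def publish(self, key, value, ts=None):
        ts = time.time() if ts is None else ts
        self._items.append((ts, key, value))
        # Reactively notify subscriptions whose time scope covers ts.
        for k, start, end, cb in self._subs:
            if k == key and start <= ts <= end:
                cb(key, value, ts)

    def subscribe(self, key, start, end, callback):
        """Register interest in `key` within [start, end]: items already
        stored in that scope are delivered immediately (the past), and
        new in-scope publications trigger the callback (the future)."""
        self._subs.append((key, start, end, callback))
        for ts, k, v in self._items:
            if k == key and start <= ts <= end:
                callback(k, v, ts)
```

    For example, subscribing to a key over a window delivers an item published before the subscription, then keeps delivering in-scope items as they arrive, while items outside the window are ignored.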

    Design and Implementation of a Measurement-Based Policy-Driven Resource Management Framework For Converged Networks

    This paper presents the design and implementation of a measurement-based QoS and resource management framework, CNQF (Converged Networks QoS Management Framework). CNQF is designed to provide unified, scalable QoS control and resource management through the use of a policy-based network management paradigm. It achieves this via distributed functional entities that are deployed to co-ordinate the resources of the transport network through centralized policy-driven decisions supported by a measurement-based control architecture. We present the CNQF architecture, the implementation of the prototype, and the validation of various inbuilt QoS control mechanisms using real traffic flows on a Linux-based experimental test bed.
    Comment: in ICTACT Journal on Communication Technology: Special Issue on Next Generation Wireless Networks and Applications, June 2011, Volume 2, Issue 2, ISSN: 2229-6948 (Online)
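    The policy-based control loop at the heart of such frameworks pairs conditions over live measurements with management actions. A minimal rule evaluator illustrating the paradigm (the rule shapes, metric names, and actions are invented, not CNQF's actual engine):

```python
def select_action(measurements, policies):
    """Sketch of policy-driven control: each policy is a (condition,
    action) pair evaluated against the latest measurements; the first
    matching policy wins, mirroring ordered policy rule sets."""
    for condition, action in policies:
        if condition(measurements):
            return action
    return "no-op"  # no policy matched; leave resources unchanged


# Hypothetical policies keyed on packet loss and one-way delay (ms).
POLICIES = [
    (lambda m: m["loss"] > 0.05, "reroute"),
    (lambda m: m["delay"] > 100, "throttle"),
]
```

    A measurement report with 10% loss would then trigger "reroute", while one with only high delay would trigger "throttle".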

    GRIDKIT: Pluggable overlay networks for Grid computing

    A 'second generation' approach to the provision of Grid middleware is now emerging, built on service-oriented architecture and web services standards and technologies. However, advanced Grid applications have significant demands that are not addressed by present-day web services platforms. As one prime example, current platforms do not support the rich diversity of communication 'interaction types' demanded by advanced applications (e.g. publish-subscribe, media streaming, peer-to-peer interaction). In this paper we describe the Gridkit middleware, which augments the basic service-oriented architecture to address this particular deficiency. We particularly focus on the communications infrastructure required to support multiple interaction types in a unified, principled and extensible manner, which we present in terms of the novel concept of pluggable overlay networks.
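    The idea of pluggable overlay networks can be sketched as a plug-in registry keyed by interaction type, so that publish-subscribe, streaming, or peer-to-peer overlays are created through one uniform framework. The interface and names below are invented for illustration and are not Gridkit's actual API:

```python
from abc import ABC, abstractmethod


class Overlay(ABC):
    """Hypothetical plug-in interface: each interaction type is
    realized by an overlay network that plugs into a common framework."""

    @abstractmethod
    def send(self, dest, msg):
        """Deliver `msg` towards `dest` using this overlay's strategy."""


class FloodOverlay(Overlay):
    """Toy overlay that 'delivers' by recording messages locally."""

    def __init__(self):
        self.delivered = []

    def send(self, dest, msg):
        self.delivered.append((dest, msg))


class OverlayRegistry:
    """Maps interaction types to overlay factories, so applications
    request an interaction style rather than a concrete transport."""

    def __init__(self):
        self._plugins = {}

    def register(self, interaction_type, factory):
        self._plugins[interaction_type] = factory

    def create(self, interaction_type, **config):
        if interaction_type not in self._plugins:
            raise KeyError(f"no overlay plugged in for {interaction_type!r}")
        return self._plugins[interaction_type](**config)
```

    New interaction types are then supported by registering a new factory, without touching application code, which is the extensibility the paper argues present-day platforms lack.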

    Distributed Mail Transfer Agent

    Technological advances have provided society with the means to communicate easily through several channels, starting with radio and television, moving on to e-mail and SMS, and nowadays targeting Internet surfing through channels such as Google Ads and Webpush notifications. Digital marketing has flooded these channels for product promotion and customer engagement, in order to provide customers with the best the organizations have to offer. E-goi is a web platform whose main objective is to facilitate digital marketing for all its customers, ranging from SMBs to Corporate/Enterprise, and to help them strengthen relationships with their own customers through digital communication. The platform's most widely used channel is e-mail, which is responsible for about fifteen million deliveries per day. The e-mail delivery system currently employed by E-goi is functional and fault-tolerant to a certain degree; however, it has several flaws, such as its monolithic architecture, which is responsible for high hardware usage and a lack of layer centralization, and its lack of deliverability-related functionality. This thesis aims to analyze and improve the architecture of E-goi's e-mail delivery system, a critical system of great importance and value for the product and the company. Business analysis tools will be used to demonstrate the value created for the company and its product, aiming at maintenance and infrastructure cost reduction as well as added functionality, both valid points for creating business value. The project's main objectives comprise an extensive analysis of the currently employed solution and the context to which it belongs, followed by a comparative discussion of existing competitors and technologies that may aid in the development of a new solution. Next, the solution's functional and non-functional requirements will be gathered; these requirements will dictate how the solution is to be developed. A thorough analysis of the project's value will follow, discussing which solution will bring the most value to E-goi as a product and organization. Upon deciding on the best solution, its design will be developed based on the previously gathered requirements and the best software design patterns, and will support the implementation phase that follows. Once implemented, the solution will need to pass several defined tests and hypotheses to ensure its performance and robustness. Finally, the conclusion will summarize the project's results and define future work for the newly created solution.

    The AutoSPADA Platform: User-Friendly Edge Computing for Distributed Learning and Data Analytics in Connected Vehicles

    Contemporary connected vehicles host numerous applications, such as diagnostics and navigation, and new software is continuously being developed. However, the development process typically requires offline batch processing of large data volumes. In an edge computing approach, data analysts and developers can instead process sensor data directly on computational resources inside vehicles. This enables rapid prototyping to shorten development cycles and reduce the time to create new business value or insights. This paper presents the design, implementation, and operation of the AutoSPADA edge computing platform for distributed data analytics. The platform's design follows scalability, reliability, resource efficiency, privacy, and security principles promoted through mature and industrially proven technologies. In AutoSPADA, computational tasks are general Python scripts, and we provide a library to, for example, read signals from the vehicle and publish results to the cloud. Hence, users only need Python knowledge to use the platform. Moreover, the platform is designed to be extended to support additional programming languages.
    Comment: 14 pages, 4 figures, 3 tables, 1 algorithm, 1 code listing
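    The abstract states that AutoSPADA tasks are plain Python scripts that use a client library to read vehicle signals and publish results to the cloud. A hypothetical task of that shape, shown against a stub client (the client API, signal names, and topics here are invented for illustration, not the real AutoSPADA library):

```python
def run_task(client, window=10):
    """Hypothetical on-vehicle task: average a wheel-speed signal over
    `window` samples and publish the aggregate to the cloud."""
    samples = [client.read_signal("wheel_speed") for _ in range(window)]
    avg = sum(samples) / len(samples)
    client.publish("wheel_speed_avg", avg)
    return avg


class StubClient:
    """Stand-in for the on-vehicle client library, so the task can be
    exercised off-vehicle (illustrative only)."""

    def __init__(self, values):
        self._values = iter(values)
        self.published = {}

    def read_signal(self, name):
        return next(self._values)

    def publish(self, topic, value):
        self.published[topic] = value
```

    This is the appeal of the design: the task itself is ordinary Python, so prototyping it against recorded or synthetic signals requires no platform-specific knowledge.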

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications, but also complete business processes, can be hosted on virtual cloud infrastructures. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
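    One of the infrastructural concerns surveyed here, resource allocation for elastic processes, can be illustrated with a simple threshold-based scaling policy: lease cloud resources when the backlog of process steps exceeds current capacity, release them when capacity is underused. The capacity parameters and names below are invented for illustration:

```python
import math


def scaling_decision(pending_tasks, active_vms, tasks_per_vm=10, min_vms=1):
    """Sketch of an elastic-process scaling policy.

    Returns the change in VM count: positive means lease that many
    additional VMs, negative means release, zero means keep as-is.
    `tasks_per_vm` is an assumed per-VM processing capacity."""
    needed = max(min_vms, math.ceil(pending_tasks / tasks_per_vm))
    return needed - active_vms
```

    For example, 25 pending steps on 2 VMs (capacity 10 each) would call for leasing one more VM, while 5 pending steps on 3 VMs would allow releasing two. Real elastic BPMSs refine this with deadlines, cost models, and per-step resource profiles, which is precisely where the open challenges discussed in the paper lie.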

    Virtual Log-Structured Storage for High-Performance Streaming

    Over the past decade, given the growing number of data sources (e.g., Cloud applications, Internet of Things) and critical business demands, Big Data transitioned from batch-oriented to real-time analytics. Stream storage systems, such as Apache Kafka, are well known for their increasing role in real-time Big Data analytics. For scalable stream data ingestion and processing, they logically split a data stream topic into multiple partitions. Stream storage systems keep multiple copies of each data stream to protect against data loss, implementing a stream partition as a replicated log. This architectural choice enables simplified development while trading cluster size against performance and the number of streams optimally managed. This paper introduces a shared virtual log-structured storage approach for improving cluster throughput when multiple producers and consumers write and consume data streams in parallel. Stream partitions are associated with shared replicated virtual logs transparently to the user, effectively separating the implementation of stream partitioning (and data ordering) from data replication (and durability). We implement the virtual log technique in the KerA stream storage system. Compared with Apache Kafka, KerA improves cluster ingestion throughput (for replication factor three) by up to 4x when multiple producers write over hundreds of data streams.
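    The core idea, decoupling stream partitioning from replication by mapping many partitions onto a few shared physical logs, can be sketched as follows. This is an in-memory, single-node toy without actual replication; the class and method names are illustrative, not KerA's implementation:

```python
class VirtualLogStore:
    """Sketch of virtual log-structured storage: many stream partitions
    ('virtual logs') share a small set of physical logs. Records from
    different partitions interleave in a shared log, preserving
    per-partition order while a real system would amortize replication
    of the few physical logs across all streams."""

    def __init__(self, num_shared_logs=2):
        self._logs = [[] for _ in range(num_shared_logs)]

    def _log_for(self, stream, partition):
        # Deterministically map a partition to one shared physical log.
        return self._logs[hash((stream, partition)) % len(self._logs)]

    def append(self, stream, partition, record):
        """Append a record to the partition's shared log; returns the
        physical offset within that log."""
        log = self._log_for(stream, partition)
        offset = len(log)
        log.append((stream, partition, record))
        return offset

    def read(self, stream, partition):
        """Read back one partition's records, in append order, filtered
        out of the shared log it was mapped to."""
        log = self._log_for(stream, partition)
        return [r for (s, p, r) in log if (s, p) == (stream, partition)]
```

    With a single shared log, appends from different streams interleave physically, yet each partition still reads back its own records in order, which is the separation of partitioning/ordering from the storage substrate that the paper exploits.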