
    Automated tools and techniques for distributed Grid Software: Development of the testbed infrastructure

    Grid technology is becoming increasingly important as the new paradigm for sharing computational resources across different organizations in a secure way. The power of this solution requires the definition of a generic stack of services and protocols, and this is the scope of the various Grid initiatives. As a result of international collaborations on its development, the Open Grid Forum created the Open Grid Services Architecture (OGSA), which aims to define the common set of services that will enable interoperability across different implementations. This master thesis has been developed in this framework, as part of the two European-funded projects ETICS and OMII-Europe. Its main objective is to contribute to the design and maintenance of large distributed development projects with automated tools that implement Software Engineering techniques oriented towards achieving an acceptable level of quality in the release process. Specifically, this thesis develops the testbed concept as a virtual, production-like scenario in which to perform compliance tests. As a proof of concept, the OGSA Basic Execution Service has been chosen in order to implement and execute conformance tests within the ETICS automated testbed framework.

    COIN@AAMAS2015

    COIN@AAMAS2015 is the nineteenth edition of the series, and the fourteen papers included in these proceedings demonstrate the vitality of the community. They provide the grounds for a solid workshop program and what we expect will be a most enjoyable and enriching debate.

    Value Co-Creation in Smart Services: A Functional Affordances Perspective on Smart Personal Assistants

    In the realm of smart services, smart personal assistants (SPAs) have become a popular medium for value co-creation between service providers and users. The market success of SPAs is largely based on their innovative material properties, such as natural language user interfaces, machine learning-powered request handling and service provision, and anthropomorphism. In different combinations, these properties offer users entirely new ways to intuitively and interactively achieve their goals and thus co-create value with service providers. But how does the nature of the SPA shape value co-creation processes? In this paper, we look through a functional affordances lens to theorize about the effects of different types of SPAs (i.e., with different combinations of material properties) on users’ value co-creation processes. Specifically, we collected SPAs from research and practice by reviewing scientific literature and web resources, developed a taxonomy of SPAs’ material properties, and performed a cluster analysis to group SPAs of a similar nature. We then derived 2 general and 11 cluster-specific propositions on how different material properties of SPAs can yield different affordances for value co-creation. With our work, we point out that smart services require researchers and practitioners to fundamentally rethink value co-creation as well as revise affordances theory to address the dynamic nature of smart technology as a service counterpart.

    CBSE: an implementation case study

    Over the last couple of years, the shift towards component based software engineering (CBSE) methods has become a cost-effective way to get an application to implementation stage much earlier. Adoption of Component Based Development methods acknowledges the use of third-party components wherever possible to reduce the cost of software development, shorten the development phase, and provide a richer set of processing options for the end user. The use of these tools is particularly relevant in Web based applications, where commercial off the shelf (COTS) products are so prevalent. However, there are a number of risks associated with the use of component based development methods. This thesis investigates these risks within the context of a software engineering project and attempts to provide a means to minimise, or at least manage, the risk potential when using component based development methods.

    Gender in Agriculture Sourcebook

    The purpose of the Sourcebook is to act as a guide for practitioners and technical staff in addressing gender issues and integrating gender-responsive actions in the design and implementation of agricultural projects and programs. It is addressed not to gender specialists seeking to improve their skills, but rather to technical experts, guiding them in thinking through how to integrate gender dimensions into their operations. The Sourcebook aims to deliver practical advice, guidelines, principles, and descriptions and illustrations of approaches that have worked so far to achieve the goal of effective gender mainstreaming in the agricultural operations of development agencies. It captures and expands the main messages of the World Development Report 2008: Agriculture for Development and is considered an important tool to facilitate the operationalization and implementation of the report's key principles on gender equality and women's empowerment.

    Deploying building information modeling software on Desktop as a Service platform

    Desktop as a Service (DaaS) is a novel cloud computing service that provides cloud-based virtual desktops on demand to end users. The major advantage of DaaS is the capability to quickly deliver control of a full desktop environment to end users from various device platforms such as Android, iOS, MacOS, or Web access, from anywhere and at any time. This master thesis is a proof of concept to demonstrate the practicability of deploying the case company's graphics-intensive building information modeling software, Tekla Structures, on Amazon Web Services' DaaS solution, Amazon WorkSpaces. We investigated the whole deployment process of the software to Amazon WorkSpaces. After clarifying the deployment process, we developed a working prototype consisting of different Amazon Web Services to automate the process. Furthermore, we implemented operational test cases for the prototype and for Tekla Structures running on Amazon WorkSpaces to determine the feasibility of using this novel cloud service for production purposes in the case company. In summary, Amazon WorkSpaces is a highly anticipated DaaS solution that can simplify the desktop and software delivery process to the case company's customers. The prototype developed in the thesis can automate the deployment process and launch new Amazon WorkSpaces to a sufficient extent. Moreover, the evaluation shows that the prototype can handle its automation tasks correctly based on the proposed architectural design, and that Amazon WorkSpaces with the Graphics hardware configuration are capable of operating Tekla Structures as impeccably as physical Windows desktops.
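    The automation the thesis describes centers on provisioning GPU-backed WorkSpaces programmatically. A minimal sketch of that kind of step, assuming the AWS WorkSpaces CreateWorkspaces API: the directory, bundle, and user identifiers below are placeholders, and the request here is only built, not sent.

```python
# Hypothetical sketch of one provisioning step: composing a CreateWorkspaces
# request for a GPU-backed Amazon WorkSpace. All identifiers are placeholders.

def build_workspace_request(directory_id, user_name, bundle_id):
    """Build the request body for the WorkSpaces CreateWorkspaces API."""
    return {
        "Workspaces": [{
            "DirectoryId": directory_id,
            "UserName": user_name,
            "BundleId": bundle_id,
            # Graphics hardware configuration, as the thesis uses for
            # running Tekla Structures on WorkSpaces.
            "WorkspaceProperties": {"ComputeTypeName": "GRAPHICS"},
        }]
    }

request = build_workspace_request("d-0123456789", "bim.user", "wsb-abcdef123")
# A real deployment would then send this request, e.g. via
#   boto3.client("workspaces").create_workspaces(**request)
print(request["Workspaces"][0]["WorkspaceProperties"]["ComputeTypeName"])
```

    Wrapping such calls behind the prototype's own orchestration is what lets new WorkSpaces be launched without manual console work.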

    Extending CRM in the Retail Industry: An RFID-Based Personal Shopping Assistant System

    This paper describes the research and development of a radio frequency identification (RFID)-based personal shopping assistant (PSA) system for retail stores. RFID technology was employed as the key enabler to build a PSA system to optimize operational efficiency and deliver a superior customer shopping experience in retail stores. We show that an RFID-based PSA system can deliver significant results to improve the customer shopping experience and retail store operational efficiency, by increasing customer convenience, providing flexibility in service delivery, enhancing promotional campaign efficiency, and increasing product cross-selling and upselling through a customer relationship management (CRM) system. In this study, an RFID value grid for retail stores is proposed that allows managers to use RFID technology in stores to add value to the shopping experience of their customers. Four propositions are presented as the research agenda for examining the ability of RFID technology to improve the operations management of retail stores.

    Three IT-Business Alignment Profiles: Technical Resource, Business Enabler, and Strategic Weapon

    There is a growing recognition among alignment researchers and IT professionals that one size does not fit all. In this article, we provide an important extension of alignment research that shows three profiles linking IT to different business objectives. We address the need to identify the appropriate types of IT alignment by using a multi-method study including interviews and cases. Two dimensions define the three alignment profiles: internal IT-business integration and external market engagement. The technical resource profile calls for low levels of IT-business integration and IT-market engagement. The business enabler profile deploys IT in some business processes and begins engaging IT with customers and suppliers. The strategic weapon profile uses IT to mobilize and extend the enterprise, which requires extensive IT deployment, both internally and externally. Each profile differs in strategies, criteria, capabilities, and mental models. Importantly, IT decision-makers should not adopt stage-model thinking, which assumes that technical resource profiles naturally progress up the chain. Rather, successful use of IT requires specifying the requisite alignment profile as an initial design decision so that appropriate levels of resource allocation and management involvement occur.

    Contribution to infrastructure convergence between high-performance computing and large-scale data processing

    The amount of data produced, whether in the scientific community or the commercial world, is constantly growing. The field of Big Data has emerged to handle large amounts of data on distributed computing infrastructures. High-Performance Computing (HPC) infrastructures are traditionally used for the execution of compute-intensive workloads. However, the HPC community is also facing an increasing need to process large amounts of data derived from high-definition sensors and large physics apparatuses. The convergence of the two fields, HPC and Big Data, is currently taking place. In fact, the HPC community already uses Big Data tools, which are not always integrated correctly, especially at the level of the file system and the Resource and Job Management System (RJMS). In order to understand how we can leverage HPC clusters for Big Data usage, and what the challenges for HPC infrastructures are, we have studied multiple aspects of the convergence. We initially provide a survey of software provisioning methods, with a focus on data-intensive applications. We contribute a new RJMS collaboration technique called BeBiDa, which is based on 50 lines of code whereas similar solutions use at least 1000 times more. We evaluate this mechanism under real conditions and in a simulated environment with our simulator Batsim. Furthermore, we provide extensions to Batsim to support I/O, and showcase the development of a generic file system model along with a Big Data application model. This allows us to complement the BeBiDa real-conditions experiments with simulations while enabling us to study file system dimensioning and trade-offs. All the experiments and analysis of this work have been done with reproducibility in mind. Based on this experience, we propose to integrate the development workflow and data analysis into the reproducibility mindset, and give feedback on our experiences with a list of best practices.
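    The kind of RJMS/Big Data collaboration the abstract attributes to BeBiDa can be pictured as nodes being handed back and forth between HPC jobs and a pool of Big Data workers. The toy model below illustrates that handoff idea only; it is not the thesis's actual implementation, and the hook names simply mirror the prolog/epilog scripts common to resource managers.

```python
# Toy model of an RJMS/Big Data node handoff: before an HPC job starts, its
# nodes are evicted from the Big Data worker pool; when it ends, they return.
# Illustration only -- not the actual BeBiDa code described in the thesis.

class Cluster:
    def __init__(self, nodes):
        self.big_data_pool = set(nodes)  # nodes currently running Big Data workers
        self.hpc_jobs = {}               # job id -> nodes reserved for that job

    def prolog(self, job_id, nodes):
        """Hook run before an HPC job starts: reclaim its nodes."""
        self.big_data_pool -= set(nodes)
        self.hpc_jobs[job_id] = set(nodes)

    def epilog(self, job_id):
        """Hook run after an HPC job ends: return its nodes to the pool."""
        self.big_data_pool |= self.hpc_jobs.pop(job_id)

cluster = Cluster(["n1", "n2", "n3", "n4"])
cluster.prolog("job42", ["n1", "n2"])
print(sorted(cluster.big_data_pool))  # → ['n3', 'n4'] keep serving Big Data
cluster.epilog("job42")
print(sorted(cluster.big_data_pool))  # → ['n1', 'n2', 'n3', 'n4']
```

    Because HPC jobs always preempt the Big Data workload under this scheme, the Big Data framework only ever consumes otherwise idle nodes.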