
    Characterizing Deep-Learning I/O Workloads in TensorFlow

    The performance of Deep-Learning (DL) computing frameworks relies on the performance of data ingestion and checkpointing. In fact, during training, a considerably large number of relatively small files are first loaded and pre-processed on CPUs and then moved to the accelerator for computation. In addition, checkpoint and restart operations are carried out so that DL computing frameworks can restart quickly from a checkpoint. Because of this, I/O affects the performance of DL applications. In this work, we characterize the I/O performance and scaling of TensorFlow, an open-source programming framework developed by Google and specifically designed for solving DL problems. To measure TensorFlow I/O performance, we first design a micro-benchmark to measure TensorFlow reads, and then use a TensorFlow mini-application based on AlexNet to measure the performance cost of I/O and checkpointing in TensorFlow. To improve checkpointing performance, we design and implement a burst buffer. We find that increasing the number of threads increases TensorFlow bandwidth by a maximum of 2.3x and 7.8x on our benchmark environments. The use of the TensorFlow prefetcher results in a complete overlap of computation on the accelerator and the input pipeline on the CPU, eliminating the effective cost of I/O on overall performance. Using a burst buffer to checkpoint to fast, small-capacity storage and copy the checkpoints asynchronously to slower, large-capacity storage resulted in a performance improvement of 2.6x with respect to checkpointing directly to the slower storage on our benchmark environment.
    Comment: Accepted for publication at PDSW-DISCS 201
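
    The paper's actual micro-benchmark and burst-buffer implementation are not reproduced here. The sketch below illustrates, under assumed file names and paths, the two mechanisms the abstract describes: a tf.data pipeline whose prefetch stage overlaps CPU-side ingestion with accelerator compute, and a checkpoint drained asynchronously from fast local storage to slower shared storage.

```python
# Minimal sketch (not the paper's benchmark) of prefetch-based I/O overlap
# and a burst-buffer-style checkpoint. File globs and paths are hypothetical.
import shutil
import threading
import tensorflow as tf

def parse_example(raw):
    # Hypothetical record layout; real datasets define their own features.
    features = tf.io.parse_single_example(
        raw,
        {"image": tf.io.FixedLenFeature([], tf.string),
         "label": tf.io.FixedLenFeature([], tf.int64)})
    image = tf.image.resize(tf.io.decode_jpeg(features["image"]), [224, 224])
    return image, features["label"]

dataset = (
    tf.data.TFRecordDataset(tf.io.gfile.glob("train-*.tfrecord"))  # many small files
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)       # CPU-side pre-processing
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)  # overlap the input pipeline with accelerator compute
)

def checkpoint_with_burst_buffer(ckpt: tf.train.Checkpoint,
                                 fast_dir="/local/ssd/ckpt",   # hypothetical fast storage
                                 slow_dir="/shared/pfs/ckpt"): # hypothetical slow storage
    path = ckpt.save(fast_dir + "/step")      # fast, synchronous save
    threading.Thread(target=shutil.copytree,  # asynchronous drain to slow storage
                     args=(fast_dir, slow_dir),
                     kwargs={"dirs_exist_ok": True},
                     daemon=True).start()
    return path
```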

    Suitable task allocation in intelligent systems for assistive environments

    The growing need for technological assistance to support people with special needs demands increasingly efficient and better-performing systems. With this aim, this work advances a multi-robot platform that allows the coordinated control of different agents and other elements in the environment, to achieve autonomous behavior driven by the user's needs or wishes. The environment is therefore structured according to the capabilities of each agent, the other elements of the environment, and the dynamic context, in order to generate adequate actuation plans and coordinate their execution.
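
    The paper's allocation mechanism is not shown in the abstract; as a purely illustrative sketch of capability-based task allocation, the following assigns each task to the free agent whose skills best cover its requirements. All names and scores are hypothetical.

```python
# Illustrative greedy allocator: tasks go to the available agent with the
# highest suitability score. Agents, skills, and tasks are made-up examples.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: dict = field(default_factory=dict)  # capability -> proficiency in [0, 1]
    busy: bool = False

def suitability(agent: Agent, required: dict) -> float:
    # Score how well the agent's skills cover the task's requirements.
    return sum(min(agent.skills.get(cap, 0.0), need)
               for cap, need in required.items())

def allocate(tasks: list, agents: list) -> dict:
    plan = {}
    for task in tasks:
        free = [a for a in agents if not a.busy]
        if not free:
            break
        best = max(free, key=lambda a: suitability(a, task["requires"]))
        best.busy = True
        plan[task["id"]] = best.name
    return plan

agents = [Agent("robot_arm", {"grasp": 0.9}), Agent("mobile_base", {"navigate": 0.8})]
tasks = [{"id": "fetch_cup", "requires": {"navigate": 0.5, "grasp": 0.7}}]
print(allocate(tasks, agents))  # {'fetch_cup': 'robot_arm'}
```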

    IAC level "O" program development

    The current status of the IAC development activity is summarized. The listed prototype software and documentation were delivered, and plans were detailed for the development of the Level 1 operational system. The planned end-product IAC is required to support LSST design analysis and performance evaluation, with emphasis on the coupling of the required technical disciplines. In the long term, the IAC effectively provides two distinct features: a specific set of analysis modules (thermal, structural, controls, antenna radiation performance, and instrument optical performance) that function together with the IAC supporting software in an integrated and user-friendly manner; and a general framework whereby new analysis modules can readily be incorporated into the IAC or be allowed to communicate with it.
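
    The IAC's actual interfaces are not described in the abstract; the minimal sketch below illustrates the pluggable-module idea it mentions, with coupled disciplines exchanging results through a shared model. Class and module names are hypothetical.

```python
# Sketch of a plug-in registry where analysis modules read a shared model
# and merge their results back, so coupled disciplines can interact.
from typing import Callable, Dict

class AnalysisFramework:
    """Registry through which analysis modules exchange shared model data."""
    def __init__(self):
        self.modules: Dict[str, Callable[[dict], dict]] = {}
        self.model: dict = {}  # shared design/analysis state

    def register(self, name: str, run: Callable[[dict], dict]) -> None:
        self.modules[name] = run

    def run(self, name: str) -> None:
        # A module's output becomes input for later modules (e.g. a thermal
        # field driving a structural analysis).
        self.model.update(self.modules[name](self.model))

iac = AnalysisFramework()
iac.register("thermal", lambda m: {"temperature_field": "computed"})
iac.register("structural",
             lambda m: {"deformation": f"driven by {m.get('temperature_field')}"})
iac.run("thermal")
iac.run("structural")
print(iac.model)
```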

    Towards a flexible service integration through separation of business rules

    Driven by dynamic market demands, enterprises continuously explore collaborations with others to add value to their services and seize new market opportunities. Enterprise collaboration is facilitated by Enterprise Application Integration and Business-to-Business approaches that employ architectural paradigms such as Service-Oriented Architecture and incorporate technological advancements in networking and computing. However, flexibility remains a major challenge for enterprise collaboration: how can changes in demands and opportunities be reflected in collaboration solutions with minimum time and effort and with maximum reuse of existing applications? This paper proposes an approach towards a more flexible integration of enterprise applications in the context of service mediation. We achieve this by combining goal-based, model-driven, and service-oriented approaches. In particular, we pay special attention to the separation of business rules from the business process of the integration solution. Specifying the requirements as goal models, we separate out those parts that are most likely to evolve over time in the form of business rules. These business rules are then made executable by exposing them as Web services and incorporating them into the design of the business process. Thus, should the business rules change, the business process remains unaffected. Finally, this paper also evaluates the flexibility of our solution in relation to current work in business process flexibility research.
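
    The paper's own services are not shown in the abstract; as a minimal standard-library sketch of the separation it describes, the volatile business rule below lives behind its own HTTP endpoint, so the rule can change without touching the business process that calls it. The endpoint path, port, and the discount rule itself are hypothetical examples.

```python
# Sketch: a business rule exposed as a tiny web service (stdlib only).
# The calling business process only depends on the endpoint's contract.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def discount_rule(order_total: float) -> float:
    # The volatile part: update this rule (or redeploy the service)
    # without modifying the calling business process.
    return 0.10 if order_total > 1000 else 0.0

class RuleService(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = {"discount": discount_rule(body["order_total"])}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

if __name__ == "__main__":
    # A business process would POST {"order_total": ...} to this endpoint.
    HTTPServer(("localhost", 8080), RuleService).serve_forever()
```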

    Quality measures for ETL processes: from goals to implementation

    Get PDF
    Extraction-transformation-loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and a more human-centric approach is needed to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model of ETL process quality characteristics, with quantitative measures for each characteristic, based on the existing literature. Our model captures dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
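
    The paper's quality model and indicators are not given in the abstract; the sketch below only illustrates the general idea of scoring alternative ETL designs against weighted quantitative indicators. Indicator names, weights, and the two alternatives are hypothetical.

```python
# Sketch: compare alternative ETL designs with weighted, normalized indicators.
INDICATORS = {                        # indicator -> (weight, higher_is_better)
    "freshness_min":   (0.4, False),  # minutes of data latency
    "error_rate":      (0.4, False),  # fraction of rejected rows
    "throughput_krps": (0.2, True),   # thousand rows per second
}

ALTERNATIVES = {
    "incremental_load": {"freshness_min": 5,  "error_rate": 0.02, "throughput_krps": 40},
    "full_reload":      {"freshness_min": 60, "error_rate": 0.01, "throughput_krps": 90},
}

def score(measures: dict) -> float:
    total = 0.0
    for name, (weight, higher_better) in INDICATORS.items():
        values = [alt[name] for alt in ALTERNATIVES.values()]
        lo, hi = min(values), max(values)
        norm = (measures[name] - lo) / (hi - lo) if hi > lo else 1.0
        total += weight * (norm if higher_better else 1.0 - norm)
    return total

best = max(ALTERNATIVES, key=lambda a: score(ALTERNATIVES[a]))
print({a: round(score(m), 2) for a, m in ALTERNATIVES.items()}, "->", best)
```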