3,935 research outputs found

    Many-Task Computing and Blue Waters

    Full text link
This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not fit neatly into the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support them. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited to MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
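Since the abstract defines MTC applications as graphs of discrete tasks whose edges are explicit input/output dependencies, a minimal sketch may help fix the idea. The following Python is illustrative only (the task names, fields, and naive dispatcher are assumptions, not from the report); a real MTC runtime would also have to keep per-task dispatch overhead very low, as the report stresses.

```python
# Illustrative sketch: an MTC application as a graph of discrete tasks whose
# edges are explicit input/output (file) dependencies.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    inputs: set = field(default_factory=set)   # files this task consumes
    outputs: set = field(default_factory=set)  # files this task produces

def dispatch_order(tasks):
    """Order tasks so each runs only after all of its inputs exist."""
    produced = set()          # files written by tasks dispatched so far
    pending = list(tasks)
    order = []
    while pending:
        ready = [t for t in pending if t.inputs <= produced]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        for t in ready:
            order.append(t)
            produced |= t.outputs
            pending.remove(t)
    return order

# Example: two short analysis tasks feeding a merge step.
tasks = [
    Task("merge",    inputs={"a.out", "b.out"}, outputs={"final.out"}),
    Task("analyseA", inputs=set(),              outputs={"a.out"}),
    Task("analyseB", inputs=set(),              outputs={"b.out"}),
]
print([t.name for t in dispatch_order(tasks)])  # analyseA, analyseB, merge
```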

    Image Manipulation via Multi-Hop Instructions -- A New Dataset and Weakly-Supervised Neuro-Symbolic Approach

    Full text link
We are interested in image manipulation via natural language text -- a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces. We extend the recently proposed Neuro-Symbolic Concept Learning (NSCL) approach, which has been quite effective for Visual Question Answering (VQA), to the task of image manipulation. Our system, referred to as NeuroSIM, can perform complex multi-hop reasoning over multi-object scenes and requires only weak supervision in the form of annotated data for VQA. NeuroSIM parses an instruction into a symbolic program, based on a Domain-Specific Language (DSL) comprising object attributes and manipulation operations, that guides its execution. We create a new dataset for the task, and extensive experiments demonstrate that NeuroSIM is highly competitive with, or beats, SOTA baselines that make use of supervised data for manipulation. (EMNLP 2023, long paper, main conference.)
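As a hypothetical illustration of the parse-then-execute idea above, the sketch below shows what an instruction parsed into a symbolic program might look like. The operation names, arguments, and toy scene are assumptions for exposition; they are not NeuroSIM's actual DSL, which the paper defines over object attributes and manipulation operations.

```python
# Hypothetical sketch: an instruction parsed into a symbolic program, mixing
# VQA-style query steps with a final manipulation step.
from dataclasses import dataclass

@dataclass
class Op:
    name: str   # e.g. "filter", "relate", "change" (assumed names)
    args: dict

# "Make the small cube behind the red sphere large." might parse to:
program = [
    Op("filter", {"color": "red", "shape": "sphere"}),      # locate anchor object
    Op("relate", {"relation": "behind"}),                   # multi-hop step
    Op("filter", {"size": "small", "shape": "cube"}),       # narrow to target
    Op("change", {"attribute": "size", "value": "large"}),  # manipulation
]

def apply_filter(objects, constraints):
    """The query half of such a DSL behaves like VQA-style filtering."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in constraints.items())]

scene = [{"shape": "sphere", "color": "red"},
         {"shape": "cube", "size": "small"}]
print(apply_filter(scene, program[0].args))  # -> the red sphere
```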

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    Get PDF
This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain virtual network environment. Its scope is mainly the virtualisation of resources within a network and at processing nodes. Virtualisation of the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the virtual network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the contact process with users, run by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
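To make the slice concept concrete, here is a minimal data-model sketch. All names and fields are assumptions for illustration, not FEDERICA's actual model: the point is only that what the user sees as a physical network is a set of virtual resources, each mapped to a partition of a physical resource.

```python
# Illustrative sketch: a slice as a logical partition of physical resources.
from dataclasses import dataclass, field

@dataclass
class VirtualResource:
    name: str           # e.g. "vrouter-1" or "vlink-a" (assumed names)
    kind: str           # "node" (VM / software router) or "link"
    physical_host: str  # physical infrastructure resource it is carved from
    share: float        # fraction of the physical resource allocated

@dataclass
class Slice:
    owner: str
    resources: list = field(default_factory=list)

    def add(self, res: VirtualResource):
        # Isolation: a slice references only its own virtual resources;
        # reproducibility: this mapping can be re-instantiated verbatim.
        self.resources.append(res)

slice_ = Slice(owner="researcher-42")
slice_.add(VirtualResource("vrouter-1", "node", "pop-server-3", 0.25))
slice_.add(VirtualResource("vlink-a", "link", "pop-A<->pop-B", 0.10))
```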

    Hospital Lobby Assistant Robot

    Get PDF
The primary goal of this MQP is to produce a user-friendly robot that assists users in navigating a given area; specifically, this group chose to focus on a hospital setting. To achieve this goal, the group designed a robot that is capable of navigation, possesses an arm with which to open doors, and can be commanded by users from an intuitive on-board UI. A secondary goal of this MQP was to create an extensible platform for future groups. To achieve this goal, the team designed a modular backend system so that new modules can be added to the robot in the future with minimal time spent by future groups on integration.
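One common way to realise such a modular backend is a plugin-style interface where each capability registers with a core dispatcher. The sketch below is an assumption about how this might look (the class and command names are hypothetical, not the MQP's actual design).

```python
# Illustrative sketch: capabilities as modules behind one common interface,
# so future modules can be added without touching the core.
from abc import ABC, abstractmethod

class RobotModule(ABC):
    @abstractmethod
    def handle(self, command: str) -> bool:
        """Return True if this module consumed the command."""

class Navigation(RobotModule):
    def handle(self, command):
        if command.startswith("goto "):
            print(f"navigating to {command[5:]}")
            return True
        return False

class DoorArm(RobotModule):
    def handle(self, command):
        if command == "open door":
            print("actuating arm to open the door")
            return True
        return False

class Backend:
    def __init__(self):
        self.modules = []            # future modules register here

    def register(self, module):
        self.modules.append(module)

    def dispatch(self, command):     # the on-board UI sends commands here
        return any(m.handle(command) for m in self.modules)

backend = Backend()
backend.register(Navigation())
backend.register(DoorArm())
backend.dispatch("goto radiology")   # -> navigating to radiology
```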

    Technology transfer from NASA to targeted industries, volume 2

    Get PDF
This volume contains the following materials to support Volume 1: (1) Survey of Metal Fabrication Industry in Alabama; (2) Survey of Electronics Manufacturing/Assembly Industry in Alabama; (3) Apparel Modular Manufacturing Simulators; (4) Synopsis of a Stereolithography Project; (5) Transferring Modular Manufacturing Technology to an Apparel Firm; (6) Letters of Support; (7) Fact Sheets; (8) Publications; and (9) One Stop Access to NASA Technology Brochure.

    An approach to the machine front-end services for the CIM-Open System Architecture (CIM-OSA)

    Get PDF

    Design and development of a hybrid flexible manufacturing system : a thesis presented in fulfilment of the requirements for the degree of Master of Technology at Massey University

    Get PDF
Volumes 1 and 2 merged. The ability of a manufacturing environment to modify itself and to incorporate a wide variety of heterogeneous multi-vendor devices is a matter of increasing importance in the modern manufacturing enterprise. Many companies in the past have been forced to procure devices which are compatible with existing systems but are less suitable than other, less compatible devices. The inability to integrate new devices into an existing company has made such enterprises dependent on one vendor and has decreased their ability to respond to changes in the market. It is said that typically 60% of orders received in a company are new orders; therefore the ability of a company to reconfigure itself, respond to such demands, and reintegrate itself with new equipment requirements is of paramount importance. In the past, much effort has been directed towards the integration of shop-floor devices in industry, whereby such devices can communicate with each other so that certain tasks can be achieved in a single environment. Until recently, however, much of this was carried out in an improvised fashion with no real structure existing within the factory. This meant that once the factory was set up it became a hard-wired entity, and extensibility and modifiability were difficult indeed. When formalised Computer Integrated Manufacturing (CIM) system architectures were developed, it was found that although they solved many existing shortcomings, they had inherent problems of their own. What became apparent was that a fresh approach was required: one that took the advantages of existing architectures and combined them into a new architecture that not only capitalised on these advantages but also nullified the weaknesses of the existing systems. This thesis outlines the design of a new FMS architecture and its implementation in a factory environment on a PC-based system.
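The usual software answer to the vendor lock-in problem described above is to hide each vendor-specific protocol behind a common adapter interface. The sketch below is an assumption for illustration (the class and device names are hypothetical, not the thesis's actual architecture): swapping or adding a device then means writing one adapter, not re-wiring the cell.

```python
# Illustrative sketch: heterogeneous multi-vendor devices behind one
# adapter interface, decoupling cell control from vendor protocols.
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    @abstractmethod
    def send(self, operation: str) -> None: ...
    @abstractmethod
    def status(self) -> str: ...

class VendorACnc(DeviceAdapter):
    def send(self, operation):
        print(f"[vendor-A protocol] {operation}")  # vendor-specific encoding
    def status(self):
        return "idle"

class VendorBRobot(DeviceAdapter):
    def send(self, operation):
        print(f"[vendor-B protocol] {operation}")
    def status(self):
        return "idle"

class CellController:
    """Coordinates shop-floor devices through the common interface only."""
    def __init__(self, devices):
        self.devices = devices

    def run_job(self, steps):
        for device_name, op in steps:
            self.devices[device_name].send(op)

cell = CellController({"cnc": VendorACnc(), "robot": VendorBRobot()})
cell.run_job([("robot", "load part"),
              ("cnc", "mill profile"),
              ("robot", "unload part")])
```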

    Applications integration for manufacturing control systems with particular reference to software interoperability issues

    Get PDF
The introduction and adoption of contemporary computer-aided manufacturing control systems (MCS) can help rationalise and improve the productivity of manufacturing-related activities. Such activities include product design, process planning and production management with CAD, CAPP and CAPM. However, these systems tend to be domain specific and have generally been designed as stand-alone systems, with little consideration for integration with other manufacturing activities outside the area of immediate concern. As a result, "islands of computerisation" exist which exhibit deficiencies and constraints that inhibit or complicate subsequent interoperation among typical MCS components. Because of these interoperability constraints, contemporary forms of MCS typically yield sub-optimal benefits and do not promote synergy on an enterprise-wide basis. The move towards more integrated manufacturing systems, which requires advances in software interoperability, is becoming a strategic issue. Here the primary aim is to realise greater functional synergy between software components which span engineering, production and management activities and systems; hence information of global interest needs to be shared across conventional functional boundaries between enterprise functions. The main thrust of this research study is to derive a new generation of MCS in which software components can "functionally interact" and share common information through accessing distributed data repositories in an efficient, highly flexible and standardised manner. It addresses problems of information fragmentation and the lack of formalism, as well as issues relating to flexibly structuring interactions between threads of functionality embedded within the various components. The emphasis is on the:

• definition of generic information models which underpin the sharing of common data among production planning, product design, finite capacity scheduling and cell control systems;

• development of an effective framework to manage functional interaction between MCS components, thereby coordinating their combined activities;

• "soft" or flexible integration of the MCS activities over an integrating infrastructure in order to (i) help simplify typical integration problems found when using contemporary interconnection methods for applications integration, and (ii) enable their reconfiguration and incremental development.

In order to facilitate adaptability in response to changing needs, these systems must also be engineered to enable reconfigurability over their life cycle. Thus, within the scope of this research study, a new methodology and software toolset have been developed to formally structure and support implementation, run-time and change processes. The toolset combines the use of IDEF0 (for activity-based or functional modelling), IDEF1X (for entity-attribute-relationship modelling) and EXPRESS (for information modelling). This research includes a pragmatic but effective means of dealing with legacy software, which may often be a vital source of readily available information supporting the operation of the manufacturing enterprise. The pragmatism and medium-term relevance of the research study have attracted particular interest and collaboration from software manufacturers and industrial practitioners. Proof-of-concept studies have been carried out to implement and evaluate the developed mechanisms and software toolset.
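The "generic information model" idea in the first bullet can be illustrated with a small sketch: one shared definition of common data, held in a repository that planning, design, scheduling and cell-control components all read and update, instead of each keeping a private, inconsistent copy. The entity and its fields below are assumptions for exposition, not the models developed in the thesis.

```python
# Illustrative sketch: a shared information model accessed through a common
# data repository, rather than duplicated inside each MCS component.
from dataclasses import dataclass

@dataclass
class WorkOrder:                 # information of global interest
    order_id: str
    part_number: str
    quantity: int
    due_date: str
    status: str = "planned"      # planned -> scheduled -> in_progress -> done

class Repository:
    """Shared repository; components interact through it, not directly."""
    def __init__(self):
        self._orders = {}

    def put(self, order: WorkOrder):
        self._orders[order.order_id] = order

    def get(self, order_id: str) -> WorkOrder:
        return self._orders[order_id]

repo = Repository()
repo.put(WorkOrder("WO-17", "PN-204", 50, "1999-06-01"))

# A finite-capacity scheduler and a cell controller see the same record:
repo.get("WO-17").status = "scheduled"   # the scheduler's update...
print(repo.get("WO-17").status)          # ...is visible to the cell controller
```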