
    Design of Programming Languages for Event-Driven Service Composition

    To adapt to rapidly changing market conditions and increase the return on investment, today's IT solutions usually combine service-oriented architecture (SOA) and event-driven architecture (EDA), which together support reusability, flexibility, and responsiveness of business processes. Programming languages for the development of event-driven service compositions face several main challenges. First, a language should be based on standard service composition languages to remain compatible with SOA-enabling technologies. Second, a language should enable seamless integration of services into event-driven workflows. Third, to overcome the knowledge divide, a language should enable seamless cooperation between application developers with different skills and knowledge. Since WS-BPEL is widely accepted as the standard executable language in SOA, we extended WS-BPEL with support for event-driven workflow coordination. We designed event-handling mechanisms as special-purpose Coopetition services and augmented WS-BPEL with primitives for their invocation. Coopetition services augment SOA with fundamental EDA characteristics: decoupled interactions, many-to-many communication, publish/subscribe messaging, event triggering, and asynchronous operations. To make application development familiar to a wide community of developers, we designed an application-level end-user language on top of WS-BPEL whose primitives for invoking regular Web services and Coopetition services resemble the constructs of typical scripting and coordination languages.
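    The abstract does not give the language's syntax, but the decoupled, many-to-many publish/subscribe interaction that the Coopetition services bring to SOA can be sketched in miniature. Below is a minimal broker in Python; the topic names and handlers are illustrative assumptions, not the actual WS-BPEL primitives described in the work.

```python
from collections import defaultdict

class EventBroker:
    """Minimal publish/subscribe broker: publishers and subscribers are
    decoupled and communicate many-to-many through named topics."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Every subscriber to the topic receives the event; the publisher
        # never learns who (if anyone) consumed it.
        for callback in self._subscribers[topic]:
            callback(event)

# Hypothetical usage: two independent workflows react to the same event.
broker = EventBroker()
received = []
broker.subscribe("order/created", lambda e: received.append(("billing", e)))
broker.subscribe("order/created", lambda e: received.append(("shipping", e)))
broker.publish("order/created", {"id": 42})
```

    The key property shown is the decoupling the abstract names: publishing and subscribing share only a topic name, so services can be added or removed without changing their peers.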

    Single system image: A survey

    Single system image (SSI) is a computing paradigm in which a number of distributed computing resources are aggregated and presented via an interface that maintains the illusion of interaction with a single system. This approach encompasses decades of research using a broad variety of techniques at varying levels of abstraction, from custom hardware and distributed hypervisors to specialized operating system kernels and user-level tools. Existing classification schemes for SSI technologies are reviewed, and an updated classification scheme is proposed. A survey of implementation techniques is provided along with relevant examples. Notable deployments are examined, and insights gained from hands-on experience are summarized. Issues affecting the adoption of kernel-level SSI are identified and discussed in the context of the technology adoption literature.

    Transistor scaled HPC application performance

    We propose a radically new, biologically inspired model of an extreme-scale computer on which application performance automatically scales with the transistor count even in the face of component failures. Today's high performance computers are massively parallel systems composed of potentially hundreds of thousands of traditional processor cores, formed from trillions of transistors, consuming megawatts of power. Unfortunately, increasing the number of cores in a system, unlike increasing clock frequencies, does not automatically translate to application-level improvements. No general auto-parallelization techniques or tools exist for HPC systems. To obtain application improvements, HPC application programmers must manually cope with the challenge of multicore programming and with the significant drop in reliability associated with the sheer number of transistors. Drawing on biological inspiration, the basic premise behind this work is that computation can be dramatically accelerated by integrating a very large-scale, system-wide, predictive associative memory into the operation of the computer. The memory effectively turns computation into a form of pattern recognition and prediction whose results can be used to avoid significant fractions of computation. To be effective, the expectation is that the memory will require billions of concurrent devices akin to biological cortical systems, where each device implements a small amount of storage, computation, and localized communication. As typified by the recent announcement of the Lyric GP5 Probability Processor, very efficient scalable hardware for pattern recognition and prediction is on the horizon. One class of such devices, called neuromorphic, was pioneered by Carver Mead in the 1980s to provide a path for breaking the power, scaling, and reliability barriers associated with standard digital VLSI technology. Recent neuromorphic research examples include work at Stanford, MIT, and the DARPA-sponsored SyNAPSE Project.
    These devices operate transistors as unclocked analog devices organized to implement pattern recognition and prediction several orders of magnitude more efficiently than functionally equivalent digital counterparts. Abstractly, the devices can be used to implement modern machine learning or statistical inference. When exposed to data as a time-varying signal, the devices learn and store patterns in the data at multiple time scales and constantly provide predictions about what the signal will do in the future. This kind of function can be seen as a form of predictive associative memory. In this paper we describe our model and initial plans for exploring it.
    Department of Energy Office of Science (DE-SC0005365); National Science Foundation (1012798)
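    The idea of turning computation into lookup and prediction can be illustrated, in a deliberately degenerate form, by an exact-match associative memory that skips recomputation of previously seen inputs. The sketch below is only an analogy: the paper's proposed memory is a large-scale, analog, temporally predictive device, not a hash table, and every name here is hypothetical.

```python
def expensive_compute(x):
    # Stand-in for a costly HPC kernel.
    return sum(i * i for i in range(x))

class PredictiveMemory:
    """Toy associative memory: remembers input -> result patterns and
    returns a stored result, skipping recomputation, on a repeat."""

    def __init__(self, fn):
        self.fn = fn
        self.patterns = {}
        self.hits = 0

    def __call__(self, x):
        if x in self.patterns:      # prediction available: avoid computing
            self.hits += 1
            return self.patterns[x]
        result = self.fn(x)         # fall back to the actual computation
        self.patterns[x] = result
        return result

mem = PredictiveMemory(expensive_compute)
results = [mem(n) for n in (1000, 2000, 1000, 2000)]
# Second occurrences of each input are served from the memory.
```

    The fraction of computation avoided grows with how predictable the workload is, which is the intuition, at vastly larger scale and with inexact temporal patterns, behind the proposed system-wide memory.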

    Processor allocator for chip multiprocessors

    Chip MultiProcessor (CMP) architectures consisting of many cores connected through a Network-on-Chip (NoC) are becoming the main computing platforms for research and computer centers and, in the future, for commercial solutions. To use CMPs effectively, the operating system is an important factor: it should support a multiuser environment in which many parallel jobs execute simultaneously. This is handled by the processor management system of the operating system, which consists of two components: the Job Scheduler (JS) and the Processor Allocator (PA). The JS is responsible for job scheduling, i.e., selecting the next job to be executed, while the task of the PA is processor allocation, i.e., selecting a set of processors for the job chosen by the JS. In this thesis, the PA architecture for the NoC-based CMP is explored. The idea of implementing the PA in hardware and integrating it on one die together with the processing elements of the CMP is presented. Such an approach requires the PA to be fast as well as area- and energy-efficient, because it is only a small component of the CMP. The architecture of a hardware PA is presented. Its main design factor is the type of processor allocation algorithm employed inside. Thus, all important allocation techniques are intensively investigated and new schemes are proposed. All of them are compared using an experimentation system. The PA driven by the described allocation techniques is synthesized on an FPGA, and its energy and area consumption together with performance parameters are extracted. The proposed CMP uses a NoC as its interconnection architecture. Therefore, all main NoC structures are studied and tested. The most important parameters, such as topology, flow control, and routing algorithms, are presented and discussed. For the proposed NoC structures, an energy model is proposed and described. Finally, the synthesized PAs and NoCs are evaluated in a simulation system in which a NoC-based CMP is created. The experimental environment takes into consideration energy and traffic balance characteristics. As a result, the most efficient PA and NoC for the CMP are presented.
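    As one example of the allocation techniques such a PA must implement, the classic first-fit contiguous allocation of a rectangular submesh on a 2D mesh can be sketched as follows. This is a textbook scheme shown for illustration under assumed data structures, not the hardware algorithm proposed in the thesis.

```python
def first_fit_submesh(mesh, req_w, req_h):
    """First-fit contiguous allocation on a 2D mesh of cores.
    mesh[y][x] is True when core (x, y) is busy. Returns the top-left
    corner of a newly allocated req_w x req_h submesh, or None."""
    H, W = len(mesh), len(mesh[0])
    for y in range(H - req_h + 1):
        for x in range(W - req_w + 1):
            if all(not mesh[y + dy][x + dx]
                   for dy in range(req_h) for dx in range(req_w)):
                for dy in range(req_h):          # mark the submesh busy
                    for dx in range(req_w):
                        mesh[y + dy][x + dx] = True
                return (x, y)
    return None  # may fail from fragmentation even when free cores remain

mesh = [[False] * 4 for _ in range(4)]   # a 4x4 NoC-based CMP, all cores free
a = first_fit_submesh(mesh, 2, 2)        # first job gets a 2x2 block
b = first_fit_submesh(mesh, 3, 2)        # second job goes in the next free submesh
```

    The final comment hints at why the thesis compares many schemes: contiguous strategies keep a job's traffic local on the NoC but suffer external fragmentation, a trade-off a hardware PA must resolve quickly.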

    Research in Structures and Dynamics, 1984

    A symposium on advances and trends in structures and dynamics was held to communicate new insights into physical behavior and to identify trends in solution procedures for structures and dynamics problems. Pertinent areas of concern were (1) multiprocessors, parallel computation, and database management systems; (2) advances in finite element technology; (3) interactive computing and optimization; (4) mechanics of materials; (5) structural stability; (6) dynamic response of structures; and (7) advanced computer applications.

    A data-driven study of operating system energy-performance trade-offs towards system self optimization

    This dissertation is motivated by an intersection of changes occurring in modern software and hardware, driven by increasing application performance and energy requirements while Moore's Law and Dennard Scaling face diminishing returns. To address these challenging requirements, new features are increasingly being packed into hardware to support new offloading capabilities, along with more complex software policies to manage these features. This is leading to an exponential explosion in the number of possible configurations of both software and hardware. For network-based applications, this thesis demonstrates how these complexities can be tamed by identifying and exploiting the characteristics of the underlying system through a rigorous and novel experimental study. It shows how one can simplify this control problem in practical settings by using mechanisms that exploit two fundamental properties of network processing. Using the common request-response network processing model, this thesis finds that controlling (1) the rate of network interrupts and (2) the speed at which each request is then executed enables characterization of the software and hardware in a stable and well-structured manner. Specifically, a network device's interrupt delay feature is used to control the rate of incoming and outgoing network requests, and the processor's frequency setting is used to control the speed of instruction execution. This experimental study, conducted using 340 unique combinations of the two mechanisms across 2 OSes and 4 applications, finds that optimizing these settings in an application-specific way can yield performance improvements of over 2X while improving energy efficiency by over 2X.
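    The structure of such a two-knob sweep (interrupt delay times processor frequency, keeping the most energy-efficient setting that still meets a performance target) can be sketched as below. The measure() function is a synthetic stand-in for running a benchmark at one configuration; its model and numbers are invented for illustration and do not reflect the dissertation's data.

```python
import itertools

def measure(interrupt_delay_us, freq_ghz):
    """Hypothetical stand-in for one experiment: returns
    (throughput in req/s, average power in watts) at this setting."""
    throughput = min(1000.0, 3000.0 * freq_ghz / (1 + interrupt_delay_us / 50))
    power = 20.0 + 15.0 * freq_ghz ** 2 + 200.0 / (1 + interrupt_delay_us)
    return throughput, power

def best_config(delays, freqs, min_throughput):
    """Exhaustively sweep the two control knobs and keep the setting that
    maximizes energy efficiency (req/joule) subject to a throughput floor."""
    best = None
    for d, f in itertools.product(delays, freqs):
        tput, watts = measure(d, f)
        if tput < min_throughput:
            continue                # setting fails the performance target
        eff = tput / watts          # requests completed per joule
        if best is None or eff > best[0]:
            best = (eff, d, f)
    return best

best = best_config(delays=[0, 25, 50, 100], freqs=[1.0, 2.0, 3.0],
                   min_throughput=500.0)
```

    An application-specific optimum emerges because each application shifts where the throughput floor bites, which is why the study sweeps the full grid per application rather than picking one global setting.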