
    Availability modeling and evaluation on high performance cluster computing systems

    Cluster computing has been attracting more and more attention from both industry and academia for its enormous computing power, cost-effectiveness, and scalability. The Beowulf-type cluster, for example, is a typical High Performance Computing (HPC) cluster system. Availability, as a key attribute of the system, needs to be considered at the system design stage and monitored at mission time. Moreover, system monitoring is a must to help identify defects and ensure the system's availability requirement. In this study, novel solutions that provide availability modeling, model evaluation, and data analysis as a single framework have been investigated. The three key components of the investigation are availability modeling, model evaluation, and data analysis. General availability concepts and modeling techniques are briefly reviewed. The system's availability model is divided into submodels based upon their functionalities. Furthermore, an object-oriented Markov model specification has been developed to facilitate availability modeling and runtime configuration. Numerical solutions for Markov models are examined, with emphasis on the uniformization method. Alternative implementations of the method are discussed, in particular an analysis of the cost of an alternative solution for small state-space models and different ways of solving large sparse Markov models. The dissertation also presents a monitoring and data analysis framework responsible for failure analysis and availability reconfiguration. In addition, event logs provided by the Lawrence Livermore National Laboratory have been studied and used to validate the proposed techniques.
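The uniformization method mentioned in the abstract computes the transient state probabilities of a continuous-time Markov chain as a Poisson-weighted sum of powers of a discretized transition matrix. A minimal sketch of the idea, checked against the closed-form availability of a two-state repairable system (the rates, tolerance, and model are illustrative assumptions, not taken from the dissertation):

```python
import math

def uniformize(Q, p0, t, tol=1e-12):
    """Transient distribution of a CTMC at time t via uniformization.
    Q: generator matrix (list of rows), p0: initial distribution."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) * 1.05  # uniformization rate >= max |q_ii|
    # Uniformized DTMC matrix P = I + Q / lam
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    # Poisson(lam*t)-weighted sum of p0 * P^k, truncated once weights sum to ~1
    w = math.exp(-lam * t)          # Poisson pmf at k = 0
    v = p0[:]                       # holds p0 * P^k
    result = [w * x for x in v]
    k, acc = 0, w
    while 1.0 - acc > tol:
        k += 1
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        w *= lam * t / k
        acc += w
        result = [r + w * x for r, x in zip(result, v)]
    return result

# Two-state repairable system: state 0 = up, state 1 = down (rates assumed)
fail, repair = 0.001, 0.1           # failures and repairs per hour
Q = [[-fail, fail], [repair, -repair]]
pt = uniformize(Q, [1.0, 0.0], t=10.0)
# Closed-form point availability A(t) for cross-checking
s = fail + repair
A = repair / s + fail / s * math.exp(-s * 10.0)
print(pt[0], A)
```

The truncation error is bounded by the Poisson tail mass `1 - acc`, which is why the method is popular for availability models: accuracy is controlled directly by `tol`.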

    Programming support for an integrated multi-party computation and MapReduce infrastructure

    We describe and present a prototype of a distributed computational infrastructure and an associated high-level programming language that allow multiple parties to leverage their own computational resources capable of supporting MapReduce [1] operations in combination with multi-party computation (MPC). Our architecture allows a programmer to author and compile a protocol using a uniform collection of standard constructs, even when that protocol involves computations that take place locally within each participant's MapReduce cluster as well as across all the participants using an MPC protocol. The high-level programming language provided to the user is accompanied by static analysis algorithms that allow the programmer to reason about the efficiency of the protocol before compiling and running it. We present two example applications demonstrating how such an infrastructure can be employed. This work was supported in part by NSF Grants #1430145, #1414119, #1347522, and #1012798.
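The pattern of combining local MapReduce work with a cross-party MPC step can be illustrated with additive secret sharing, a common MPC building block: each party runs a plain map/reduce job over its private records, then reveals only shares of its local total. The word-count job, party data, and modulus below are hypothetical illustrations, not the paper's actual language or compiler:

```python
import random
from functools import reduce

MOD = 2**61 - 1  # assumed prime modulus for additive sharing

def local_mapreduce(records):
    """Local phase: a plain map/reduce word count inside one party's cluster."""
    mapped = [1 for rec in records for _ in rec.split()]  # map: one 1 per word
    return reduce(lambda a, b: a + b, mapped, 0)          # reduce: sum

def share(secret, n):
    """Split `secret` into n additive shares mod MOD; any n-1 reveal nothing."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# Three hypothetical parties, each holding private records
parties = [["a b", "c"], ["d e f"], ["g"]]
local_totals = [local_mapreduce(p) for p in parties]       # stay private
# MPC phase: each party distributes shares; each recipient sums its column
all_shares = [share(t, len(parties)) for t in local_totals]
col_sums = [sum(col) % MOD for col in zip(*all_shares)]
global_total = sum(col_sums) % MOD
print(global_total)  # combined word count without revealing any local total
```

Summing the column sums reconstructs only the aggregate, which mirrors the paper's goal of mixing local cluster computation with a privacy-preserving cross-party step.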

    A new perspective on the competitiveness of nations

    The capability of firms to survive and to have a competitive advantage in global markets depends on, amongst other things, the efficiency of public institutions, the excellence of educational, health and communications infrastructures, as well as on the political and economic stability of their home country. The measurement of competitiveness and strategy development is thus an important issue for policy-makers. Despite many attempts to provide objectivity in the development of measures of national competitiveness, there are inherently subjective judgments that involve, for example, how data sets are aggregated and importance weights are applied. Generally, either equal weighting is assumed in calculating a final index, or subjective weights are specified. The same problem also occurs in the subjective assignment of countries to different clusters. Developed as such, the value of such indices may be questioned by users. The aim of this paper is to explore methodological transparency as a viable solution to problems created by existing aggregated indices. For this purpose, a methodology composed of three steps is proposed. To start, a hierarchical clustering analysis is used to assign countries to appropriate clusters. In current methods, country clustering is generally based on GDP. However, we suggest that GDP alone is insufficient for purposes of country clustering. In the proposed methodology, 178 criteria are used for this purpose. Next, relationships between the criteria and classification of the countries are determined using artificial neural networks (ANNs). ANNs provide an objective method for determining the attribute/criteria weights, which are, for the most part, subjectively specified in existing methods. Finally, in our third step, the countries of interest are ranked based on weights generated in the previous step.
    Beyond the ranking of countries, the proposed methodology can also be used to identify those attributes that a given country should focus on in order to improve its position relative to other countries, i.e., to transition from its current cluster to the next higher one.
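The third step, ranking countries by criteria weights, can be sketched as a weighted aggregation of normalized criteria. The three criteria, country values, and weights below are invented placeholders standing in for the 178 criteria and the ANN-derived weights:

```python
# Hypothetical criteria values per country and assumed ANN-derived weights
countries = {
    "A": [0.9, 100, 3.2],
    "B": [0.7, 140, 2.1],
    "C": [0.8, 120, 4.0],
}
weights = [0.5, 0.3, 0.2]  # placeholder output of the ANN weighting step

def normalize(columns):
    """Min-max normalize each criterion so different scales are comparable."""
    out = []
    for col in columns:
        lo, hi = min(col), max(col)
        out.append([(x - lo) / (hi - lo) for x in col])
    return out

names = list(countries)
cols = normalize(list(zip(*countries.values())))  # one list per criterion
scores = {n: sum(w * cols[j][i] for j, w in enumerate(weights))
          for i, n in enumerate(names)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The same score decomposition also supports the paper's second use: the per-criterion terms `w * cols[j][i]` show which attributes pull a country's score down relative to its cluster neighbours.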

    Scars of early non-employment for low educated youth: evidence and policy lessons from Belgium

    This paper investigates whether the early experience of non-employment has a causal impact on workers' subsequent careers. The analysis is based on a sample of low educated youth graduating between 1994 and 2002 in Flanders (Belgium). To correct for selective incidence of non-employment, we instrument early non-employment by the provincial unemployment rate at graduation. Since the instrument is clustered at the province-graduation year level and the number of clusters is small, inference is based on wild bootstrap methods. We find that a one percentage point increase in the proportion of time spent in non-employment during the first two and a half years of the career decreases annual earnings from salaried employment six years after graduation by 10% and annual hours worked by 7% (unconditional effects). Thus, any policy that prevents unemployment in the first place will be beneficial. In addition, curative policies at the micro level may be required, depending on the actual cause of the scar.
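Wild cluster bootstrap inference, used here because the instrument varies at the province-graduation year level with few clusters, resamples by flipping the sign of null-restricted residuals once per cluster. A sketch with toy data, using Rademacher weights on a simple slope test (this is one standard variant of the method, not necessarily the paper's exact specification):

```python
import random

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def wild_cluster_pvalue(x, y, clusters, reps=999, seed=0):
    """Rademacher wild cluster bootstrap p-value for H0: slope = 0."""
    rng = random.Random(seed)
    b_hat = ols_slope(x, y)
    ybar = sum(y) / len(y)
    resid = [yi - ybar for yi in y]          # residuals under the null model
    ids = sorted(set(clusters))
    hits = 0
    for _ in range(reps):
        flip = {g: rng.choice((-1.0, 1.0)) for g in ids}  # one sign per cluster
        y_star = [ybar + flip[g] * e for g, e in zip(clusters, resid)]
        if abs(ols_slope(x, y_star)) >= abs(b_hat):
            hits += 1
    return (hits + 1) / (reps + 1)

# Toy data: few clusters, as in province-by-graduation-year designs
random.seed(1)
clusters = [g for g in range(6) for _ in range(10)]
x = [g * 0.5 + random.gauss(0, 1) for g in clusters]
y = [0.8 * xi + random.gauss(0, 1) for xi in x]
p = wild_cluster_pvalue(x, y, clusters)
print(p)
```

Flipping signs at the cluster level preserves within-cluster dependence, which is what makes the test valid when conventional cluster-robust standard errors are unreliable with so few clusters.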

    Engineering a QoS Provider Mechanism for Edge Computing with Deep Reinforcement Learning

    With the development of new system solutions that integrate traditional cloud computing with the edge/fog computing paradigm, dynamic optimization of service execution has become a challenge because edge computing resources are more distributed and dynamic. How to optimize execution to provide Quality of Service (QoS) in edge computing depends on both the system architecture and the resource allocation algorithms in place. We design and develop a QoS provider mechanism, as an integral component of a fog-to-cloud system, to work in dynamic scenarios by using deep reinforcement learning. We choose reinforcement learning since it is particularly well suited for solving problems in dynamic and adaptive environments where the decision process needs to be frequently updated. We specifically use a Deep Q-learning algorithm that optimizes QoS by identifying and blocking devices that potentially cause service disruption due to dynamicity. We compare the reinforcement learning-based solution with state-of-the-art heuristics that use telemetry data, and analyze their pros and cons.
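The core idea of learning to block disruptive devices can be illustrated with a tabular Q-learning simplification of the Deep Q-learning approach described above. The device names, stability probabilities, penalty, and learning parameters are all assumptions for the sketch:

```python
import random

random.seed(0)
# Hypothetical edge devices and their probability of behaving stably
stability = {"sensor-1": 0.95, "sensor-2": 0.30, "gateway": 0.90}
actions = ("allow", "block")
Q = {(d, a): 0.0 for d in stability for a in actions}
alpha, eps = 0.05, 0.1  # learning rate, exploration rate (assumed)

for step in range(5000):
    dev = random.choice(list(stability))
    # Epsilon-greedy action selection
    if random.random() < eps:
        act = random.choice(actions)
    else:
        act = max(actions, key=lambda a: Q[(dev, a)])
    # Reward: allowed stable behaviour adds QoS; disruption is penalized
    # heavily (the -3.0 penalty is an assumption); blocking is neutral
    if act == "allow":
        reward = 1.0 if random.random() < stability[dev] else -3.0
    else:
        reward = 0.0
    # Single-state Q-learning update (no successor state in this sketch)
    Q[(dev, act)] += alpha * (reward - Q[(dev, act)])

policy = {d: max(actions, key=lambda a: Q[(d, a)]) for d in stability}
print(policy)
```

The learned policy should allow devices whose expected QoS contribution is positive and block the chronically unstable one; the paper's Deep Q-learning replaces this lookup table with a neural network over richer telemetry state.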

    Identification of specific demands on Feed in Dutch Organic Aquaculture

    The evaluation of specific demands for organic feed focussed on feed demands for four fish species which can be cultured in Recirculation Aquaculture Systems (RAS): tilapia, African catfish, shrimp, and turbot. The evaluation of the various feed formulations indicates that there are several ingredients which are common to the four species and will therefore be used for further elaboration on organic availability. These feed ingredients are: fishmeal and oil, corn meal, wheat meal, blood meal, vitamin mix, mineral mix, and antioxidants. Besides the evaluation of the feed ingredients, an inventory was made of the demands set by three key organic standards and legislation documents: European legislation (in prep.), IFOAM, and Naturland. A draft consensus standard containing a synthesis of all demands has been described. The implications of the demands, and the possibilities and bottlenecks for organic feed production, were evaluated for the selected feed ingredients. It was concluded that organic feed production for RAS can meet the general criteria set for feed on GMO material and organic composition. However, for the production of organic feed, a bottleneck will be the necessary requirement of synthetic amino acids for health improvement. The lack of these amino acids in organic feed can result in a potential disadvantage for animal needs. These raw material restrictions will most likely also limit the possibilities for fine-tuning the feed to animal needs.