
    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advances in computer systems have prompted a move from centralized computing, based on timesharing a large mainframe computer, to distributed computing, based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing has raised challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet still powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are presented that use a mainframe host and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) that may need a shared resource (the mainframe) to perform their functions.
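
    As a hedged illustration of the kind of residency criterion such a study might formalize, the sketch below scores an application's affinity for the mainframe host versus a workstation; the factors and weights are purely hypothetical, not the report's actual criteria.

```python
# Hypothetical residency score for deciding where an application should live;
# the factors and weights are illustrative, not the report's actual criteria.

def residency_score(batch_cpu_demand, shared_data_use, realtime_need, interactivity):
    """Each factor is in [0, 1]; a positive score favors the mainframe host,
    a negative score favors a local engineering workstation."""
    score = 0.0
    score += 2.0 * shared_data_use    # heavy use of the shared resource -> host
    score += 1.0 * batch_cpu_demand   # large batch computation -> host
    score -= 1.5 * realtime_need      # tight real-time loops -> local workstation
    score -= 2.0 * interactivity      # interactive response -> local workstation
    return score

score = residency_score(0.8, 0.9, 0.1, 0.2)
print("host" if score > 0 else "workstation")  # -> host (score = 2.05)
```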

    RMS capacity utilisation: product family and supply chain

    The paper contributes to the development of reconfigurable manufacturing systems (RMS) through linkage with external stakeholders, such as customers and suppliers of parts and raw materials, to handle demand fluctuations that necessitate information sharing across the supply chain tiers. The RMS is developed as an integrated supply chain hub for adjusting production capacity using a hybrid methodology of decision trees and Markov analysis. The proposed Markov chain model helps evaluate and monitor the system reconfigurations required by changes of product families, with consideration of the product life cycles. The simulation findings indicate that system productivity and financial performance, in terms of the profit contribution of product-process allocation, vary over configuration stages. The capacity of an RMS with limited product families and/or limited model variants gradually becomes inoperative as upcoming configuration stages approach, due to the end of product life cycles. As a result, it is suggested that reconfiguration be prepared well before the life cycle of an existing product in process ends, so that production can switch from one product family to a new or another family in the production range, subject to its present demand. The proposed model is illustrated through a simplified case study with given product families and transition probabilities.
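
    A minimal sketch of the kind of Markov analysis the paper describes, assuming an invented three-family transition matrix; the real model's states and probabilities come from the case study.

```python
import numpy as np

# Hypothetical transition matrix between product families A, B and C across
# configuration stages; the probabilities are illustrative only.
P = np.array([
    [0.6, 0.3, 0.1],   # A stays in demand, hands over to B, or to C
    [0.0, 0.5, 0.5],   # B stays in demand or hands over to C
    [0.0, 0.0, 1.0],   # C is the newest family; absorbing over this horizon
])

state = np.array([1.0, 0.0, 0.0])  # production starts fully allocated to family A
for stage in range(1, 5):
    state = state @ P  # one reconfiguration stage
    print(f"stage {stage}: A={state[0]:.2f}  B={state[1]:.2f}  C={state[2]:.2f}")

# The dwindling share of family A shows capacity gradually becoming
# inoperative as its life cycle ends, i.e. when to prepare reconfiguration.
```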

    Crummer/Suntrust Portfolio: Analysis and Recommendations [2009]

    The first consideration was determining the amount of risk that should be taken. At present, the portfolio has a defensive position, guarding against a turbulent economy. However, as the portfolio trades only once a year, the question facing the team was when the market will rebound. If we believe the market will rebound between now and May 2010, it would be prudent to position the portfolio more aggressively than it is currently allocated. Conversely, if the market remains uncertain, continuing a defensive position is sensible. The determination of the team was to take a more aggressive position than the portfolio has in its current form, but to approach that added risk judiciously. The team still believes there is a relationship between risk and return; however, the fiduciary responsibility to provide scholarship funding dictates that the team remain conservative. The goal is to position the portfolio for success in the event of a market recovery while also guarding against significant losses in the event of a prolonged recession. To accomplish this, each company in the portfolio has been scrutinized regarding its fundamentals, cash position, dividend policy, and overall risk of bankruptcy. Generally speaking, only companies with strong cash positions and consistent, sustainable dividend policies have been included. Additionally, a z-statistic was computed for the companies in the portfolio to quantify their risk of bankruptcy, and the companies we are keeping remain fundamentally sound. The portfolio will essentially be rebalanced toward market weighting. The key growth sectors in the portfolio will be healthcare and technology, with energy also being overweighted compared to the market. Financials, while potentially selling at a value, are still risky in the team's opinion, and that sector has been cut to slightly below market weight. Finally, the team is reallocating some money from our fixed-income portfolio back into equities. The current portfolio consists of 70% equities and 30% bonds, representing roughly $580,000 in total. The team is shifting 5 percent from bonds to equities for a 75/25 breakdown. The bond portfolio will also have a greater allocation to corporate bonds, moving away from low-yield treasuries. If the market does not begin to recover during the next year, the portfolio is still guarded against significant losses. However, if a recovery does occur and no action is taken this year, there will be no opportunity to trade again until May 2010, missing significant gains that may occur during that time. The suggested allocation takes sensible risks while maintaining the fiduciary responsibility needed in managing this portfolio.
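
    For concreteness, a quick check of the arithmetic behind the proposed shift, using the approximate figures quoted above.

```python
# Dollar impact of moving 5 percentage points from bonds to equities
# on a roughly $580,000 portfolio (figures from the abstract above).
total = 580_000

old_equity, old_bonds = 0.70 * total, 0.30 * total  # $406,000 / $174,000
new_equity, new_bonds = 0.75 * total, 0.25 * total  # $435,000 / $145,000

shift = new_equity - old_equity
print(f"moved from bonds to equities: ${shift:,.0f}")  # -> $29,000
```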

    A Preemption-Based Meta-Scheduling System for Distributed Computing

    This research aims at designing and building a scheduling framework for distributed computing systems with the primary objectives of providing fast response times to users, delivering high system throughput, and accommodating the maximum number of applications in the system. The author claims that these are the most important objectives for scheduling in recent distributed computing systems, especially Grid computing environments. To achieve them, the scheduler employs arbitration of application-level schedules and preemption of executing jobs under certain conditions. In application-level scheduling, the user develops a schedule for his application using an execution model that simulates the application's execution behavior. Since application-level scheduling can seriously impede the performance of the system, the scheduling framework developed in this research arbitrates between the application-level schedules of different applications to provide fair system usage and balance the interests of different applications. In this sense, the framework is not a classical scheduling system but a meta-scheduling system that interacts with the application-level schedulers. Due to the large system dynamics involved in Grid computing systems, the ability to preempt executing jobs becomes a necessity. The meta-scheduler described in this dissertation employs well-defined scheduling policies to preempt and migrate executing applications. To give users the capability to make their applications preemptible, a user-level checkpointing library called SRS (Stop-Restart Software) was also developed in this research. The SRS library differs from many user-level checkpointing libraries in that it allows reconfiguration of applications between migrations; this reconfiguration can be achieved by changing the processor configuration and/or the data distribution. The experimental results provided in this dissertation demonstrate the utility of the meta-scheduling framework for distributed computing systems. Lastly, the meta-scheduling framework was put to practical use by building a Grid computing system called GradSolve. GradSolve is a flexible system that allows application library writers to upload applications with different capabilities into the system. GradSolve is also unique in maintaining traces of application executions and using those traces for subsequent executions of the application.
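
    A rough sketch of the kind of preemption check such a meta-scheduler might apply; the Job fields, thresholds, and decision rule below are invented for illustration and are not the dissertation's actual policies.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int      # higher means more urgent
    progress: float    # fraction of work completed, 0.0 to 1.0
    preemptible: bool  # True if linked against a checkpointing library like SRS

def should_preempt(running: Job, waiting: Job) -> bool:
    """Preempt only checkpointable jobs, and only for a clearly more urgent one."""
    if not running.preemptible:
        return False           # no checkpoint support, cannot migrate safely
    if running.progress > 0.9:
        return False           # nearly finished jobs are left alone
    return waiting.priority > running.priority + 1

running = Job("simulation", priority=2, progress=0.4, preemptible=True)
waiting = Job("urgent-analysis", priority=5, progress=0.0, preemptible=True)
print(should_preempt(running, waiting))  # -> True
```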

    Role based behavior analysis

    Master's thesis, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2009. These days, the success of a corporation hinges on its agility and its ability to adapt to fast-changing conditions. Proactive workers and an agile IT/IS infrastructure that can support them are requirements for this success. Unfortunately, this is not always the case. Users' network requirements may not be fully understood, which slows down relocation and reorganization. Also, without a grasp of the real requirements, the IT/IS infrastructure may not be used efficiently, with waste in some areas and deficiencies in others. Finally, enabling proactivity does not mean full, unrestricted access, since this may leave systems vulnerable to outsider and insider threats. The purpose of the work described in this thesis is to develop a system that can characterize user network behavior. We propose a modular system architecture to extract information from tagged network flows. The process begins by creating user profiles from the users' network flow information. Then, similar profiles are automatically grouped into clusters, creating role profiles.
    Finally, the individual profiles are compared against the roles, and those that differ significantly are flagged as anomalies for further inspection. Given this architecture, we propose a model to describe user and role network behavior, and visualization methods to quickly inspect all the information contained in the model. The system and model were evaluated using a real dataset from a large telecommunications operator. The results confirm that the roles accurately capture similar behavior, and the anomalies found were as expected given the underlying population. With the knowledge the system extracts from the raw data, users' network needs can be better fulfilled and suspicious users flagged for inspection, giving an edge in agility to any company that uses it.
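
    A minimal sketch of the profile/role/anomaly pipeline, assuming invented two-feature profiles and using k-means as a stand-in for whatever clustering the thesis actually employs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented user profiles with two features (e.g. upload and download volume);
# two coherent groups of users plus one user who fits neither group.
rng = np.random.default_rng(0)
profiles = np.vstack([
    rng.normal([10, 1], 0.5, size=(20, 2)),  # "office" users
    rng.normal([1, 10], 0.5, size=(20, 2)),  # "server-heavy" users
    [[9.0, 9.0]],                            # an outlier, index 40
])

# Group similar profiles into role profiles, then flag profiles that sit
# far from every role centroid as anomalies for further inspection.
roles = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
dist = roles.transform(profiles).min(axis=1)  # distance to nearest role
threshold = dist.mean() + 3 * dist.std()
print("anomalous users:", np.where(dist > threshold)[0])  # -> [40]
```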

    Overlapping of Communication and Computation and Early Binding: Fundamental Mechanisms for Improving Parallel Performance on Clusters of Workstations

    This study considers software techniques for improving performance on clusters of workstations and approaches to designing message-passing middleware that facilitate scalable parallel processing. Early binding and overlapping of communication and computation are identified as fundamental approaches for improving parallel performance and scalability on clusters. Currently, cluster computers using the Message-Passing Interface (MPI) for interprocess communication are the predominant choice for building high-performance computing facilities, which makes the findings of this work relevant to a wide audience in high-performance computing and parallel processing. The performance-enhancing techniques studied in this work are presently underutilized in practice because of the lack of adequate support in existing message-passing libraries, and they are also rarely considered by parallel algorithm designers. Furthermore, commonly accepted methods for performance analysis and evaluation of parallel systems omit these techniques and focus primarily on more obvious communication characteristics such as latency and bandwidth. This study provides a theoretical framework for describing early binding and overlapping of communication and computation in models for parallel programming. The framework defines four new performance metrics that enable new approaches to the performance analysis of parallel systems and algorithms. The dissertation provides experimental data that validate the correctness and accuracy of performance analysis based on the new framework. The theoretical results of this analysis can be used by designers of parallel system and application software to assess the quality of their implementations and to predict the effective performance benefits of early binding and overlapping. This work presents MPI/Pro, a new MPI implementation that is specifically optimized for clusters of workstations interconnected with high-speed networks. The implementation emphasizes features such as persistent communication, asynchronous processing, low processor overhead, and independent message progress, which are identified as critical for delivering maximum performance to applications. The experimental section of the dissertation demonstrates the capability of MPI/Pro to facilitate software techniques that yield significant application performance improvements. Specific demonstrations with the Virtual Interface Architecture and TCP/IP over Ethernet are offered.
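
    A small sketch of both techniques, written with mpi4py as a stand-in for the C-level MPI calls an implementation like MPI/Pro optimizes; the buffer size and the toy computation are arbitrary. Early binding is expressed through persistent requests, and the work between Start and Waitall is what the library can overlap with message progress.

```python
# Run with: mpiexec -n 2 python overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # assumes exactly two ranks

sendbuf = np.zeros(1_000_000)
recvbuf = np.empty_like(sendbuf)

# Early binding: endpoints, tags, and buffers are bound once, outside the loop.
send_req = comm.Send_init(sendbuf, dest=peer, tag=0)
recv_req = comm.Recv_init(recvbuf, source=peer, tag=0)

for step in range(10):
    sendbuf[:] = rank + step
    send_req.Start()
    recv_req.Start()
    # Overlap: this computation proceeds while messages progress in the background.
    local = np.sqrt(np.arange(1, 100_000)).sum()
    MPI.Request.Waitall([send_req, recv_req])  # synchronize before buffer reuse
```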

    Enhancing reliability with Latin Square redundancy on desktop grids.

    Computational grids are some of the largest computer systems in existence today. Unfortunately, they are also, in many cases, the least reliable. This research examines the use of redundancy with permutation as a method of improving reliability in computational grid applications. Three primary avenues are explored: development of a new redundancy model for computational grids, the Replication and Permutation Paradigm (RPP); development of grid simulation software for testing RPP against other redundancy methods; and, finally, running a program on a live grid using RPP. An important part of RPP involves distributing data and tasks across the grid in Latin Square fashion. Two theorems, with proofs, regarding Latin Squares are developed. The theorems describe the changing position of symbols between the rows of a standard Latin Square: when a symbol is missing because a column has been removed, the theorems provide a basis for determining the next row and column where the missing symbol can be found. Interesting in their own right, the theorems also have implications for redundancy: they allow one to state the maximum makespan in the face of missing computational hosts when using Latin Square redundancy. The simulator software was developed and used to compare different data and task distribution schemes on a simulated grid. The software clearly showed the advantage of running RPP, which resulted in faster completion times in the face of computational host failures. The Latin Square method also fails gracefully: jobs still complete under massive node failure, at the cost of increased makespan. Finally, an Inductive Logic Programming (ILP) application for pharmacophore search was executed, using the Latin Square redundancy methodology, on a Condor grid in the Dahlem Lab at the University of Louisville Speed School of Engineering. All jobs completed, even in the face of large numbers of randomly generated computational host failures.
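
    One concrete instance in the spirit of those theorems: in the cyclic standard Latin square L[r][c] = (r + c) mod n, the symbol lost when a column is removed reappears one row down and one column to the left. The square size and failed column below are invented for illustration.

```python
# Cyclic (standard) Latin square: square[r][c] = (r + c) mod n.
n = 5
square = [[(r + c) % n for c in range(n)] for r in range(n)]

failed_col = 2  # the removed column models a failed computational host
for row in range(n):
    missing = square[row][failed_col]          # symbol this row loses
    r2, c2 = (row + 1) % n, (failed_col - 1) % n
    assert square[r2][c2] == missing           # ...and where it reappears
    print(f"row {row}: symbol {missing} recoverable at row {r2}, col {c2}")
```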

    Choosing between remote I/O versus staging in distributed environments

    Today, scientific applications and experiments have become increasingly complex and more demanding in terms of their computational and data requirements. The amount of data generated and used has grown at a very rapid rate. Tens or hundreds of terabytes of data for a single application are very common today; petabytes and even exabytes of data will be very common in a few years. One of the major challenges in distributed computing environments is how to access these large datasets remotely over the network. Data staging and remote I/O are the most widely used data access methods for distributed applications. Application developers generally choose one over the other intuitively, without making any scientific comparison specific to their applications, since there is no generic model available that they can use. In this thesis, we develop generic models and set guidelines for application developers to help them choose the most appropriate data access method for their application. We define the parameters that potentially affect the end-to-end performance of distributed applications that need to access remote data. To achieve our goal, we implement a series of synthetic benchmark applications to simulate different data access patterns. We run these benchmark applications on different distributed computing settings with different parameters, such as network bandwidth, server and client capabilities, and data access ratio. We also use different remote I/O protocols to show the importance of the protocol in making a decision. We use regression analysis to develop applicable generic models for comparing different data access methods, and we test our models in a real-life application. The main contribution of our thesis is generic models that can be applied to most data-intensive distributed applications to decide the best data access technique for those applications. Our models give scientists and application developers an opportunity to choose the best data access method before actually running the application.
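
    A hedged sketch of the kind of first-order comparison such models support: estimate end-to-end time for staging versus remote I/O as a function of data access ratio, link bandwidth, and per-operation latency. The functional forms and numbers below are invented, not the thesis' fitted regression models.

```python
def staging_time(dataset_gb, bw_gbps, compute_s):
    """Transfer the whole dataset up front, then compute against local copies."""
    return dataset_gb * 8 / bw_gbps + compute_s

def remote_io_time(dataset_gb, access_ratio, bw_gbps, compute_s, rtt_s, ops):
    """Fetch only the accessed fraction on demand, paying per-operation latency."""
    return dataset_gb * access_ratio * 8 / bw_gbps + compute_s + rtt_s * ops

# 100 GB dataset, 1 Gb/s link, 10 minutes of computation, 10% of the data
# actually touched, 50 ms round trips over 2,000 remote operations.
s = staging_time(100, 1.0, 600)                       # -> 1400 s
r = remote_io_time(100, 0.1, 1.0, 600, 0.05, 2_000)   # -> 780 s
print("stage the data" if s < r else "use remote I/O")
```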

    From manufacturing to design: an essay on the work of Kim B. Clark. Harvard Business School Working Paper 07-057

    In this paper, we describe Clark's research and discuss his contributions to management scholarship and economics. We look at three distinct bodies of work. In the first, Clark (in conjunction with Robert Hayes and Steven Wheelwright) argued that the abandonment by U.S. managers of manufacturing as a strategic function exposed U.S. companies to Japanese competition in terms of the cost and quality of goods. In the second, conducted with Wheelwright, Bruce Chew, Takahiro Fujimoto, Kent Bowen and Marco Iansiti, Clark made the case that product development could be managed in new ways that would lead to significant competitive advantage for firms. Finally, in work conducted with Abernathy, Rebecca Henderson and Carliss Baldwin, Clark placed product and process designs at the center of his explanation of how innovation determines the structure and evolution of industries.