
    Cache Equalizer: A Cache Pressure Aware Block Placement Scheme for Large-Scale Chip Multiprocessors

    This paper describes Cache Equalizer (CE), a novel distributed cache management scheme for large-scale chip multiprocessors (CMPs). Our work is motivated by the large asymmetry in cache set usage. CE decouples the physical locations of cache blocks from their addresses in order to reduce misses caused by destructive interference. Temporal pressure at the on-chip last-level cache is continuously collected at a group granularity (a group comprises several cache sets) and periodically recorded at the memory controller to guide placement. An incoming block is then placed in the cache group that exhibits the minimum pressure. CE provides Quality of Service (QoS) by robustly offering better performance than the baseline shared NUCA cache. Simulation results using a full-system simulator demonstrate that CE outperforms shared NUCA caches by an average of 15.5%, and by as much as 28.5%, for the benchmark programs we examined. Furthermore, the evaluations show that CE outperforms related CMP cache designs.
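
    A minimal sketch of the pressure-aware placement idea described above: pressure counters are kept per cache-set group, an incoming block is mapped to the least-pressured group, and the counters are periodically aged. The class name, the decay step, and the counter update rule are illustrative assumptions, not the paper's actual mechanism or parameters.

```python
# Hypothetical sketch of pressure-aware block placement (not the paper's code).

class PressureAwarePlacer:
    def __init__(self, num_groups: int, decay: float = 0.5):
        self.pressure = [0.0] * num_groups   # temporal pressure per cache-set group
        self.remap = {}                       # block address -> chosen group
        self.decay = decay                    # assumed aging factor per epoch

    def place(self, block_addr: int) -> int:
        """Place an incoming block in the group currently showing minimum pressure."""
        group = min(range(len(self.pressure)), key=self.pressure.__getitem__)
        self.remap[block_addr] = group        # decouple block location from address
        self.pressure[group] += 1.0
        return group

    def access(self, block_addr: int) -> int:
        """Record an access; a miss triggers a fresh pressure-aware placement."""
        if block_addr in self.remap:
            group = self.remap[block_addr]
            self.pressure[group] += 1.0       # accesses raise the group's pressure
            return group
        return self.place(block_addr)

    def end_epoch(self):
        """At an epoch boundary, age the pressure counters."""
        self.pressure = [p * self.decay for p in self.pressure]


if __name__ == "__main__":
    placer = PressureAwarePlacer(num_groups=4)
    for addr in [0x10, 0x20, 0x10, 0x30, 0x40, 0x20]:
        print(hex(addr), "-> group", placer.access(addr))
```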

    Life Cycle Cost Analysis of Pavements: State-of-the-Practice

    Life Cycle Cost Analysis (LCCA) is performed by transportation agencies in the design phase of transportation projects in order to implement more economical strategies, to support decision processes in pavement type selection (flexible or rigid), and to assess the relative costs of different rehabilitation options within each pavement type. However, most of the input parameters are inherently uncertain. To implement the LCCA process in a reliable and trustworthy manner, this uncertainty must be addressed. This thesis summarizes thorough research aimed at improving the existing LCCA approach for the South Carolina Department of Transportation (SCDOT) by developing a better understanding of the parameters used in the analysis. To achieve this, a comprehensive literature review was first conducted to collect information from various academic and industrial sources. Two web surveys were then conducted to capture the state of the practice of LCCA across the 50 U.S. Departments of Transportation (DOTs) and Canada. The questionnaires were designed to gauge the level of LCCA activity in different states and to solicit information on the specific approaches each state takes for pavement type selection. The survey responses were analyzed to identify trends in the various input parameters that feed into the LCCA process, and the results were combined with additional resources to analyze the challenges of implementing the LCCA approach. The survey results showed that LCCA is widely used among transportation agencies; however, the extent of the analysis varies widely, and these variations are presented here.
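
    A minimal, deterministic sketch of the present-worth calculation at the core of an LCCA comparison between pavement alternatives. The costs, rehabilitation timings, and discount rate below are invented illustrative numbers, not SCDOT values, and the sketch ignores the uncertainty treatment discussed in the thesis.

```python
# Toy life-cycle cost comparison: initial construction plus discounted
# future rehabilitation activities (all figures are assumptions).

def present_worth(cost: float, year: int, discount_rate: float) -> float:
    """Discount a future cost back to year 0: PW = C / (1 + r)^n."""
    return cost / (1.0 + discount_rate) ** year

def life_cycle_cost(initial_cost: float, activities, discount_rate: float) -> float:
    """Initial cost plus the present worth of each (year, cost) rehabilitation."""
    return initial_cost + sum(present_worth(c, y, discount_rate) for y, c in activities)

if __name__ == "__main__":
    r = 0.04  # assumed real discount rate
    flexible = life_cycle_cost(1_000_000, [(12, 250_000), (24, 250_000)], r)
    rigid    = life_cycle_cost(1_400_000, [(25, 150_000)], r)
    print(f"flexible pavement LCC: ${flexible:,.0f}")
    print(f"rigid pavement LCC:    ${rigid:,.0f}")
```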

    Integrating personal media and digital TV with QoS guarantees using virtualized set-top boxes: architecture and performance measurements

    Nowadays, users consume a great deal of functionality in their homes that comes from service providers located on the Internet. While the home network is typically shielded off as much as possible from the 'outside world', the supplied services could be greatly extended if it were possible to use local information. In this article, an extended service is presented that integrates the user's multimedia content, scattered over multiple devices in the home network, into the Electronic Program Guide (EPG) of the digital TV. We propose to virtualize the set-top box by migrating all functionality except user interfacing to the service provider infrastructure. The media in the home network is discovered through standard Universal Plug and Play (UPnP), whose QoS functionality is exploited to ensure high-quality playback over the home network, which is essentially outside the service provider's control. The performance of the subsystems is analysed.
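
    A minimal sketch of the UPnP discovery step mentioned above: an SSDP M-SEARCH multicast asking for MediaServer devices on the home network and collecting their description URLs. This only illustrates discovery; the article's QoS setup and EPG integration are not shown, and the timeout and search target are assumptions.

```python
# Hypothetical SSDP discovery of UPnP AV MediaServer devices.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",  # UPnP AV media servers
    "", "",
])

def discover_media_servers(timeout: float = 3.0):
    """Broadcast an SSDP M-SEARCH and collect responding device description URLs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), (SSDP_ADDR, SSDP_PORT))
    locations = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            for line in data.decode(errors="replace").splitlines():
                if line.lower().startswith("location:"):
                    locations.append((addr[0], line.split(":", 1)[1].strip()))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return locations

if __name__ == "__main__":
    for host, location in discover_media_servers():
        print(host, "->", location)
```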

    Modeling the trade-off between diet costs and methane emissions: A goal programming approach

    Enteric methane emission is a major greenhouse gas source from livestock production systems worldwide. Dietary manipulation may be an effective emission-reduction tool; however, the associated costs may preclude its use as a mitigation strategy. Several studies have identified dietary manipulation strategies for the mitigation of emissions, but studies examining the costs of reducing methane by manipulating diets are scarce. Furthermore, the trade-off between the increase in dietary costs and the reduction in methane emissions has only been determined for a limited number of production scenarios. The objective of this study was to develop an optimization framework for the joint minimization of dietary costs and methane emissions based on the identification of a set of feasible solutions for various levels of trade-off between emissions and costs. Such a set of solutions was created by specifying a systematic grid of goal programming weights, enabling the decision maker to choose the solution that achieves the desired trade-off level. Moreover, the model enables the calculation of emission-mitigation costs by imputing a trading value for methane emissions. Emission imputed costs can be used in emission-unit trading schemes, such as cap-and-trade policy designs. An application of the model using data from lactating cows from dairies in the California Central Valley is presented to illustrate the use of model-generated results in the identification of optimal diets when reducing emissions. The optimization framework is flexible and can be adapted to jointly minimize diet costs and other potential environmental impacts (e.g., nitrogen excretion); dietary costs, feed nutrient composition, and animal nutrient requirements can also be altered to accommodate various production systems.
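
    A minimal sketch of the weighted goal-programming idea: sweep a grid of weights trading off diet cost against methane emissions, solving a small least-cost ration LP at each weight so the decision maker can pick the desired trade-off point. The feed data, nutrient requirements, and emission factors below are invented for illustration and are not the study's dataset or model.

```python
# Toy weight-grid trade-off between ration cost and enteric methane.
import numpy as np
from scipy.optimize import linprog

# columns: corn silage, alfalfa hay, grain mix (hypothetical feeds)
cost    = np.array([0.06, 0.12, 0.18])   # $/kg DM (assumed)
methane = np.array([21.0, 24.0, 14.0])   # g CH4 per kg DM (assumed factors)
energy  = np.array([6.2, 5.6, 8.0])      # MJ NEL per kg DM (assumed)
protein = np.array([80., 180., 160.])    # g CP per kg DM (assumed)

dm_intake   = 22.0      # kg DM/day for a lactating cow (assumed)
energy_req  = 140.0     # MJ NEL/day (assumed)
protein_req = 2800.0    # g CP/day (assumed)

A_ub = np.vstack([-energy, -protein])            # nutrient minimums as <= constraints
b_ub = np.array([-energy_req, -protein_req])
A_eq = np.ones((1, 3))                           # total dry-matter intake is fixed
b_eq = np.array([dm_intake])

for w in np.linspace(0.0, 1.0, 5):               # systematic grid of goal weights
    objective = w * cost + (1.0 - w) * methane / 1000.0   # scale CH4 to kg
    res = linprog(objective, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 3)
    x = res.x
    print(f"w={w:.2f}  cost=${cost @ x:6.2f}/d  CH4={methane @ x:6.0f} g/d  "
          f"mix(kg DM)={np.round(x, 1)}")
```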

    Efficiency implications of open source commonality and reuse

    This paper analyzes the reuse choices made by open source developers and relates them to cost efficiency. We make a distinction between the commonality among applications and the actual reuse of code: the former represents the similarity between the requirements of different applications and, consequently, the functionalities that they provide; the latter is the actual incorporation of existing code. No application can be maintained forever. A fundamental reason for the need for periodic replacement of code is the exponential growth of costs with the number of maintenance interventions. Intuitively, this is due to the increasing complexity of software, which grows in both size and coupling among modules. The paper measures commonality, reuse, and development costs of 26 open-source projects, for a total of 171 application versions. Results show that reuse choices in open-source contexts are not cost efficient. Developers tend to reuse code from the most recent version of applications, even if their requirements are closer to previous versions. Furthermore, the latest version of an application is always the one that has incurred the highest number of maintenance interventions. Accordingly, the development cost per new line of code is found to grow with reuse.
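
    A toy sketch of the cost argument above: if the cost of the next maintenance intervention grows exponentially with the number of interventions already applied, then reusing the most-maintained (latest) version can cost more than adapting an older, less-modified version even when the older version needs more changes. The base cost, growth rate, and intervention counts are assumptions for illustration, not the paper's measured model.

```python
# Hypothetical exponential maintenance-cost model (illustrative parameters only).

def intervention_cost(n_previous: int, base: float = 1.0, growth: float = 1.3) -> float:
    """Cost of the (n_previous + 1)-th maintenance intervention on a code base."""
    return base * growth ** n_previous

def cost_of_adapting(prior_interventions: int, changes_needed: int) -> float:
    """Total cost of applying `changes_needed` further interventions."""
    return sum(intervention_cost(prior_interventions + k) for k in range(changes_needed))

if __name__ == "__main__":
    # Latest version: heavily maintained, but close to the new requirements.
    # Older version: less maintained, but needs more changes to match requirements.
    print("adapt latest (20 prior interventions, 3 changes):",
          round(cost_of_adapting(20, 3), 1))
    print("adapt older  (5 prior interventions, 8 changes): ",
          round(cost_of_adapting(5, 8), 1))
```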

    New technologies. Vocational Training No. 11, June 1983


    The impact of tool allocation policies on selected performance measures for flexible manufacturing systems

    The allocation of cutting tools to machines is an important concern for managers of flexible manufacturing systems. This research was conducted to study the impact of four tool allocation strategies on five performance measures, contingent upon three part-type selection rules. In addition, the average tool inventory and tool consumption rates were evaluated for each tool policy and selection rule. The four tool allocation policies consisted of bulk exchange, tool migration, tool sharing, and resident tooling. The five performance measures consisted of the average flowtime of parts, the average machine utilization, the robot utilization, the percentage of parts late, and the mean lateness. Simulation was used to study the impact of the tooling strategies on the performance measures. Analysis of variance procedures, graphical comparison charts, and Bonferroni multiple comparison tests were used to analyze the data. The results show that clustering tools, based on group technology, is the preferred method for allocating cutting tools to machines, and tool sharing was the preferred tool allocation strategy. Also, tool allocation policies that require tool changes after a part's machining cycle increase part flowtimes, because parts are delayed in the system by the additional tool changing activities. In addition, tool allocation strategies based on tool clustering methods reduced the utilization of resources. The results of this study show that bulk exchange produced lower tool consumption rates per production period during the early periods of production, while during the middle and later production periods, tool sharing produced lower tool consumption rates. This study concluded that grouping tools based on the commonality of tool usage results in a lower average inventory per production period. Furthermore, this study showed that the uneven distribution of part-types to machines under tool clustering methods affected the average mean lateness of part-types. Moreover, no part-type selection rule outperformed the others on all performance measures. The earliest due date rule produced the lowest mean lateness values for all tool policies. Tool policies that produce low mean flowtimes may not produce low mean lateness values. Managerial implications are discussed with respect to the findings from this study. Further research is needed to evaluate flexible manufacturing systems that include different part-type selection rules, machine failures, and hybrids of tool allocation strategies.
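
    A minimal sketch of the earliest-due-date (EDD) part-type selection rule mentioned above, together with mean lateness as one of the performance measures. The processing times and due dates are invented single-machine examples; the study itself used a full FMS simulation with robots, tool policies, and multiple machines.

```python
# Toy comparison of FIFO vs. earliest-due-date sequencing on mean lateness.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    processing_time: float
    due_date: float

def mean_lateness(order):
    """Mean lateness (completion time minus due date) for a processing sequence."""
    clock, total = 0.0, 0.0
    for p in order:
        clock += p.processing_time
        total += clock - p.due_date
    return total / len(order)

if __name__ == "__main__":
    parts = [Part("A", 4, 10), Part("B", 2, 5), Part("C", 6, 18), Part("D", 3, 8)]
    edd = sorted(parts, key=lambda p: p.due_date)   # earliest due date first
    print("FIFO mean lateness:", mean_lateness(parts))
    print("EDD  mean lateness:", mean_lateness(edd))
```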

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures