
    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency shrinks with each technology generation. For five decades, Moore’s law and Dennard scaling worked in tandem to deliver exponential performance gains, first through single-core and later through multi-core designs. At recent small technology nodes, however, Dennard scaling has broken down: continued multi-core growth demands exponentially increasing thermal design power to achieve a linear performance increase. This trend hits a power wall that raises the amount of dark or dim silicon on future multi-/many-core chips. Furthermore, growing transistor counts per chip increase susceptibility to internal defects and aging phenomena, both exacerbated by high thermal density, so monitoring and managing chip reliability before and after its activation is becoming a necessity. The approaches and experimental investigations in this thesis follow two main tracks, which are eventually combined: 1) power awareness and 2) reliability awareness in the dark silicon era. In the first track, the goal is to maximize returns on key design metrics, such as performance and throughput, while honoring a maximum power limit. We show that, with careful power management, the traditional benefits of Moore’s law can still be obtained in the dark silicon era, albeit to a lesser degree. In the reliability track, we show that dark silicon can be treated as an opportunity and exploited for several benefits, namely lifetime extension and online testing: it can be used to guarantee that system lifetime stays above a given target and to apply low-cost, non-intrusive online testing to the cores. Finally, two case-study approaches combine power and reliability awareness. The first demonstrates how chip reliability can serve as a supplementary metric for power-reliability management; the second trades off workload performance against system reliability while simultaneously honoring a given power budget and a target reliability.
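    To make the power-awareness track concrete, the following minimal sketch (not from the thesis; all per-core power and performance figures are invented) shows the basic dark/dim silicon trade-off: given a chip-level power budget, choose how many cores to activate and at which frequency so that throughput is maximized while the budget is honored.

```python
# Hypothetical frequency levels: (frequency_ghz, watts_per_core, perf_per_core).
# These numbers are illustrative assumptions, not measurements from the thesis.
FREQ_LEVELS = [(2.0, 4.0, 1.00), (1.5, 2.4, 0.80), (1.0, 1.2, 0.55)]

def best_configuration(num_cores: int, power_budget_w: float):
    """Return (active_cores, freq_ghz, throughput) maximizing aggregate
    throughput under the chip power budget; cores that do not fit under
    the budget stay dark."""
    best = (0, 0.0, 0.0)
    for freq, watts, perf in FREQ_LEVELS:
        active = min(num_cores, int(power_budget_w // watts))
        throughput = active * perf
        if throughput > best[2]:
            best = (active, freq, throughput)
    return best

if __name__ == "__main__":
    cores, freq, tput = best_configuration(num_cores=64, power_budget_w=100.0)
    print(f"{cores}/64 cores at {freq} GHz -> throughput {tput:.1f} "
          f"({64 - cores} cores dark or dim)")
```

    With these illustrative numbers, running all 64 cores dim at 1.0 GHz outperforms running 25 cores at full frequency with the rest dark, which is exactly the kind of return the power-awareness track seeks to maximize.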

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict future behavior opens optimization opportunities that could not otherwise be exploited. For example, an oracle that could predict a given application's behavior on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate reactively. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction ranges from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. It can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we discuss the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The survey is far from comprehensive, but it will help readers interested in employing prediction to optimize multicore processor systems.
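    As a concrete illustration of the motivating example above, the sketch below pairs a simple exponentially weighted moving average (EWMA) predictor of CPU utilization with a DVFS mode table; the predictor choice, thresholds, and mode names are illustrative assumptions, not techniques prescribed by the survey.

```python
# Mode table: (utilization ceiling, mode name). Thresholds are assumptions.
DVFS_MODES = [(0.6, "low"), (0.85, "medium"), (1.01, "high")]

class EwmaPredictor:
    """Predict next-interval CPU utilization via an exponential moving average."""
    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha
        self.estimate = 0.0

    def update(self, observed_util: float) -> float:
        self.estimate = self.alpha * observed_util + (1 - self.alpha) * self.estimate
        return self.estimate

def select_mode(predicted_util: float) -> str:
    """Pick the lowest DVFS mode whose ceiling covers the predicted load."""
    for ceiling, mode in DVFS_MODES:
        if predicted_util < ceiling:
            return mode
    return "high"

predictor = EwmaPredictor()
for util in [0.2, 0.3, 0.7, 0.9, 0.4]:   # observed utilization per interval
    mode = select_mode(predictor.update(util))
    print(f"observed={util:.2f} -> predicted={predictor.estimate:.2f}, mode={mode}")
```

    The proactive element is that the mode is chosen from the *predicted* next-interval load rather than the last observed one, so the governor can raise frequency before a performance shortfall occurs.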

    Autonomous Acquisition of Natural Situated Communication

    An important part of human intelligence, both historically and operationally, is our ability to communicate. We learn how to communicate, and maintain our communicative skills, in a society of communicators – a highly effective way to reach and maintain proficiency in this complex skill. Principles that might allow artificial agents to learn language this way are incompletely understood at present – the multi-dimensional nature of socio-communicative skill is beyond every machine learning framework proposed so far. Our work begins to address this challenge by proposing a way for observation-based machine learning of natural language and communication. Our framework can learn complex communicative skills with minimal up-front knowledge. The system learns by incrementally producing predictive models of causal relationships in observed data, guided by goal-inference and reasoning using forward-inverse models. We present results from two experiments in which our S1 agent learns human communication by observing two humans interacting in a real-time TV-style interview, using multimodal communicative gesture and situated language to talk about recycling various materials and objects. S1 learns complex multimodal language and communicative acts: a vocabulary of 100 words forming natural sentences with relatively complex structure, including manual deictic reference and anaphora. S1 is seeded only with high-level information about the goals of the interviewer and interviewee, and a small ontology; no grammar or other linguistic information is provided a priori. The agent learns the pragmatics, semantics, and syntax of complex spoken utterances and gestures from scratch, by observing the humans compare and contrast the cost and pollution of recycling aluminum cans, glass bottles, newspaper, plastic, and wood. After 20 hours of observation, S1 can conduct an unscripted TV-style interview with a human, in the same style, without making mistakes.
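    The following is a drastically simplified sketch of the observation-based learning idea only: an agent that watches (context, action, outcome) triples and incrementally builds a predictive model of which outcome tends to follow which action. The real S1 additionally performs goal-inference and forward-inverse-model reasoning over multimodal events; the toy strings and the frequency-based model here are assumptions for illustration.

```python
from collections import Counter, defaultdict

class CausalModel:
    """Incrementally learned map from (context, action) to likely outcomes."""
    def __init__(self):
        self.outcomes = defaultdict(Counter)   # (context, action) -> outcome counts

    def observe(self, context: str, action: str, outcome: str) -> None:
        self.outcomes[(context, action)][outcome] += 1

    def predict(self, context: str, action: str) -> str | None:
        seen = self.outcomes[(context, action)]
        return seen.most_common(1)[0][0] if seen else None

model = CausalModel()
# Toy stand-ins for the multimodal interview events S1 actually observes.
model.observe("interview", "point_at(can)", "say('aluminum')")
model.observe("interview", "point_at(can)", "say('aluminum')")
model.observe("interview", "point_at(bottle)", "say('glass')")
print(model.predict("interview", "point_at(can)"))   # -> say('aluminum')
```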

    An optimization-based control strategy for energy efficiency of discrete manufacturing systems

    To reduce overall energy consumption and avoid high power peaks during operation of manufacturing systems, an optimization-based controller for selectively switching peripheral devices on/off is proposed for a test bench that emulates the energy consumption of a periodic system. First, energy consumption models of the test-bench devices are obtained from data using subspace identification methods. Next, a control strategy is designed based on optimization and a receding-horizon approach, considering the energy consumption models, operating constraints, and the real processes performed by the peripheral devices. The resulting control policy, based on dynamical models of the peripheral devices, reduces the energy consumption of the manufacturing system without sacrificing productivity. The proposed strategy is then validated on the test bench and compared to a typical rule-based control scheme commonly used in such manufacturing systems. The results show that reductions of nearly 7% can be achieved, improving energy efficiency by minimizing the energy costs related to purchased nominal power.
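    A minimal sketch of the receding-horizon on/off idea follows: at each step it enumerates the switching schedules over a short horizon, keeps only those that respect a peak-power cap and a minimum buffer level, and applies the first move of the cheapest feasible schedule. The device model (power draws, fill/drain rates) is hypothetical; the paper instead identifies its device models from test-bench data via subspace identification.

```python
from itertools import product

HORIZON = 4
POWER_ON, POWER_OFF = 3.0, 0.2      # kW drawn by the peripheral device (assumed)
BASE_LOAD = 0.5                     # kW drawn by the main process (assumed)
PEAK_CAP = 4.0                      # kW allowed at any step, base load included
FILL, DRAIN = 2.0, 1.0              # buffer units added when on / consumed per step
BUFFER_MIN, BUFFER_MAX = 1.0, 10.0

def plan(buffer_level: float) -> int:
    """Return the on/off decision (1/0) for the next step."""
    best_cost, best_first = float("inf"), 1   # default to 'on' if nothing feasible
    for schedule in product((0, 1), repeat=HORIZON):
        level, cost, feasible = buffer_level, 0.0, True
        for on in schedule:
            power = BASE_LOAD + (POWER_ON if on else POWER_OFF)
            level = min(BUFFER_MAX, level + (FILL if on else 0.0)) - DRAIN
            if power > PEAK_CAP or level < BUFFER_MIN:
                feasible = False
                break
            cost += power                     # proxy for energy cost per step
        if feasible and cost < best_cost:
            best_cost, best_first = cost, schedule[0]
    return best_first

level = 5.0
for step in range(6):                         # receding horizon: replan every step
    on = plan(level)
    level = min(BUFFER_MAX, level + (FILL if on else 0.0)) - DRAIN
    print(f"step {step}: device {'on' if on else 'off'}, buffer {level:.1f}")
```

    The policy keeps the device off while the buffer can cover the process demand and switches it on just in time, which is the behavior that beats an always-on or simple threshold rule in the paper's comparison.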

    Development Of A Data Concept For An Algorithm To Enable Relay Traffic For Trucks

    In road haulage, transports are interrupted so that truck drivers comply with driving and rest times. On long-distance routes, these interruptions considerably increase transport time. They can be avoided by so-called relay traffic: a vehicle (e.g. a semi-trailer) is handed over to a rested driver at the end of the driving time. This type of transport, however, requires a certain company size, and transport companies in Germany have 11 employees on average, so intra-company relay traffic is not economically viable for most of them. To organize relay transport across forwarding companies, long-distance routes need to be split into partial routes that can be divided between freight forwarders and carriers. This paper presents a data concept for an algorithm that finds the best possible route sections between a previously defined start and end point. The data concept includes order-specific data, forwarder-specific data, real-time traffic data, and geographical data, as well as data from freight-forwarding software and telematics, as the basis for the route-sectioning algorithm. Different data sources, external services, and logistics systems are analyzed and evaluated, showing which data are needed and how best to select and derive them from the various sources.
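    To illustrate the route-sectioning problem the data concept feeds, the sketch below greedily splits a route at candidate handover points so that no section exceeds the 4.5 h continuous-driving limit of EU Regulation 561/2006. The waypoints and drive times are invented, and the paper's algorithm would additionally weigh forwarder-specific, traffic, and telematics data rather than driving time alone.

```python
from dataclasses import dataclass

MAX_DRIVE_H = 4.5    # continuous driving allowed before a mandatory break

@dataclass
class Leg:
    to_point: str    # candidate handover location at the end of this leg
    drive_h: float   # estimated driving time for the leg

def section_route(legs: list[Leg]) -> list[list[str]]:
    """Group legs into sections, handing the trailer over before the
    accumulated driving time would exceed MAX_DRIVE_H."""
    sections, current, elapsed = [], [], 0.0
    for leg in legs:
        if elapsed + leg.drive_h > MAX_DRIVE_H and current:
            sections.append(current)          # hand over to a rested driver
            current, elapsed = [], 0.0
        current.append(leg.to_point)
        elapsed += leg.drive_h
    if current:
        sections.append(current)
    return sections

route = [Leg("Kassel", 2.0), Leg("Erfurt", 1.5), Leg("Nuremberg", 2.5),
         Leg("Munich", 2.0), Leg("Salzburg", 1.5)]
for i, section in enumerate(section_route(route), start=1):
    print(f"driver {i}: ... -> {' -> '.join(section)}")
```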