
    Market Segmentation and the Sources of Rents from Innovation: Personal Computers in the Late 1980's

    This paper evaluates the sources of transitory market power in the market for personal computers (PCs) during the late 1980's. Our analysis is motivated by the coexistence of low entry barriers into the PC industry and high rates of innovative investment by a small number of PC manufacturers. We attempt to understand these phenomena by measuring the role that different principles of product differentiation (PDs) played in segmenting this dynamic market. Our first PD measures the substitutability between Frontier (386-based) and Non-Frontier products, while the second PD measures the advantage of a brand-name reputation (e.g., by IBM). Building on advances in the measurement of product differentiation, we measure the separate roles that these PDs played in contributing to transitory market power. In so doing, this paper attempts to account for the market origins of innovative rents in the PC industry. Our principal finding is that, during the late 1980's, the PC market was highly segmented along both the Branded (B versus NB) and Frontier (F versus NF) dimensions. The effects of competitive events in any one cluster were confined mostly to that particular cluster, with little effect on other clusters. For example, less than 5% of the market share achieved by a hypothetical entrant would be market-stealing from other clusters. In addition, the product differentiation advantages of B and F were qualitatively different. The main advantage of F was limited to the isolation from NF competitors it provided; Brandedness both shifted out the product demand curve and segmented B products from NB competition. These results help explain how transitory market power (arising from market segmentation) shaped the underlying incentives for innovation in the PC industry during the mid to late 1980's.
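
    As a concrete illustration of the kind of segmented demand system such measurements rely on (a sketch, not the paper's exact specification), a nested-logit model makes within-cluster versus cross-cluster substitution explicit:

```latex
% Illustrative nested-logit sketch: products are grouped into clusters g
% (e.g., B/F, B/NF, NB/F, NB/NF) and substitution concentrates within a cluster.
\[
  s_{j} = s_{j\mid g}\, s_{g},
  \qquad
  s_{j\mid g} = \frac{e^{\delta_j/(1-\sigma)}}{\sum_{k \in g} e^{\delta_k/(1-\sigma)}},
\]
% Here \delta_j is product j's mean utility and \sigma \in [0,1) captures
% within-cluster taste correlation. As \sigma approaches 1, an entrant into
% cluster g draws its share almost entirely from other products in g, which is
% consistent with under 5% of an entrant's share being stolen from other clusters.
```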

    Using MCD-DVS for dynamic thermal management performance improvement

    With chip temperature being a major hurdle in microprocessor design, techniques to recover the performance lost to thermal emergency mechanisms are crucial to sustain performance growth. Many past techniques for power reduction, and more recently some for thermal management, have helped alleviate this problem. Probably the most important thermal control technique is dynamic voltage and frequency scaling (DVS), which allows an almost cubic reduction in power with a worst-case performance penalty that is only linear. So far, DVS techniques for temperature control have been studied at the chip level. Finer-grain DVS is feasible if a globally-asynchronous locally-synchronous (GALS) design style is employed. GALS, also known as multiple-clock domain (MCD), allows independent voltage and frequency control for each of the clock domains on the chip. Several studies have applied DVS to GALS designs to improve energy and power efficiency, but not temperature. This paper proposes and analyses the use of DVS at the domain level to control temperature in a clustered MCD microarchitecture, with the goal of improving the performance of applications that do not meet the thermal constraints imposed by the designers.
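
    As a rough sketch of the mechanism (assumed constants and a first-order thermal model, not the paper's simulation infrastructure), the Python fragment below scales voltage and frequency per clock domain only when that domain exceeds its thermal limit, exploiting the roughly cubic power dependence on frequency at a roughly linear performance cost. Only the overheating domain loses frequency, which is the advantage of domain-level over chip-level DVS.

```python
# Illustrative per-domain DVS thermal controller for an MCD/GALS chip.
# Domain names, thresholds, and the thermal model are assumptions for this sketch.

T_AMBIENT = 45.0      # deg C
T_LIMIT   = 80.0      # assumed thermal emergency threshold
F_NOMINAL = 1.0       # normalized nominal frequency

class ClockDomain:
    def __init__(self, name, activity):
        self.name = name
        self.activity = activity      # relative switching activity
        self.freq = F_NOMINAL         # normalized frequency (V scaled with f)
        self.temp = T_AMBIENT

    def power(self):
        # P ~ C * V^2 * f with V roughly proportional to f  =>  P ~ f^3,
        # while execution time grows only ~linearly as f drops.
        return self.activity * self.freq ** 3

    def step_thermal(self, dt=1.0, r_th=60.0, tau=0.2):
        # first-order thermal model: temperature tracks ambient + R_th * P
        target = T_AMBIENT + r_th * self.power()
        self.temp += tau * (target - self.temp) * dt

def dvs_controller(domain, step=0.05):
    # throttle only the overheating domain; others keep full frequency
    if domain.temp > T_LIMIT:
        domain.freq = max(0.5, domain.freq - step)
    elif domain.temp < T_LIMIT - 5.0:
        domain.freq = min(F_NOMINAL, domain.freq + step)

domains = [ClockDomain("int-cluster", 1.0), ClockDomain("fp-cluster", 0.6)]
for cycle in range(200):
    for d in domains:
        dvs_controller(d)
        d.step_thermal()
for d in domains:
    print(f"{d.name}: f={d.freq:.2f}, T={d.temp:.1f} C")
```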

    ReDO: Cross-Layer Multi-Objective Design-Exploration Framework for Efficient Soft Error Resilient Systems

    Designing soft error resilient systems is a complex engineering task which nowadays follows a cross-layer approach. It requires careful planning of different fault-tolerance mechanisms at different layers of the system, from the technology up to the software domain. While these design decisions have a positive effect on the reliability of the system, they usually have a detrimental effect on its size, power consumption, performance and cost. Design space exploration for cross-layer reliability is therefore a multi-objective search problem in which reliability must be traded off against other design dimensions. This paper proposes a cross-layer multi-objective design space exploration algorithm developed to help designers build soft error resilient electronic systems. The algorithm exploits a system-level Bayesian reliability estimation model to analyze the effect of different cross-layer combinations of protection mechanisms on the reliability of the full system. A new heuristic based on extremal optimization theory is used to efficiently explore the design space. An extended set of simulations shows the capability of this framework when applied both to benchmark applications and to realistic systems, providing optimized systems that outperform those obtained by applying state-of-the-art cross-layer reliability techniques.
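
    The Python sketch below illustrates the flavor of an extremal-optimization search over per-layer protection mechanisms; the mechanism lists, costs, probabilities and the scalarized fitness are invented for the example and are not taken from the ReDO framework, which is multi-objective and uses a Bayesian system-level reliability model.

```python
# Toy extremal-optimization search over cross-layer protection choices.
# All mechanisms, residual fault probabilities and costs are made up.
import random

# candidate protection mechanisms per layer: (name, residual fault prob., cost)
LAYERS = {
    "technology": [("none", 0.10, 0.0), ("hardened-cells", 0.04, 3.0)],
    "hardware":   [("none", 0.20, 0.0), ("ecc", 0.05, 2.0), ("tmr", 0.01, 6.0)],
    "software":   [("none", 0.30, 0.0), ("checkpointing", 0.08, 1.5)],
}

def system_failure_prob(cfg):
    # sketch assumption: a fault is visible only if no layer masks it,
    # so per-layer residual (non-masking) probabilities multiply
    p = 1.0
    for _name, residual, _cost in cfg.values():
        p *= residual
    return p

def total_cost(cfg):
    return sum(cost for _name, _residual, cost in cfg.values())

def fitness(cfg, weight=50.0):
    # single scalar trade-off for the sketch; the real problem is multi-objective
    return weight * system_failure_prob(cfg) + total_cost(cfg)

def extremal_optimization(iterations=500):
    cfg = {layer: random.choice(opts) for layer, opts in LAYERS.items()}
    best = dict(cfg)
    for _ in range(iterations):
        # extremal-optimization step: always perturb the worst-contributing
        # component (here, the layer with the highest residual fault prob.)
        worst = max(cfg, key=lambda layer: cfg[layer][1])
        cfg[worst] = random.choice(LAYERS[worst])   # replace it, even if worse
        if fitness(cfg) < fitness(best):
            best = dict(cfg)
    return best

best = extremal_optimization()
print({layer: mech[0] for layer, mech in best.items()},
      "P_fail=%.4f cost=%.1f" % (system_failure_prob(best), total_cost(best)))
```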

    Modeling of CMOS devices and circuits on flexible ultrathin chips

    The field of flexible electronics is rapidly evolving, and ultrathin chips are being used to address the high-performance requirements of many applications. However, simulating and predicting the changes in device/circuit response due to bending-induced stress remains a challenge because of the lack of suitable compact models. This makes circuit design for bendable electronics a difficult task. This paper presents advances in this direction, through compressive and tensile stress studies on transistors and simple circuits such as inverters with different channel lengths and orientations of transistors on ultrathin chips. Different designs of devices and circuits in a standard 0.18-μm CMOS technology were fabricated on two separate chips. The two fabricated chips were thinned down to 20 μm using standard dicing-before-grinding steps followed by post-CMOS processing to obtain sufficient bendability (20-mm bending radius, or 0.05% nominal strain). Electrical characterization was performed by packaging the thinned chip on a flexible substrate. Experimental results show changes in the carrier mobilities of the respective transistors and in the switching threshold voltage of the inverters under different bending conditions (maximum change of 2% for compressive and 4% for tensile stress). To simulate these changes, a compact model, combining mathematical equations with parameters extracted from BSIM4, has been developed in Verilog-A and compiled into the Cadence Virtuoso environment. The proposed model predicts the mobility and threshold voltage variations under compressive and tensile bending stress conditions and orientations, and agrees with the experimental measurements (mismatch of 1% for compressive and 0.6% for tensile stress).
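
    A minimal Python sketch of the modeling approach, using placeholder piezoresistive coefficients rather than the paper's extracted BSIM4 parameters or its Verilog-A implementation, shows how bending strain can be turned into first-order mobility and threshold-voltage corrections:

```python
# Sketch: strain-dependent corrections applied on top of nominal (unbent)
# mobility and threshold voltage. Coefficients below are placeholders.

def strain_from_radius(chip_thickness_um=20.0, bending_radius_mm=20.0):
    # nominal surface strain of a thinned die bent to a given radius:
    # eps ~ t / (2 * R); 20 um on a 20 mm radius gives ~0.05%
    return (chip_thickness_um * 1e-6) / (2.0 * bending_radius_mm * 1e-3)

def stressed_mobility(mu0, eps, pi_eff=40.0, tensile=True):
    # first-order piezoresistive correction: mu = mu0 * (1 +/- pi_eff * eps)
    sign = 1.0 if tensile else -1.0
    return mu0 * (1.0 + sign * pi_eff * eps)

def stressed_vth(vth0, eps, k_vth=-2.0, tensile=True):
    # small linear threshold-voltage shift with strain (placeholder coefficient)
    sign = 1.0 if tensile else -1.0
    return vth0 + sign * k_vth * eps

eps = strain_from_radius()                  # ~5e-4, i.e. 0.05% nominal strain
mu_n0, vth_n0 = 400.0, 0.45                 # example nominal NMOS values
print("strain = %.2e" % eps)
print("tensile:     mu = %.1f cm^2/Vs, Vth = %.4f V"
      % (stressed_mobility(mu_n0, eps, tensile=True),
         stressed_vth(vth_n0, eps, tensile=True)))
print("compressive: mu = %.1f cm^2/Vs, Vth = %.4f V"
      % (stressed_mobility(mu_n0, eps, tensile=False),
         stressed_vth(vth_n0, eps, tensile=False)))
```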

    Cross layer reliability estimation for digital systems

    Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time, but it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost. One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern. Attention should be paid to tailoring techniques that improve the reliability of a system on the basis of its requirements, ending up with cost-effective solutions that favor the success of the product on the market. Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault tolerance mechanisms are carefully implemented at different layers, starting from the technology up to the software layer, exploiting the inner capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable the exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost and reliability. This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.
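
    As a simplified illustration of cross-layer estimation (a multiplicative derating sketch with made-up numbers, not the thesis' Bayesian methodology), a raw technology-level fault rate can be successively reduced by the masking that each higher layer provides:

```python
# Sketch: derate a raw technology-level fault rate by the masking contributed
# by the hardware and software layers. Numbers are illustrative only.

def system_failure_rate(raw_fit, hw_masking, sw_masking):
    """raw_fit: technology-level fault rate (FIT, failures per 10^9 hours);
    hw_masking / sw_masking: fraction of faults masked at that layer."""
    return raw_fit * (1.0 - hw_masking) * (1.0 - sw_masking)

# example: 1000 FIT raw rate; 70% of faults are architecturally masked
# (never read, dead values), and 50% of the remainder are masked by software
print(system_failure_rate(raw_fit=1000.0, hw_masking=0.70, sw_masking=0.50))
# -> 150.0 FIT visible at the application level
```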

    Autonomous spacecraft maintenance study group

    A plan to incorporate autonomous spacecraft maintenance (ASM) capabilities into Air Force spacecraft by 1989 is outlined. It includes the successful operation of the spacecraft without ground operator intervention for extended periods of time. Mechanisms, along with a fault-tolerant data processing system (including a nonvolatile backup memory) and an autonomous navigation capability, are needed to replace the routine servicing that is presently performed by the ground system. The state-of-the-art fault handling capabilities of various spacecraft and computers are described, and a set of conceptual design requirements needed to achieve ASM is established. Implementation plans for the near-term technology development needed for an ASM proof-of-concept demonstration by 1985, and a research agenda addressing long-range academic research for an advanced ASM system for the 1990s, are established.
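
    A purely illustrative Python sketch of the fault-handling concept mentioned above (on-board detection plus recovery from a nonvolatile checkpoint, with no ground intervention); none of the names or checks below reflect the study group's actual design.

```python
# Toy autonomous fault-handling loop: checkpoint state to nonvolatile backup
# memory, run a self-test each cycle, and recover without a ground command.
import copy

class OnboardComputer:
    def __init__(self):
        self.state = {"orbit_epoch": 0, "mode": "nominal"}
        self.nonvolatile_backup = copy.deepcopy(self.state)

    def checkpoint(self):
        # periodically save state to the nonvolatile backup memory
        self.nonvolatile_backup = copy.deepcopy(self.state)

    def self_test(self):
        # placeholder health check; a real system would run built-in tests
        return self.state.get("mode") != "corrupted"

    def recover(self):
        # autonomous recovery: restore the last good state, no ground command
        self.state = copy.deepcopy(self.nonvolatile_backup)

obc = OnboardComputer()
for cycle in range(5):
    obc.state["orbit_epoch"] = cycle
    if cycle == 3:
        obc.state["mode"] = "corrupted"      # injected fault
    if not obc.self_test():
        obc.recover()
    else:
        obc.checkpoint()
print(obc.state)
```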

    Satellite on-board processing for earth resources data

    Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and reducing data bulk are presented, along with a candidate data format. The computational requirements for implementing the data analysis algorithms are included, together with a review of computer architectures and organizations. Computer architectures capable of handling these computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each user application, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of the results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.
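
    A back-of-the-envelope Python sketch of the kind of feasibility comparison described above, with all sensor and processor figures assumed for illustration rather than taken from the survey:

```python
# Sketch: estimate the multispectral scanner data rate and the operation rate a
# preprocessing algorithm would demand, then compare with a candidate on-board
# processor. All numbers are assumed for illustration.

def sensor_data_rate(pixels_per_line, lines_per_sec, bands, bits_per_sample):
    """Raw scanner output in bits per second."""
    return pixels_per_line * lines_per_sec * bands * bits_per_sample

def required_ops(pixel_rate, ops_per_pixel):
    """Operations per second needed by a correction / bulk-reduction algorithm."""
    return pixel_rate * ops_per_pixel

pixels_per_line, lines_per_sec, bands, bits = 3000, 100, 7, 8
raw_bps = sensor_data_rate(pixels_per_line, lines_per_sec, bands, bits)
pixel_rate = pixels_per_line * lines_per_sec * bands

for algo, ops in [("radiometric correction", 5), ("geometric correction", 30)]:
    need = required_ops(pixel_rate, ops)
    print(f"{algo}: {need/1e6:.1f} MOPS needed "
          f"({'feasible on' if need <= 50e6 else 'exceeds'} a 50 MOPS processor)")
print(f"raw downlink rate without on-board reduction: {raw_bps/1e6:.1f} Mbit/s")
```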