1,009 research outputs found

    RAPPID: an asynchronous instruction length decoder

    Journal Article: This paper describes an investigation of the potential advantages and risks of applying an aggressive asynchronous design methodology to Intel Architecture. RAPPID ("Revolving Asynchronous Pentium® Processor Instruction Decoder"), a prototype IA32 instruction length decoding and steering unit, was implemented using self-timed techniques. The RAPPID chip was fabricated on a 0.25 μm CMOS process and tested successfully. Results show significant advantages, in particular a performance of 2.5-4.5 instructions/ns, with manageable risks using this design technology. RAPPID achieves three times the throughput and half the latency, dissipating only half the power and requiring about the same area as an existing 400 MHz clocked circuit.

    An asynchronous instruction length decoder

    Journal Article: This paper describes an investigation of the potential advantages and pitfalls of applying an asynchronous design methodology to an advanced microprocessor architecture. A prototype complex instruction set length decoding and steering unit was implemented using self-timed circuits. [The Revolving Asynchronous Pentium® Processor Instruction Decoder (RAPPID) design implemented the complete Pentium II® 32-bit MMX instruction set.] The prototype chip was fabricated on a 0.25-μm CMOS process and tested successfully. Results show significant advantages, in particular a performance of 2.5-4.5 instructions per nanosecond, with manageable risks using this design technology. The prototype achieves three times the throughput and half the latency, dissipating only half the power and requiring about the same area as the fastest commercial 400-MHz clocked circuit fabricated on the same process.
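
    As a rough software illustration of the task described above (not of the asynchronous circuit itself), the Python sketch below length-decodes a byte stream for a handful of prefix-free IA-32 opcodes and steers the resulting instructions to parallel decoder lanes; the opcode table, the steer function, and the lane count are invented for illustration and cover only a tiny fraction of the instruction set.

        # A drastically simplified, software-only sketch of the task RAPPID performs
        # in hardware: find the length of each variable-length IA-32 instruction in a
        # byte stream and steer the instructions to parallel decoder lanes. Only a few
        # prefix-free one-byte opcodes are modeled; the table is illustrative only.
        ILLUSTRATIVE_LENGTHS = {
            0x90: 1,   # NOP
            0xC3: 1,   # RET
            0x6A: 2,   # PUSH imm8
            0x68: 5,   # PUSH imm32
            0xE8: 5,   # CALL rel32
        }
        ILLUSTRATIVE_LENGTHS.update({op: 1 for op in range(0x50, 0x58)})  # PUSH r32
        ILLUSTRATIVE_LENGTHS.update({op: 5 for op in range(0xB8, 0xC0)})  # MOV r32, imm32

        def steer(byte_stream, num_lanes=4):
            """Split the stream into instructions and hand them to decoder lanes round-robin."""
            lanes = [[] for _ in range(num_lanes)]
            pc, lane = 0, 0
            while pc < len(byte_stream):
                length = ILLUSTRATIVE_LENGTHS[byte_stream[pc]]          # length decode
                lanes[lane].append(bytes(byte_stream[pc:pc + length]))  # steer
                pc += length
                lane = (lane + 1) % num_lanes
            return lanes

        # PUSH EBP; MOV EAX, 1; CALL rel32 0; RET
        code = bytes([0x55, 0xB8, 1, 0, 0, 0, 0xE8, 0, 0, 0, 0, 0xC3])
        print(steer(code))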

    A sudden presentation of abdominal compartment syndrome

    Dear Editor, Abdominal compartment syndrome (ACS) is defined as sustained intra-abdominal pressure (IAP) exceeding 20 mm Hg, which causes end-organ damage due to impaired tissue perfusion, as with other compartment syndromes [1, 2]. This dysfunction can extend beyond the abdomen to other organs like the heart and lungs. ACS is most commonly caused by trauma or surgery to the abdomen. It is characterised by interstitial oedema, which can be exacerbated by large fluid shifts during massive transfusion of blood products and other fluid resuscitation [3]. Normally, IAP is nearly equal to or slightly above ambient pressure. Intra-abdominal hypertension is typically defined as abdominal pressure greater than or equal to 12 mm Hg [4]. Initially, the abdomen is able to distend to accommodate the increase in pressure caused by oedema; however, IAP becomes highly sensitive to any additional volume once maximum distension is reached. This is a function of abdominal compliance, which plays a key role in the development and progression of intra-abdominal hypertension [5]. Surgical decompression is required in severe cases of organ dysfunction – usually when IAPs are refractory to other treatment options [6]. Excessive abdominal pressure leads to systemic pathophysiological consequences that may warrant admission to a critical care unit. These include hypoventilation secondary to restriction of the deflection of the diaphragm, which results in reduced chest wall compliance. This is accompanied by hypoxaemia, which is exacerbated by a decrease in venous return. Combined, these consequences lead to decreased cardiac output, a V/Q mismatch, and compromised perfusion to intra-abdominal organs, most notably the kidneys [7]. Kidney damage can be prerenal due to renal vein or artery compression, or intrarenal due to glomerular compression [8] – both share decreased urine output as a manifestation. Elevated bladder pressure is also seen from compression due to increased abdominal pressure, and its measurement, via a Foley catheter, is a diagnostic hallmark. Sustained intra-bladder pressures beyond 20 mm Hg with organ dysfunction are indicative of ACS requiring intervention [2, 8]. ACS is an important aetiology to consider in the differential diagnosis for signs of organ dysfunction – especially in the perioperative setting – as highlighted in the case below.
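
    For illustration only, the short Python sketch below encodes the pressure thresholds quoted above (intra-abdominal hypertension at or above 12 mm Hg; ACS as sustained pressure above 20 mm Hg with organ dysfunction); the function name and labels are invented, and this is not clinical guidance.

        def classify_iap(iap_mm_hg, organ_dysfunction=False):
            """Rough classification of intra-abdominal pressure using the thresholds
            quoted in the letter; illustration only, not clinical guidance."""
            if iap_mm_hg > 20 and organ_dysfunction:
                return "abdominal compartment syndrome"   # surgical decompression may be required
            if iap_mm_hg >= 12:
                return "intra-abdominal hypertension"
            return "normal / near-ambient"

        print(classify_iap(23, organ_dysfunction=True))   # -> abdominal compartment syndrome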

    Parallel Access of Out-Of-Core Dense Extendible Arrays

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files highly limit the performance of applications and the extendibility of the dataset. First, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Second, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves these two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*⁻¹(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and with no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
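
    The paper's mapping function F*() and its inverse are not reproduced here; the Python sketch below only illustrates the axial-vector idea for a two-dimensional array, under assumptions: every expansion appends one contiguous segment and records its first index, its base address, and the extent of the other dimension at expansion time, so the linear address of any element can be computed without relocating stored data. The class and method names are invented for illustration.

        class Extendible2D:
            """Sketch of addressing a dense 2-D array that can grow along either
            dimension without moving existing elements (not the paper's exact F*())."""

            def __init__(self, nrows, ncols):
                self.nrows, self.ncols, self.next_addr = 0, 0, 0
                self.row_axis, self.col_axis = [], []   # axial vectors
                self.extend_rows(nrows)
                self.extend_cols(ncols)

            def extend_rows(self, k):
                # New rows are stored row-major with the current column extent.
                self.row_axis.append((self.nrows, self.next_addr, self.ncols))
                self.next_addr += k * self.ncols
                self.nrows += k

            def extend_cols(self, k):
                # New columns are stored column-major with the current row extent.
                self.col_axis.append((self.ncols, self.next_addr, self.nrows))
                self.next_addr += k * self.nrows
                self.ncols += k

            def address(self, r, c):
                """Linear address of element (r, c)."""
                first_r, base_r, width = max(rec for rec in self.row_axis if rec[0] <= r)
                first_c, base_c, height = max(rec for rec in self.col_axis if rec[0] <= c)
                if c < width:            # column c already existed when row r was added
                    return base_r + (r - first_r) * width + c
                return base_c + (c - first_c) * height + r

        a = Extendible2D(2, 2)          # initial 2 x 2 block
        a.extend_cols(1)                # grow to 2 x 3 without reorganisation
        a.extend_rows(1)                # grow to 3 x 3 without reorganisation
        print([a.address(r, c) for r in range(3) for c in range(3)])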

    Temperature measurement in the Intel® Core™ Duo Processor

    Modern CPUs with increasing core frequency and power are rapidly reaching a point where CPU frequency and performance are limited by the amount of heat that can be extracted by the cooling technology. In the mobile environment this issue is becoming more apparent as form factors become thinner and lighter, and mobile platforms often trade CPU performance in order to reduce power and manage the box thermals. Most of today's high-performance CPUs provide a thermal sensor on the die to allow thermal management, typically in the form of an analog thermal diode. Operating system algorithms and platform embedded controllers read the temperature and control the processor power. In addition to full temperature reading, some products implement digital sensors with a fixed temperature threshold, intended for fail-safe operation. Temperature measurements using the diode suffer from some inherent inaccuracies:
    - Measurement accuracy: an external device connects to the diode and performs the A/D conversion. The combination of diode behavior, electrical noise and conversion accuracy results in measurement error.
    - Distance to the die hot spot: due to routing restrictions, the diode is not placed at the hottest spot on the die. The temperature difference between the diode and the hot spot varies with the workload, so the reported temperature does not accurately represent the maximum die temperature. This offset increases as the power density of the CPU increases, and multi-core CPUs are harder still to address because the workload and the thermal distribution change with the set of active cores.
    - Manufacturing temperature accuracy: inaccuracies in the test environment induce an additional offset between the measured temperature and the actual temperature.
    As a result of these effects, the thermal control algorithm must add a temperature guard band to account for the control feedback errors, which impacts the performance and reliability of the silicon. To address these thermal control issues, the Intel® Core™ Duo introduces a new digital temperature reading capability on die. Multiple thermal sensors are distributed across the die at possible hot spots. A/D logic built around these sensors translates the temperature into a digital value accessible to operating system thermal control software or to a driver-based control mechanism. Providing a highly accurate temperature reading requires a calibration process: during high-volume manufacturing, each sensor is calibrated to provide good accuracy and linearity. The die specification and reliability limits are defined by the hottest spot on the die, and the calibration of each sensor is done under the same test conditions as the specification testing. Any test control inaccuracy is eliminated because the part is guaranteed to meet specifications at maximum temperature as measured by the digital thermometer. As a result, the use of the integrated thermal sensor enables improved reliability and performance at high workloads while meeting specifications at all times. In this paper we present the implementation and calibration details of the digital thermometer. We show some studies of the temperature distribution on the die and compare traditional diode-based measurement to the digital sensor implementation.
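
    The Python sketch below is only a rough software illustration of the reporting scheme described above: several on-die sensors, each with a per-unit calibration obtained during manufacturing test, with the hottest calibrated reading handed to the thermal control loop. The calibration model, field names, and numeric values are invented; this is not Intel's implementation.

        from dataclasses import dataclass

        @dataclass
        class SensorCal:
            gain: float     # slope correction from high-volume manufacturing calibration
            offset: float   # offset correction, degrees C

        def calibrated_c(raw_code, cal):
            """Convert a raw A/D code to degrees C using the sensor's calibration."""
            return raw_code * cal.gain + cal.offset

        def die_temperature(raw_codes, cals):
            """Report the hottest calibrated sensor, which bounds the die hot spot."""
            return max(calibrated_c(code, cal) for code, cal in zip(raw_codes, cals))

        cals = [SensorCal(0.5, 20.0), SensorCal(0.5, 22.0)]   # hypothetical calibration values
        print(die_temperature([130, 140], cals))              # -> 92.0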

    Utility of human life cost in anaesthesiology cost-benefit decisions

    The United States (US) aviation industry provides a potentially useful mental model for dealing with certain cost-benefit decisions in anaesthesiology. The Federal Aviation Administration (FAA), the national aviation authority of the United States, quantifies a price for the value of a human life based on the U.S. Department of Transportation's (DOT) value of a statistical life (VSL) unit. The current VSL is around $9.6 million, indexed to grow with consideration given to inflation and wage changes from the 2016 baseline of $9.4 million [1]. To illustrate the concept, if the FAA estimates that 100 people are likely to die in the future given the current practice standards, then the monetary cost of this loss will be $940 million. The FAA uses this estimated monetary value as an official reference point in its regulatory decisions, and the agency publishes in detail how it derives the estimated value. When proposing new regulations, the FAA bases its decisions on comparisons of the human life cost associated with the existing regulation versus the alternative cost that the industry stakeholders will incur subsequent to the adoption of the regulation. In this example, if the cost incurred by the industry is more than the $940 million cost, then the FAA will not adopt the proposed regulation and hence will not require the industry to undertake this cost.
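
    The decision rule reduces to simple arithmetic; the Python sketch below restates it with the figures quoted above (a $9.4 million baseline VSL and 100 statistical lives). It is purely illustrative and is not an FAA or clinical decision tool.

        VSL = 9.4e6   # value of a statistical life, 2016 baseline in dollars

        def adopt_regulation(lives_saved, industry_cost):
            """Adopt only if the cost to stakeholders does not exceed the monetised benefit."""
            return industry_cost <= lives_saved * VSL

        print(adopt_regulation(100, 900e6))   # True: $900M cost vs. $940M benefit
        print(adopt_regulation(100, 1.0e9))   # False: cost exceeds the $940M benefit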

    Analysis of Trade-Off Between Power Saving and Response Time in Disk Storage Systems

    It is anticipated that in the near future disk storage systems will surpass application servers and become the primary consumers of power in data centers. Shutting down inactive disks is one of the more widespread solutions for reducing the power consumption of disk systems: disks that exhibit long periods of inactivity are spun down or completely shut off and placed in standby mode. A file request to a disk in standby mode incurs an I/O cost penalty, as it takes time to spin up the disk before it can serve the file. In this paper, we address the problem of designing and implementing file allocation strategies on disk storage that save energy while meeting the performance requirements of file retrievals. We present an algorithm for solving this problem with guaranteed bounds from the optimal solution. Our algorithm runs in O(n log n) time, where n is the number of files allocated. Detailed simulation results and experiments with real-life workloads are also presented.
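
    The paper's allocation algorithm is not reproduced here; the Python sketch below only makes the underlying trade-off explicit by computing the break-even idle time beyond which placing a disk in standby saves energy despite the spin-up penalty. The power and energy figures are invented for illustration and are not taken from the paper.

        def break_even_idle_s(p_idle_w, p_standby_w, e_spinup_j):
            """Idle time beyond which standby saves energy despite the spin-up cost."""
            return e_spinup_j / (p_idle_w - p_standby_w)

        def spin_down_pays_off(idle_s, p_idle_w=8.0, p_standby_w=1.0, e_spinup_j=70.0):
            return idle_s > break_even_idle_s(p_idle_w, p_standby_w, e_spinup_j)

        print(break_even_idle_s(8.0, 1.0, 70.0))   # 10.0 seconds
        print(spin_down_pays_off(30.0))            # True: idle period long enough to save energy
        print(spin_down_pays_off(5.0))             # False: spinning down would waste energy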