7,750 research outputs found

    When Do WOM Codes Improve the Erasure Factor in Flash Memories?

    Full text link
    Flash memory is a write-once medium in which reprogramming cells requires first erasing the block that contains them. The lifetime of the flash is a function of the number of block erasures and can be as small as several thousand. To reduce the number of block erasures, pages, which are the smallest write unit, are rewritten out-of-place in the memory. A write-once memory (WOM) code is a coding scheme that enables writing to a block multiple times before an erasure. However, these codes come with a significant rate loss; for example, the rate for writing twice (with the same rate on each write) is at most 0.77. In this paper, we study WOM codes and their tradeoff between rate loss and reduction in the number of block erasures when pages are written uniformly at random. First, we introduce a new measure, called the erasure factor, that reflects both the number of block erasures and the amount of data that can be written on each block. A key point in our analysis is that this tradeoff depends upon the specific implementation of WOM codes in the memory. We consider two systems that use WOM codes: a conventional scheme that was commonly used, and a recent design that preserves the overall storage capacity. While the first system can improve the erasure factor only when the storage rate is at most 0.6442, we show that the second scheme always improves this figure of merit. Comment: to be presented at ISIT 2015
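
    As a concrete illustration (not taken from the paper itself), the classic Rivest-Shamir construction is a minimal two-write WOM code: 2 data bits are written twice into 3 write-once cells, and the second write only raises cells from 0 to 1, so no erasure is needed in between. The sketch below is our own; the function names and the particular value-to-codeword assignment are illustrative.

```python
# Minimal sketch of the classic Rivest-Shamir two-write WOM code: 2 data bits
# are written twice into 3 write-once binary cells (sum-rate 4/3), and the
# second write only flips cells from 0 to 1, so the block is not erased.

FIRST = {0b00: (0, 0, 0), 0b01: (0, 0, 1), 0b10: (0, 1, 0), 0b11: (1, 0, 0)}
SECOND = {v: tuple(1 - c for c in cw) for v, cw in FIRST.items()}  # complements

def decode(state):
    """Cells with at most one programmed bit hold a first-write codeword,
    otherwise a second-write codeword."""
    table = FIRST if sum(state) <= 1 else SECOND
    return next(v for v, cw in table.items() if cw == state)

def write_first(value):
    """First write: program at most one cell."""
    return FIRST[value]

def write_second(state, value):
    """Second write: only 0 -> 1 transitions, so no block erasure is needed."""
    if decode(state) == value:                 # same value: leave cells alone
        return state
    new = SECOND[value]
    assert all(n >= s for s, n in zip(state, new)), "would require an erasure"
    return new

state = write_first(0b10)                      # cells become (0, 1, 0)
state = write_second(state, 0b11)              # cells become (0, 1, 1)
assert decode(state) == 0b11
```

    Each write stores 2 bits in 3 cells, i.e. a rate of 2/3 per write, below the 0.77 bound cited above; the payoff is that a block absorbs two page writes per erasure, which is the kind of tradeoff the paper quantifies with its erasure factor.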

    LightNVM: The Linux Open-Channel SSD Subsystem

    Get PDF

    The use of oral history and narrative research in broadening the historical foundation of the agricultural communication field

    Get PDF
    The historical foundation of the agricultural communication community (consisting of both academics and the profession) is shallow and lacks a humanistic perspective, and there is little historical content focused on the field's academic and professional history. Exploring and interpreting the historical dimensions of this field is vital to its further development as an academic and professional discipline. Oral history was used to capture and preserve interviews with a small sample of agricultural communication and Extension professionals and faculty, while narrative research, interpretative theory, and constructivism were used to further understand and interpret the oral history data generated from those interviews. The process included transcribing the oral histories from the interviews and then coding them, using initial coding to identify themes. Numerous themes emerged, but my research centers on three: women in the agricultural communication field, departmental mergers, and technology. Interpretative theory and constructivism were further used to explore the themes that emerged during transcription. Narrative research, whose purpose is to further understand and explore perspective, was then applied. It was important to see how narrative research can reshape the current historical narrative to include perspectives that have been ignored, misinterpreted, or unknown. The oral history content has also yielded a substantial amount of new, thought-provoking historical material that can be discussed, debated, and used in numerous capacities.

    HPC memory systems: Implications of system simulation and checkpointing

    Get PDF
    The memory system is a major contributor to most of the current challenges in computer architecture: application performance bottlenecks and operational costs in large data centers such as HPC supercomputers. With the advent of emerging memory technologies, the exploration of novel designs for the memory hierarchy of HPC systems is an open invitation for computer architecture researchers to improve and optimize current designs and deployments. System simulation is the preferred approach for architectural exploration because of its low cost compared to prototyping hardware, acceptable performance estimates, and accurate energy consumption predictions. Despite the broad presence and extensive usage of system simulators, their validation is not standardized, either because the main purpose of the simulator is not to mimic real hardware, or because its design assumptions are narrowed to a particular computer architecture topic. This thesis provides the first steps toward a systematic methodology for validating system simulators against real systems. We unveil a real machine's microarchitectural parameters through a set of specially crafted micro-benchmarks. The unveiled parameters are used to upgrade the simulation infrastructure in order to obtain higher accuracy in the simulation domain. To evaluate that accuracy, we propose the retirement factor, an extension to a well-known application performance methodology. Our proposal provides a new metric to measure the impact of simulator parameter tuning when looking for the most accurate configuration. We further present the delay queue, a modification to the memory controller that imposes a configurable delay on all memory transactions that reach the main memory devices; evaluated using the retirement factor, the delay queue allows us to identify the sources of deviation between the simulation infrastructure and the real system. Memory accesses directly affect application performance, both on the real machine and in the simulation's accuracy. From single reads of a unique memory location to simultaneous read/write operations on one or many memory locations, HPC applications' memory usage differs from workload to workload. A property that offers a glimpse of an application's memory usage is the workload's memory footprint. In this work, we found a link between an HPC workload's memory footprint and simulation performance. Current trends in HPC data-center memory deployments and in HPC applications' memory footprints led us to envision an opportunity to include emerging memory technologies as part of the reliability support of HPC systems. Emerging memory technologies such as 3D-stacked DRAM are being deployed in current HPC systems, but in limited quantities compared with standard DRAM, which makes them suitable for HPC applications with low memory footprints. We exploit and evaluate this characteristic by enabling a Checkpoint-Restart library to support a heterogeneous memory system deployed with an emerging memory technology. Our implementation imposes negligible overhead while offering a simple interface to allocate, manage, and migrate data sets between heterogeneous memory systems. Moreover, we show that using an emerging memory technology is not by itself a direct solution to performance bottlenecks; correct data placement and careful code implementation are critical to obtaining the best computing performance.
Overall, this thesis provides a technique for validating main memory system simulators when they are integrated into a simulation infrastructure and compared against real systems. In addition, we explore the link between a workload's memory footprint and simulation performance for current HPC workloads. Finally, we enable low-memory-footprint HPC applications with resilience support while transparently profiting from emerging memory deployments.
Postprint (published version)
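
To make the delay-queue idea above more concrete, here is a minimal, self-contained sketch under simplified assumptions; it is our own illustration, not the thesis's actual simulator code, and the class and field names are hypothetical.

```python
from collections import deque

class DelayQueue:
    """Toy model of a memory controller that imposes a fixed, configurable
    extra delay on every transaction before it is issued to the memory device.
    (Illustrative only; the real modification sits inside a full simulator's
    memory-controller pipeline.)"""

    def __init__(self, delay_cycles):
        self.delay = delay_cycles
        self.pending = deque()              # entries of (ready_cycle, transaction)

    def enqueue(self, cycle, transaction):
        """Accept a transaction at the given cycle and hold it for `delay` cycles."""
        self.pending.append((cycle + self.delay, transaction))

    def tick(self, cycle):
        """Release the transactions whose extra delay has elapsed by this cycle."""
        ready = []
        while self.pending and self.pending[0][0] <= cycle:
            ready.append(self.pending.popleft()[1])
        return ready

# Example: a read enqueued at cycle 0 with a 40-cycle delay is released at cycle 40.
dq = DelayQueue(delay_cycles=40)
dq.enqueue(cycle=0, transaction="RD 0x1000")
assert dq.tick(cycle=39) == []
assert dq.tick(cycle=40) == ["RD 0x1000"]
```

In this spirit, sweeping the configured delay while monitoring a metric such as the retirement factor is one way to localize where the simulated memory pipeline deviates from the real machine.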

    Adult Age Differences In Recall And Reading Times For Prose

    Get PDF
    One strategy used by good college-age readers is to spend additional time viewing or reading information which is relevant to their goals or purpose in reading. Additional viewing time of goal-relevant information then presumably leads to superior retention of this information, at the expense of information which is irrelevant to the reader's goals. One way to detect different strategies used by younger and older readers is to measure how much viewing time readers allot to goal-relevant information and how much of this information is recalled. Relevant information can be designated as that material which contains answers to previously-memorized questions or it can be defined as the text segments which are intrinsically most important to the theme of the text. This study was designed to measure the impact of age upon the higher-level control and monitoring processes necessary for effective prose comprehension. In the first experiment, twenty-four college-age subjects and twenty-four elderly subjects, classified as high or low in verbal ability, read two passages and answered questions about them. In the treatment condition, questions were known beforehand. In the control condition, no questions were given before reading the story. Inspection times were recorded for all subjects while they read at their own rate. Results showed that both younger and older readers spent more time viewing information relevant to their goal. All subjects also recalled more goal-relevant than irrelevant information. In the second experiment, the same forty-eight subjects read two passages one idea unit at a time. They then orally recalled the story. Inspection times were recorded for each segment of the text. Results revealed that both younger and older readers spent more time viewing information relevant to the theme of the passage. All subjects also recalled text segments as a function of that segment's importance to the theme of the passage. Results are discussed as lending support to the hypothesis that older readers are adaptive and flexible information processors, able to vary strategies to obtain the desired reading goal. Thus, there do not seem to be adult age differences in at least some metacognitive skills. However, adults showed lower overall recall and slower overall reading time. Slower verbal coding speed leading to a smaller effective processing capacity is consistent with the obtained results and is discussed as a possible explanation for the observed age-related memory decline. Implications of this research and possible future directions of research in this area are also discussed.

    Alzheimer’s Disease and Other Dementias Workgroup: Alzheimer’s Disease and Other Dementias Report and Recommendations

    Get PDF
    Rates of Alzheimer's disease and other dementias are expected to increase greatly over the next decades. Many practices lack guidelines on how to increase the quality of diagnosing, treating, and supporting people with dementia and their family members and other caregivers. This workgroup met from January to November 2017, aligned with and built on the Alzheimer's State Plan, and organized its recommendations around the following focus areas: early detection and appropriate diagnosis; ongoing care, support, and management, including for family members and caregivers; advance care planning and palliative care; assessment and planning for the need for increased support and/or higher levels of care; preparing for potential hospitalization; and screening for delirium risk during hospitalization for all patients over 65.

    Performance and Microarchitectural Analysis for Image Quality Assessment

    Get PDF
    This thesis presents a performance analysis of five mature image quality assessment (IQA) algorithms: VSNR, MAD, MSSIM, BLIINDS, and VIF, using the VTune ... from Intel. The main performance parameter considered is execution time. First, we conduct a hotspot analysis to find the most time-consuming sections of the five algorithms. Second, we perform a microarchitectural analysis of the algorithms' behavior on Intel's Sandy Bridge microarchitecture to find architectural bottlenecks. Existing research on improving the performance of IQA algorithms is based on advanced signal processing techniques. Our research focuses on the interaction of IQA algorithms with the underlying hardware and architectural resources. We propose coding techniques that exploit the hardware resources and consequently improve execution time and computational performance. Along with these software tuning methods, we also propose a generic custom IQA hardware engine based on the microarchitectural analysis and the behavior of these five IQA algorithms on the underlying microarchitectural resources. School of Electrical & Computer Engineering
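
    As a toy illustration of the hotspot-analysis step (our own sketch; the thesis profiles the five IQA algorithms above with Intel VTune, whereas the metric and function names below are invented), Python's built-in cProfile can rank the most time-consuming functions of a simple quality metric:

```python
# Toy hotspot pass: rank the most time-consuming functions of a simple
# image-quality metric with Python's built-in cProfile. Illustrative only;
# not the VTune workflow or the algorithms analyzed in the thesis.
import cProfile
import pstats

import numpy as np

def mse(ref, dist):
    diff = ref.astype(np.float64) - dist.astype(np.float64)
    return float(np.mean(diff * diff))

def psnr(ref, dist, peak=255.0):
    return 10.0 * np.log10(peak * peak / mse(ref, dist))

def assess(n_frames=20, shape=(1080, 1920)):
    """Score a batch of noisy frames against a reference frame."""
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, shape, dtype=np.uint8)
    noise = rng.integers(-5, 6, shape)
    dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    return [psnr(ref, dist) for _ in range(n_frames)]

profiler = cProfile.Profile()
profiler.runcall(assess)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)  # top 5 entries
```

    A hotspot pass like this identifies where the execution time goes; a microarchitectural pass then explains why, by attributing those cycles to stalls in the pipeline and the memory hierarchy.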

    When a Patch is Not Enough - HardFails: Software-Exploitable Hardware Bugs

    Full text link
    In this paper, we take a deep dive into microarchitectural security from a hardware designer's perspective by reviewing the existing approaches to detecting hardware vulnerabilities during the design phase. We show that a protection gap currently exists in practice that leaves chip designs vulnerable to software-based attacks. In particular, existing verification approaches fail to detect specific classes of vulnerabilities, which we call HardFails: these bugs evade detection by current verification techniques while being exploitable from software. We demonstrate such vulnerabilities in real-world SoCs using RISC-V to showcase and analyze concrete instantiations of HardFails. Patching these hardware bugs may not always be possible and can potentially result in a product recall. We base our findings on two extensive case studies: the recent Hack@DAC 2018 hardware security competition, in which 54 independent teams of researchers competed worldwide over a period of 12 weeks to catch inserted security bugs in SoC RTL designs, and an in-depth systematic evaluation of state-of-the-art verification approaches. Our findings indicate that even combinations of techniques will miss high-impact bugs due to the large number of modules with complex interdependencies and the fundamental limitations of current detection approaches. We also craft a real-world software attack that exploits one of the RTL bugs from Hack@DAC that evaded detection and discuss novel approaches to mitigate the growing problem of cross-layer bugs at design time.

    Patients' Perceptions of Quality of Life and Resource Availability After Critical Illness

    Get PDF
    Physical, psychological, and social debilities are common among survivors of critical illness. Survivors of critical illness require rehabilitative services during recovery in order to return to functional independence, but the structure and accessibility of such services remain unclear. The purpose of this qualitative study was to explore the vital issues affecting quality of life from the perspective of critical illness survivors and to understand these patients' experiences with rehabilitative services in the United States. The theoretical framework guiding this study was Weber's rational choice theory, and a phenomenological study design was employed. The research questions focused on the survivors' experiences with rehabilitative services following critical illness and on post-intensive care unit quality of life. Participants were recruited using purposeful sampling. A researcher-developed instrument was used to conduct 12 semistructured interviews in central North Carolina. Data from the interviews were coded for thematic analysis. The findings identified that aftercare lacked unity, was limited by disparate information, and overused informal caregivers. In addition, survivors' recovery depended on being prepared for post-intensive care unit life, access to recovery-specific support structures, and the survivors' ability to adapt to a new normalcy. Survivors experienced gratitude for being saved, which empowered them to embrace new life priorities. The implications for social change include improved understanding of urgently needed health care policies to provide essential therapies and services required to support intensive care unit survivors on their journey to recovery.