43 research outputs found

    Stacked Memory Architecture for Improving Performance and Capacity

    Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology, Seoul National University, Department of Transdisciplinary Studies (Intelligent Convergence Systems Major), February 2019. Advisor: Jung Ho Ahn.
    The advance of DRAM manufacturing technology is slowing down, whereas the density and performance demands on DRAM continue to increase. This gap has motivated the industry to explore emerging non-volatile memory (e.g., 3D XPoint) and high-density DRAM (e.g., Managed DRAM Solution). Since such memory technologies increase density at the cost of longer latency, lower bandwidth, or both, it is essential to use them together with fast memory (e.g., conventional DRAM) to which hot pages are transferred at runtime. Nonetheless, we observe that page transfers to fast memory often block memory channels from servicing memory requests from applications for long periods, which in turn significantly increases the high-percentile response time of latency-sensitive applications. In this thesis, we propose a high-density managed DRAM architecture, dubbed 3D-XPath, for applications demanding both low latency and high memory capacity. 3D-XPath DRAM stacks conventional DRAM dies with the high-density DRAM dies explored in this thesis and connects these dies through 3D-XPath. In particular, 3D-XPath allows unused memory channels to service memory requests from applications when the primary channels that would normally handle those requests are blocked by page transfers, considerably reducing the high-percentile response time. It can also improve the throughput of applications that frequently copy memory blocks between kernel and user memory spaces. Our evaluation shows that 3D-XPath DRAM decreases the high-percentile response time of latency-sensitive applications by ∼30% while improving the throughput of I/O-intensive applications by ∼39%, compared with DRAM without 3D-XPath.
    Recent computer systems are evolving toward integrating more CPU cores into a single socket, which requires higher memory bandwidth and capacity. Increasing the number of channels per socket is a common answer to the bandwidth demand, and to better utilize these additional channels, the data bus width is reduced and the burst length is increased. However, the longer burst length increases DRAM access latency. On the capacity side, process scaling has been the answer for decades, but cell capacitance now limits how small a cell can be; 3D-stacked memory addresses this problem by stacking dies on top of one another. On a real multicore machine, we observed that the multiple memory controllers are not always fully utilized when running the SPEC CPU 2006 rate benchmarks. To bring these idle channels into play, we propose a memory channel sharing architecture that boosts the peak bandwidth of a single memory channel and reduces burst latency on 3D-stacked memory. With channel sharing, performance on multi-programmed and multi-threaded workloads improves by up to 4.3% and 3.6%, respectively, and the average read latency is reduced by up to 8.22% and 10.18%.
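
    To make the alternative-path idea concrete, the following Python sketch models a request being redirected to an idle channel when its primary channel is occupied by a long hot-page transfer. It is a minimal illustration under invented timings; the Channel class, the busy_until bookkeeping, and the least-loaded selection policy are assumptions made for the example, not the thesis's actual 3D-XPath hardware logic.

        # Minimal sketch of alternate-path channel selection, loosely inspired by the
        # 3D-XPath idea: if the primary channel is blocked by a long page transfer,
        # service the request over an idle channel instead. All timings are invented.

        class Channel:
            def __init__(self, name):
                self.name = name
                self.busy_until = 0  # cycle at which the channel becomes free

            def is_busy(self, now):
                return now < self.busy_until

            def occupy(self, now, duration):
                self.busy_until = max(self.busy_until, now) + duration


        def issue_request(now, primary, alternates, access_cycles=40):
            """Service a memory request on the primary channel if possible,
            otherwise fall back to the least-loaded idle alternate channel."""
            if not primary.is_busy(now):
                primary.occupy(now, access_cycles)
                return primary.name
            idle = [ch for ch in alternates if not ch.is_busy(now)]
            if idle:
                ch = min(idle, key=lambda c: c.busy_until)
                ch.occupy(now, access_cycles)
                return ch.name
            # Everything is busy: wait for the primary channel (baseline behaviour).
            primary.occupy(primary.busy_until, access_cycles)
            return primary.name


        if __name__ == "__main__":
            ch0, ch1 = Channel("ch0"), Channel("ch1")
            ch0.occupy(now=0, duration=2000)        # long hot-page transfer blocks ch0
            print(issue_request(100, ch0, [ch1]))   # request is redirected to ch1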

    Evaluating the impact of future memory technologies in the design of multicore processors

    "It’s the Memory, Stupid!" In 1996, Richard Sites, one of the fathers of Computer Architecture and lead designer of the DEC alpha, wrote a paper [36] with the title above. In that paper he realized that the only important design issue for microprocessors in the next decade would be the memory subsystems design. After more than a decade later, the community of researchers started to digest and internalize this quote. Now, after more than two decades, it can be said that a lot of progress has been done since 1996 but the expectations of the enormous data sets that software is going to handle in the followings years tells that more aggressive designs are needed. Another reason new memory technologies are needed is because of the multicore architecture which has increased the required memory bandwidth. This architecture completely extended across the main computer sectors was the result of continuing the Moore’s law in exchange of adding more difficulties for software and hardware developers. All of this has promoted this project. First, it has been decided to create a bridge of the state of the art DRAM simulator Ramulator [30] with the micro-architecture simulator TaskSim [34]. Once the bridge has been completed, the second goal of this project has been to make an evaluation of the impact of the current and future memory technologies in multicore architectures. As a first approach, this new infrastructure has been used to evaluate the behavior of several parallel applications concluding that the execution time of the applications varies significantly across different memory technologies which even increase the differences while simulating different processors. The doubtless winner among all the memory technologies evaluated has been HBM which in some cases has achieved the best expected memory cycle response time."És la Memòria, Estúpid!" El 1996, Richard Sites, un dels pares de l’Arquitectura de Computadors i dissenyador principal del DEC Alpha, va escriure un article [36] amb el títol anterior. En aquest article es va adonar que l’única qüestió important per al disseny de microprocessadors en la dècada següent seria el disseny de subsistemes de memòria. Després de més d’una dècada, la comunitat d’investigadors va començar a digerir i assimilar aquesta cita. Ara, després de més de dues dècades, es pot dir que un gran progrés s’ha fet des de 1996, però les expectatives dels enormes conjunts de dades que el software utilitzarà en els següents anys indica que es necessiten dissenys més agressius. Una altra raó per el qual es necessiten noves tecnologies de memòria és degut a les arquitectures multinucli que augmenten l’ample de banda de memòria requerit. Aquesta arquitectura completament estesa a través dels sectors principals va ser el resultat de seguir la llei de Moore a canvi d’afegir més dificultats per als desenvolupadors de software i hardware. Tot això ha promogut aquest projecte. En primer lloc, s’ha decidit crear un pont sobre el capdavanter simulador de DRAM Ramulator [30] amb un de micro-arquitectura TaskSim [34]. Una vegada que s’ha completat el pont, el segon objectiu d’aquest projecte ha estat fer una avaluació de l’impacte de les tecnologies de memòria actuals i futures en les arquitectures multinucli. 
Com a primera aproximació, s’ha utilitzat aquesta nova infraestructura per avaluar el comportament de diverses aplicacions paral·leles arribant a la conclusió que el temps d’execució de les aplicacions varia significativament entre diferents tecnologies de memòria, que fins i tot augmenten les diferències amb la simulació de diferents processadors. El guanyador, sens dubte, entre totes les tecnologies de memòria ha estat HBM que en alguns casos ha aconseguit el millor temps de cicle de memòria esperat."Es la Memoria, Estupido!" En 1996, Richard Sites, uno de los padres de la Arquitectura de Computadores y diseñador principal del DEC alpha, escribió un artículo [36] con el título anterior. En ese artículo se dio cuenta de que el único problema de diseño importante para los microprocesadores en la próxima década sería el diseño del subsistema de memoria. Una década más tarde, la comunidad de investigadores comenzó a digerir e interiorizar esta cita. Ahora, después de más de dos décadas, se puede decir que se han hecho muchos progresos desde 1996, pero las expectativas de los enormes conjuntos de datos que el software va a manejar en los próximos años indican que se necesitan diseños más agresivos. Otra razón por la que se necesitan nuevas tecnologías de memoria es debido a la arquitectura multinúcleo que aumenta el ancho de banda de memoria requerido. Esta arquitectura completamente extendida a través de los principales sectores fue el resultado de continuar la ley de Moore a cambio de añadir más dificultades para los desarrolladores de software y hardware. Todo esto ha promovido este proyecto. En primer lugar, se ha decidido crear un puente entre el potente simulador de DRAM Ramulator [30] con uno de micro-arquitecura TaskSim [34]. Una vez que el puente se ha completado, el segundo objetivo de este proyecto ha sido hacer una evaluación del impacto de las tecnologías de memoria actuales y futuras en arquitecturas multinúcleo. Como primera aproximación, se ha utilizado esta nueva infraestructura para evaluar varias aplicaciones paralelas llegando a la conclusión de que el tiempo de ejecución de las aplicaciones varía significativamente entre las diferentes tecnologías de memoria y que incluso aumentan las diferencias al simular diferentes procesadores. El ganador sin duda entre todas las tecnologías de memoria evaluadas ha sido HBM que en algunos casos ha logrado el mejor tiempo de ciclo de memoria de respuesta esperado
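
    The bridge described above essentially lets the CPU-side simulator hand memory requests to a DRAM back-end and receive completion callbacks. The Python sketch below shows that general shape under assumed names (DramBackend, CpuFrontend, a fixed 120-cycle latency); it is not the real Ramulator or TaskSim API, both of which are C++ and considerably more detailed.

        # Schematic sketch of a simulator "bridge": the CPU-side simulator hands memory
        # requests to a DRAM back-end and is notified via a callback when they finish.
        # This mirrors the general shape of such integrations; it is NOT the actual
        # Ramulator or TaskSim interface.

        import heapq
        import itertools

        class DramBackend:
            """Stand-in for a cycle-level DRAM simulator back-end."""
            def __init__(self, fixed_latency=120):
                self.fixed_latency = fixed_latency   # cycles; a real model varies this
                self.pending = []                    # heap of (finish_cycle, seq, addr, cb)
                self._seq = itertools.count()

            def send(self, now, addr, callback):
                finish = now + self.fixed_latency
                heapq.heappush(self.pending, (finish, next(self._seq), addr, callback))

            def tick(self, now):
                while self.pending and self.pending[0][0] <= now:
                    _, _, addr, callback = heapq.heappop(self.pending)
                    callback(addr, now)

        class CpuFrontend:
            """Stand-in for the micro-architecture simulator that issues requests."""
            def __init__(self, dram):
                self.dram = dram
                self.outstanding = 0

            def load(self, now, addr):
                self.outstanding += 1
                self.dram.send(now, addr, self._on_complete)

            def _on_complete(self, addr, now):
                self.outstanding -= 1
                print(f"load of {addr:#x} completed at cycle {now}")

        if __name__ == "__main__":
            dram = DramBackend()
            cpu = CpuFrontend(dram)
            cpu.load(now=0, addr=0x1000)
            for cycle in range(200):     # drive both simulators cycle by cycle
                dram.tick(cycle)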

    CoMeT: An Integrated Interval Thermal Simulation Toolchain for 2D, 2.5D, and 3D Processor-Memory Systems

    Processing cores and the accompanying main memory, working in tandem, enable modern processors. Dissipating the heat produced by computation and memory access remains a significant problem for processors. Therefore, processor thermal management continues to be an active research topic. Most thermal management research takes place using simulations, given the challenges of measuring temperature in real processors. Since cores and memory are fabricated on separate packages in most existing processors, with the memory having lower power densities, thermal management research in processors has primarily focused on the cores. Memory bandwidth limitations associated with 2D processors lead to high-density 2.5D and 3D packaging technology. 2.5D packaging places cores and memory on the same package. 3D packaging technology takes it further by stacking layers of memory on top of the cores themselves. Such packagings significantly increase the power density, making processors prone to overheating. Therefore, mitigating thermal issues in high-density processors (packaged with stacked memory) becomes an even more pressing problem. However, given the lack of thermal modeling for memories in existing interval thermal simulation toolchains, they are unsuitable for studying thermal management for high-density processors. To address this issue, we present CoMeT, the first integrated Core and Memory interval Thermal simulation toolchain. CoMeT comprehensively supports thermal simulation of high- and low-density processors corresponding to four different core-memory configurations: off-chip DDR memory, off-chip 3D memory, 2.5D, and 3D. CoMeT supports several novel features that facilitate overlying system research. Compared to an equivalent state-of-the-art core-only toolchain, CoMeT adds only a ~5% simulation-time overhead. The source code of CoMeT has been made open for public use under the MIT license. Comment: https://github.com/marg-tools/CoMeT
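
    At its core, an interval thermal simulation alternates between collecting per-component power for an interval and feeding it to a thermal model that updates temperatures. The toy loop below illustrates that structure with a first-order RC model and invented parameters (ambient temperature, thermal resistances, time constants); CoMeT's actual flow couples a performance simulator with a far more detailed thermal solver.

        # Toy interval thermal simulation: every interval, per-component power numbers
        # are fed to a first-order RC thermal model that updates temperatures. This is
        # only meant to illustrate the simulation structure, not CoMeT's actual models.

        import math

        AMBIENT_C = 45.0          # ambient / heat-sink temperature (assumed)
        INTERVAL_S = 1e-3         # 1 ms interval length (assumed)

        components = {
            # name: (thermal resistance K/W, thermal time constant s) -- invented values
            "core0": (0.8, 0.010),
            "core1": (0.8, 0.010),
            "dram":  (1.5, 0.050),
        }
        temps = {name: AMBIENT_C for name in components}

        def step(power_w):
            """Advance every component by one interval given its power draw."""
            for name, (r_th, tau) in components.items():
                steady = AMBIENT_C + r_th * power_w[name]      # steady-state temperature
                alpha = 1.0 - math.exp(-INTERVAL_S / tau)      # first-order update factor
                temps[name] += alpha * (steady - temps[name])

        if __name__ == "__main__":
            for interval in range(100):
                # Power per interval would come from the performance simulator.
                step({"core0": 12.0, "core1": 8.0, "dram": 4.0})
            print({k: round(v, 1) for k, v in temps.items()})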

    Architectural Techniques to Enable Reliable and Scalable Memory Systems

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like the Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even building the next Exascale supercomputer. To ensure even a bare-minimum level of reliability, present-day solutions tend to have high performance, power, and area overheads. Ideally, we would like memory systems to remain robust, scalable, and implementable while keeping the overheads to a minimum. This dissertation describes how simple cross-layer architectural techniques can provide orders of magnitude higher reliability and enable seamless scalability for memory systems while incurring negligible overheads. Comment: PhD thesis, Georgia Institute of Technology (May 2017).

    Memory Hierarchy Design for Next Generation Scalable Many-core Platforms

    Performance and energy consumption in modern computing platforms are largely dominated by the memory hierarchy. The increasing computational power of multiprocessors and accelerators, and the emergence of data-intensive workloads (e.g., large-scale graph traversal and scientific algorithms) requiring fast transfer of large volumes of data, are two main trends that intensify this problem by putting even higher pressure on the memory hierarchy. This increasing gap between computation speed and data-transfer speed is commonly referred to as the "memory wall" problem. With the emergence of heterogeneous Three-Dimensional (3D) integration based on through-silicon vias (TSVs), this situation has started to improve in recent years. On one hand, it is now possible to improve memory access bandwidth and/or latency by either stacking memories directly on top of processors or through abstracted memory interfaces such as Micron's Hybrid Memory Cube (HMC). On the other hand, near-memory computation has become worth revisiting due to the cost-effective integration of logic and memory in 3D stacks. These two directions bring about several interesting opportunities, including performance improvement, energy and cost reduction, product miniaturization, and modular design for improved time to market. In this research, we study the effectiveness of 3D integration technology and the optimization opportunities it can provide in the different layers of the memory hierarchy in cluster-based many-core platforms, ranging from intra-cluster L1 and inter-cluster L2 scratchpad memories (SPMs) to the main memory. In addition, by moving a part of the computation to where the data resides, in the 3D-stacked memory context, we demonstrate further energy and performance improvement opportunities.
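
    The near-memory computation argument can be made concrete with a back-of-the-envelope cost model: shipping a large input over a narrow host link versus computing inside the stack and returning only a small result. The numbers below (link and compute throughputs, result size) are illustrative assumptions, not measurements from this work.

        # Back-of-the-envelope model of when near-memory computation pays off: if an
        # operation touches a large volume of data but produces a small result, doing
        # it next to memory avoids moving the input over the narrow host link.
        # All bandwidth and throughput numbers are illustrative assumptions.

        def host_time_s(bytes_in, host_link_gbps=25.0, host_compute_gbps=200.0):
            """Move all input data to the host, then process it there."""
            transfer = bytes_in / (host_link_gbps * 1e9)
            compute = bytes_in / (host_compute_gbps * 1e9)
            return transfer + compute

        def near_memory_time_s(bytes_in, bytes_out, nm_compute_gbps=50.0,
                               host_link_gbps=25.0):
            """Process data inside the stack, then ship only the (small) result."""
            compute = bytes_in / (nm_compute_gbps * 1e9)
            transfer = bytes_out / (host_link_gbps * 1e9)
            return compute + transfer

        if __name__ == "__main__":
            mib, gib = 1 << 20, 1 << 30
            for size in (64 * 1024, 256 * mib, 4 * gib):
                h = host_time_s(size)
                n = near_memory_time_s(size, bytes_out=4096)
                print(f"{size:>12} B  host {h*1e3:9.3f} ms  near-mem {n*1e3:9.3f} ms")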

    CoMeT: An Integrated Interval Thermal Simulation Toolchain for 2D, 2.5D, and 3D Processor-Memory Systems

    Processing cores and the accompanying main memory working in tandem enable modern processors. Dissipating heat produced from computation remains a significant problem for processors. Therefore, the thermal management of processors continues to be an active subject of research. Most thermal management research is performed using simulations, given the challenges in measuring temperatures in real processors. Fast yet accurate interval thermal simulation toolchains remain the research tool of choice to study thermal management in processors at the system level. However, the existing toolchains focus on the thermal management of cores in the processors, since they exhibit much higher power densities than memory. The memory bandwidth limitations associated with 2D processors lead to high-density 2.5D and 3D packaging technology: 2.5D packaging technology places cores and memory on the same package; 3D packaging technology takes it further by stacking layers of memory on top of the cores themselves. These new packagings significantly increase the power density of the processors, making them prone to overheating. Therefore, mitigating thermal issues in high-density processors (packaged with stacked memory) becomes even more pressing. However, given the lack of thermal modeling for memories in existing interval thermal simulation toolchains, they are unsuitable for studying thermal management for high-density processors. To address this issue, we present the first integrated Core and Memory interval Thermal (CoMeT) simulation toolchain. CoMeT comprehensively supports thermal simulation of high- and low-density processors corresponding to four different core-memory (integration) configurations: off-chip DDR memory, off-chip 3D memory, 2.5D, and 3D. CoMeT supports several novel features that facilitate overlying system research. CoMeT adds only an additional ~5% simulation-time overhead compared to an equivalent state-of-the-art core-only toolchain. The source code of CoMeT has been made open for public use under the MIT license.

    Variation-Tolerant Non-Uniform 3D Cache Management in Memory Stacked Multi-Core Processors

    Process variations in integrated circuits have a significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. DRAMs are built using minimized transistors, with presumably uniform speed, in an organized array structure. Process variation can introduce latency disparity among different memory arrays. With the proliferation of 3D stacking technology, DRAM becomes a favorable choice for stacking on top of a multi-core processor as a last-level cache, offering large capacity, high bandwidth, and low power. Hence, variations in bank speed create a unique problem of non-uniform cache accesses in the 3D space. In this thesis, we investigate cache management techniques for tolerating process variation in a 3D DRAM stacked onto a multi-core processor. We modeled the process variation in a 4-layer DRAM memory to characterize the latency variations among different banks. As a result, from a core's standpoint the notion of fast and slow banks is no longer tied to its physical distance from the banks; it is determined instead by the differing bank latencies caused by process variation. We develop cache migration schemes that utilize fast banks while limiting the cost of migration. Our experiments show that there is a great performance benefit in exploiting fast memory banks through migration. On average, variation-aware management can improve the performance of a workload over the baseline (where the speed of the slowest bank is assumed for all banks) by 17.8%, and it comes within 0.45% of the performance of an ideal memory where no process variation is present.
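
    A variation-aware migration policy of the general kind described here boils down to a benefit/cost test: move a hot block to a faster bank only if the expected latency savings exceed the migration overhead. The sketch below illustrates that decision with invented per-bank latencies, access counts, and an assumed cost model; it is not the thesis's exact scheme.

        # Sketch of a variation-aware migration decision: move a hot cache block to a
        # faster DRAM bank only if the expected access-latency savings outweigh the
        # one-time migration cost. All numbers are illustrative assumptions.

        def migration_benefit_cycles(expected_accesses, src_latency, dst_latency):
            return expected_accesses * (src_latency - dst_latency)

        def should_migrate(expected_accesses, src_latency, dst_latency,
                           migration_cost_cycles):
            benefit = migration_benefit_cycles(expected_accesses, src_latency, dst_latency)
            return benefit > migration_cost_cycles

        def pick_destination(bank_latencies, occupancy, capacity):
            """Prefer the fastest bank that still has room."""
            candidates = [b for b in bank_latencies if occupancy[b] < capacity]
            return min(candidates, key=bank_latencies.get, default=None)

        if __name__ == "__main__":
            # Per-bank column-access latencies (cycles) after process variation.
            lat = {"bank0": 11, "bank1": 15, "bank2": 13, "bank3": 18}
            occ = {b: 0 for b in lat}
            dst = pick_destination(lat, occ, capacity=1024)
            print(dst, should_migrate(expected_accesses=500, src_latency=lat["bank3"],
                                      dst_latency=lat[dst], migration_cost_cycles=2000))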

    Exploring Adaptive Implementation of On-Chip Networks

    As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs that does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D ICs and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (vertical channels) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
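
    Congestion-aware minimal adaptive routing in a 2D mesh can be summarized as: among the output directions that move a packet closer to its destination, pick the neighbor reporting the least congestion. The sketch below uses buffer occupancy as an assumed congestion metric and deliberately omits the deadlock-avoidance restrictions (e.g., turn-model rules) a real router would need; it is an illustration, not the thesis's routing algorithm.

        # Minimal sketch of congestion-aware output selection for minimal adaptive
        # routing in a 2D mesh: among the productive directions (those that move the
        # packet closer to its destination), pick the neighbour reporting the lowest
        # buffer occupancy. Deadlock avoidance (e.g., turn-model restrictions) is
        # deliberately omitted to keep the example short.

        def productive_directions(cur, dst):
            x, y = cur
            dx, dy = dst[0] - x, dst[1] - y
            dirs = []
            if dx > 0: dirs.append("E")
            if dx < 0: dirs.append("W")
            if dy > 0: dirs.append("N")
            if dy < 0: dirs.append("S")
            return dirs

        def select_output(cur, dst, neighbour_occupancy):
            """neighbour_occupancy maps direction -> buffered flits at that neighbour."""
            dirs = productive_directions(cur, dst)
            if not dirs:
                return "LOCAL"                      # packet has reached its destination
            return min(dirs, key=lambda d: neighbour_occupancy.get(d, 0))

        if __name__ == "__main__":
            # Router (1, 1) routes toward (3, 2); East is congested, North is not.
            print(select_output((1, 1), (3, 2), {"E": 7, "N": 1, "W": 0, "S": 2}))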

    Coordinated management of the processor and memory for optimizing energy efficiency

    Energy efficiency is a key design goal for future computing systems. With diverse components interacting with each other on the System-on-Chip (SoC), dynamically managing performance, energy, and temperature is a challenge in 2D architectures, and even more so in a 3D stacked environment. Temperature has emerged as the parameter of primary concern. Heuristic-based schemes have been employed so far to address these issues. Looking ahead, the complex multiphysics interactions between performance, energy, and temperature reveal the limitations of such approaches. Therefore, in this thesis, a comprehensive characterization of existing methods is first carried out to identify the causes of their inefficiency. Managing different components in an independent and isolated fashion using heuristics is seen to be the primary drawback. Following this, techniques based on feedback control theory are developed to optimize the energy efficiency of the processor and memory in a coordinated fashion. They are evaluated on a real physical system and on a cycle-level simulator, demonstrating significant improvements over prior schemes. The two main messages of this thesis are: (i) coordination between multiple components is paramount for next-generation computing systems, and (ii) temperature ought to be treated as a resource, like compute or memory cycles.
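
    As a flavor of the control-theoretic approach, the toy proportional-integral (PI) loop below adjusts a frequency knob to hold temperature near a setpoint instead of relying on threshold heuristics. The plant model, gains, and frequency limits are invented for the example and are not the controllers developed in this thesis.

        # Toy proportional-integral (PI) controller that adjusts a frequency knob to
        # keep temperature near a setpoint -- a minimal illustration of feedback
        # control versus ad-hoc heuristics. Plant model, gains, and limits are invented.

        SETPOINT_C = 80.0
        KP, KI = 0.05, 0.01           # controller gains (assumed)
        F_MIN, F_MAX = 1.0, 3.0       # frequency range in GHz (assumed)

        def thermal_plant(temp, freq_ghz):
            """Crude plant: temperature drifts toward a frequency-dependent level."""
            target = 45.0 + 15.0 * freq_ghz ** 2
            return temp + 0.1 * (target - temp)

        def run(intervals=200):
            temp, freq, integral = 60.0, F_MAX, 0.0
            for _ in range(intervals):
                error = SETPOINT_C - temp          # positive error -> thermal headroom
                integral += error
                freq = min(F_MAX, max(F_MIN, F_MAX + KP * error + KI * integral))
                temp = thermal_plant(temp, freq)
            return round(temp, 1), round(freq, 2)

        if __name__ == "__main__":
            print("after 200 intervals (temp C, freq GHz):", run())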

    Doctor of Philosophy in Computing

    The demand for main memory capacity has been increasing for many years and will continue to do so. In the past, Dynamic Random Access Memory (DRAM) process scaling has enabled this increase in memory capacity. Along with continued DRAM scaling, the emergence of new technologies like 3D stacking, buffered Dual Inline Memory Modules (DIMMs), and crosspoint nonvolatile memory promises to continue this trend in the years ahead. However, these technologies will bring with them their own gamut of problems. In this dissertation, I look at the problems facing these technologies from a current-delivery perspective. 3D stacking increases the memory capacity available per package, but the increased current requirement means that more package pins now have to be dedicated to delivering Vdd/Vss, which increases cost. At the system level, using buffered DIMMs to increase the number of DRAM ranks increases the peak current requirements of the system if all the DRAM chips in the system are refreshed simultaneously. Crosspoint memories promise to greatly increase bit densities but have long read latencies because of sneak currents in the crossbar. In this dissertation, I provide architectural solutions to each of these problems. We observe that smart data placement by the architecture and the Operating System (OS) is a vital ingredient in all of these solutions. We thereby mitigate major bottlenecks in these technologies, enabling higher memory densities.
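
    The buffered-DIMM peak-current issue mentioned above is commonly tackled by staggering refresh so that only a bounded number of ranks refresh at once. The sketch below compares worst-case refresh current with and without staggering, using assumed per-rank current and DDR-like timing constants; it is an illustration of the general idea, not the dissertation's specific mechanism.

        # Sketch of staggered refresh scheduling: instead of refreshing every rank at
        # once (worst-case peak current), ranks are refreshed in offset time slots so
        # that at most `max_concurrent` ranks draw refresh current simultaneously.
        # Current and timing values are illustrative assumptions.

        TREFI_NS = 7800       # average refresh interval per rank (DDRx-like value)
        TRFC_NS = 350         # duration of one refresh command (assumed)
        IDD5_MA = 250         # refresh current per rank in milliamps (assumed)

        def refresh_schedule(num_ranks, max_concurrent):
            """Assign each rank a start offset so that refresh windows overlap at most
            `max_concurrent` at a time within one tREFI period."""
            slots = -(-num_ranks // max_concurrent)      # ceiling division
            slot_len = TREFI_NS / slots
            return {rank: (rank // max_concurrent) * slot_len for rank in range(num_ranks)}

        def peak_refresh_current(offsets):
            """Worst-case concurrent refresh current (mA) over one tREFI period."""
            events = sorted(offsets.values())
            peak = 0
            for t in events:
                active = sum(1 for s in events if s <= t < s + TRFC_NS)
                peak = max(peak, active)
            return peak * IDD5_MA

        if __name__ == "__main__":
            all_at_once = {r: 0.0 for r in range(8)}
            staggered = refresh_schedule(num_ranks=8, max_concurrent=2)
            print("simultaneous refresh:", peak_refresh_current(all_at_once), "mA")
            print("staggered refresh:   ", peak_refresh_current(staggered), "mA")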