388 research outputs found

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity arising from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous promising automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
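
    As a hedged illustration of the kind of ML surrogate such surveys cover (not an example taken from the paper), the sketch below trains a random-forest regressor to predict a hypothetical post-layout metric, here timing slack, from early design features so that slow signoff runs could be triggered selectively. The feature set and data are synthetic placeholders.

    # Illustrative sketch only: a supervised-learning surrogate for a VLSI
    # design metric. Feature names and data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical early-stage features: fanout, net-length estimate, cell
    # density, utilization, clock period (all normalized to [0, 1]).
    X = rng.uniform(0.0, 1.0, size=(n, 5))
    # Synthetic "ground truth" slack with noise, standing in for signoff results.
    y = 0.8 - 0.5 * X[:, 1] - 0.3 * X[:, 2] + 0.1 * rng.normal(size=n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"MAE on held-out designs: {mean_absolute_error(y_te, pred):.3f}")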

    PAPIERCRAFT: A PAPER-BASED INTERFACE TO SUPPORT INTERACTION WITH DIGITAL DOCUMENTS

    Get PDF
    Many researchers interact extensively with documents using both computers and paper printouts, which offer complementary strengths. Paper is comfortable to read from and write on and can be flexibly arranged in space; computers provide an efficient way to archive, transfer, search, and edit information. However, owing to the gap between the two media, it is difficult to integrate them seamlessly and so optimize the user's experience of document interaction. Existing solutions either sacrifice paper's inherent flexibility or support only very limited digital functionality on paper. In response, we have proposed PapierCraft, a novel paper-based interface that supports rich digital facilities on paper without sacrificing paper's flexibility. Employing emerging digital pen technology and multimodal pen-top feedback, PapierCraft allows people to use a digital pen to draw gesture marks on a printout, which are captured, interpreted, and applied to the corresponding digital copy. Conceptually, the pen and the paper form a paper-based computer, able to interact with other paper sheets and computing devices for operations such as copy/paste, hyperlinking, and web searches. Furthermore, it retains the full range of paper's advantages through its lightweight, pen-and-paper-only interface. By combining the advantages of paper and digital media and by supporting smooth transitions between them, PapierCraft bridges the paper-computer gap. The contributions of this dissertation span four aspects. First, to accommodate the static nature of paper, we proposed a pen-gesture command system that does not rely on screen-rendered feedback but rather on the self-explanatory pen ink left on the paper. Second, for more interactive tasks, such as searching for keywords on paper, we explored pen-top multimodal (e.g., auditory, visual, and tactile) feedback that enhances the command system without sacrificing paper's inherent flexibility. Third, we designed and implemented a multi-tier distributed infrastructure to map pen-paper interactions to digital operations and to unify document interaction on paper and on computers. Finally, we systematically evaluated PapierCraft through three lab experiments and two application deployments in the areas of field biology and e-learning. Our research has demonstrated the feasibility, usability, and potential applications of the paper-based interface, shedding light on the design of future interfaces for digital document interaction. More generally, our research also contributes to ubiquitous computing, mobile interfaces, and pen computing.
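
    A minimal sketch, not PapierCraft's actual implementation, of how a pen stroke captured on a printout might be mapped back to the corresponding digital document and dispatched as a command; all class names, fields, and the coordinate handling below are illustrative assumptions.

    # Toy gesture-to-command pipeline: a stroke on a printed page is resolved
    # to a region of the digital copy and routed to a command handler.
    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    @dataclass
    class Stroke:
        page_id: str                              # identifies the physical printout page
        bbox: Tuple[float, float, float, float]   # stroke bounding box on paper (mm)
        gesture: str                              # e.g. "copy", "paste", "search"

    @dataclass
    class PageMapping:
        doc_id: str      # digital document this page was printed from
        doc_page: int    # page index in the digital copy
        scale: float     # paper-mm -> document-unit scale factor

    class GestureDispatcher:
        def __init__(self) -> None:
            self.pages: Dict[str, PageMapping] = {}
            self.handlers: Dict[str, Callable] = {}

        def register_page(self, page_id: str, mapping: PageMapping) -> None:
            self.pages[page_id] = mapping

        def on(self, gesture: str, handler: Callable) -> None:
            self.handlers[gesture] = handler

        def dispatch(self, stroke: Stroke) -> None:
            mapping = self.pages[stroke.page_id]
            # Convert the paper-space bounding box into digital-document coordinates.
            doc_bbox = tuple(v * mapping.scale for v in stroke.bbox)
            self.handlers[stroke.gesture](mapping, doc_bbox)

    # Example: a "copy" gesture on printout page "p-17" selects a region of "thesis.pdf".
    d = GestureDispatcher()
    d.register_page("p-17", PageMapping(doc_id="thesis.pdf", doc_page=17, scale=2.83))
    d.on("copy", lambda m, box: print(f"copy region {box} from {m.doc_id} page {m.doc_page}"))
    d.dispatch(Stroke(page_id="p-17", bbox=(10.0, 20.0, 60.0, 40.0), gesture="copy"))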

    Adding Machine Intelligence to Hybrid Memory Management

    Get PDF
    Computing platforms increasingly incorporate heterogeneous memory hardware technologies as a way to scale application performance and memory capacity and to achieve cost effectiveness. However, this heterogeneity, along with the greater irregularity in the behavior of emerging workloads, renders existing hybrid memory management approaches ineffective, calling for more intelligent methods. To this end, this thesis reveals new insights, develops novel methods, and contributes system-level mechanisms towards the practical integration of machine learning into hybrid memory management, boosting application performance and system resource efficiency. First, this thesis builds Kleio, a hybrid memory page scheduler with machine intelligence. Kleio deploys Recurrent Neural Networks to learn memory access patterns at page granularity and to improve the selection of dynamic page migrations across the memory hardware components. Kleio focuses the machine learning on the subset of pages whose timely movement yields the greatest application performance improvement, while preserving history-based lightweight management for the remaining pages. In this way, Kleio bridges on average 80% of the existing relative performance gap, while laying the groundwork for practical machine-intelligent data management with manageable learning overheads. In addition, this thesis contributes three system-level mechanisms that further boost application performance and reduce the operational and learning overheads of machine learning-based hybrid memory management. First, this thesis builds Cori, a system-level solution for tuning the operational frequency of periodic page schedulers for hybrid memories. Cori leverages insights on data reuse times to fine-tune the page migration frequency in a lightweight manner. Second, this thesis contributes Coeus, a page grouping mechanism for page schedulers like Kleio. Coeus leverages Cori's data reuse insights to tune the granularity at which patterns are interpreted by the page scheduler and enables the training of a single Recurrent Neural Network per page cluster, reducing model training times by 3x. The combined effects of Cori and Coeus provide 3x additional performance improvements over Kleio alone. Finally, this thesis proposes Cronus, an image-based page selector for page schedulers like Kleio. Cronus uses visualization to accelerate the selection of which page patterns should be managed with machine learning, reducing Kleio's operational overheads by 75x. Cronus lays the foundations for the future use of visualization and computer vision methods in memory management, such as image-based memory access pattern classification, recognition, and prediction.
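
    The following is a rough, self-contained sketch of the core idea behind RNN-based page scheduling rather than Kleio's code: a small LSTM is trained on one page's per-period access counts so a scheduler could anticipate hot periods and migrate the page to fast memory ahead of time. The trace, window length, and model size are arbitrary placeholders (PyTorch assumed).

    # Sketch: forecast a page's next-period access level from its recent history.
    import math
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic per-period access counts for one page (stand-in for a real trace).
    trace = torch.sin(torch.linspace(0, 20 * math.pi, 400)) * 0.5 + 0.5

    def windows(series, length=16):
        # Build (history window, next value) training pairs from the trace.
        xs = torch.stack([series[i:i + length] for i in range(len(series) - length)])
        ys = series[length:]
        return xs.unsqueeze(-1), ys.unsqueeze(-1)   # (N, length, 1), (N, 1)

    X, y = windows(trace)

    class PagePredictor(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])          # predict next-period access level

    model = PagePredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for epoch in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    # A scheduler could migrate the page to fast memory when the forecast is high.
    print("final training loss:", float(loss))
    print("forecast for next period:", float(model(X[-1:]).item()))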

    Assessment of earthquake-triggered landslides in Central Nepal

    Full text link
    Landslides are recurrent in Nepal owing to active tectonics, high precipitation, complex topography and geology, and land use practices. Reliable landslide susceptibility maps are crucial for effective disaster management. Ongoing research has improved landslide mapping approaches, while further efforts are needed to assess inventories and enhance susceptibility mapping methods. This thesis aims to evaluate the landslides caused by the 2015 Gorkha earthquake and to develop reliable landslide susceptibility maps using statistical and geospatial techniques. There are four main objectives: (i) proposing clustering-based sampling strategies that increase the efficiency of landslide susceptibility maps over random selection methods, (ii) identifying and delineating effective landslide mapping units, (iii) proposing an innovative framework for comparing inventories and their corresponding susceptibility maps, and (iv) implementing a methodology for landslide-type-specific susceptibility mapping. Firstly, a comprehensive Gorkha earthquake-induced landslide inventory was compiled, and six unsupervised clustering algorithms were employed to generate six distinct training datasets; an additional training dataset was prepared using a randomised approach. Among the tested algorithms, Expectation Maximization with the Gaussian Mixture Model (EM/GMM) demonstrated the highest accuracy, confirming the importance of prioritising clustering patterns when sampling landslide inventory training datasets. Secondly, slope units were introduced as an effective mapping unit for assessing landslides, delineating 112,674 slope unit polygons over an approximately 43,000 km² area in Central Nepal; this is the first instance of such comprehensive mapping being generated and made publicly accessible. Thirdly, a comparison of five post-Gorkha earthquake inventories and their corresponding susceptibility maps was conducted, revealing similarities in causative factors and map performance but variations in spatial patterns. Lastly, a rockfall inventory along two major highways was developed as a type-specific landslide inventory, and rockfall susceptibility was evaluated; a segment-wise map on a 1-to-5 scale, indicating low to high susceptibility, was published for public use. This thesis proposes new approaches to landslide inventory sampling and earthquake-triggered landslide assessment, and it provides publicly accessible databases for Central Nepal's slope unit map and for rockfall susceptibility along the major highways. These findings can help researchers, planners, and policymakers enhance risk management practices by advancing landslide assessment, particularly for earthquake-induced landslides in Central Nepal.
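
    A hedged illustration of the clustering-based sampling idea, not the thesis workflow: landslide points are clustered in causative-factor space with an EM-fitted Gaussian Mixture Model, and training samples are then drawn from every cluster so the training set covers all landslide patterns rather than a purely random subset. The factor names and values below are synthetic placeholders.

    # Sketch: GMM clustering of a landslide inventory, then stratified sampling.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(42)

    # Hypothetical causative factors per mapped landslide: slope (deg),
    # peak ground acceleration (g), distance to fault (km), relief (m).
    n_landslides = 3000
    factors = np.column_stack([
        rng.uniform(10, 60, n_landslides),
        rng.uniform(0.1, 0.8, n_landslides),
        rng.uniform(0, 50, n_landslides),
        rng.uniform(100, 2500, n_landslides),
    ])

    # Fit an EM/GMM clustering over the factor space.
    gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(factors)

    # Stratified sampling: take the same fraction from every cluster.
    train_fraction = 0.3
    train_idx = []
    for c in range(gmm.n_components):
        members = np.flatnonzero(labels == c)
        k = max(1, int(round(train_fraction * len(members))))
        train_idx.extend(rng.choice(members, size=k, replace=False))

    train_idx = np.array(train_idx)
    print("training samples per cluster:",
          np.bincount(labels[train_idx], minlength=gmm.n_components))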

    Dependable Embedded Systems

    Get PDF
    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems that have emerged particularly within the last five years. It presents the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel, cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that improve reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex, future many-core systems.
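
    As a generic textbook illustration of one error-resiliency mechanism within the book's scope (not code from the book), the sketch below implements triple modular redundancy: a computation is executed three times and a majority vote masks a single soft-error-induced fault.

    # Sketch: triple modular redundancy (TMR) with majority voting.
    import random
    from collections import Counter

    def tmr(compute, *args):
        """Run `compute` three times and return the majority result."""
        results = [compute(*args) for _ in range(3)]
        value, votes = Counter(results).most_common(1)[0]
        if votes < 2:
            raise RuntimeError("no majority: more than one replica disagreed")
        return value

    def flaky_add(a, b):
        # Simulated soft error: occasionally flip one bit of the result.
        result = a + b
        if random.random() < 0.05:
            result ^= 1 << random.randrange(8)
        return result

    # A replica hit by a bit flip is outvoted by the two correct replicas.
    print(tmr(flaky_add, 40, 2))   # almost always prints 42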

    Towards more intelligent wireless access networks

    Get PDF

    Design and engineering of microreactor and smart-scaled flow processes

    Get PDF
    This book is a reprint of the special issue that appeared in the online open access journal Processes (ISSN 2227-9717) in 2013 (available at: http://www.mdpi.com/journal/processes/special_issues/smart-scaled_flow_processes)