Filamentary Switching: Synaptic Plasticity through Device Volatility
Replicating the computational functionality and performance of the brain
remains one of the biggest challenges for the future of information and
communication technologies. Such an ambitious goal requires research efforts
from the architecture level to the basic device level (i.e., investigating the
opportunities offered by emerging nanotechnologies to build such systems).
Nanodevices, or, more precisely, memory or memristive devices, have been
proposed for the implementation of synaptic functions, offering the required
features and integration in a single component. In this paper, we demonstrate
that the basic physics involved in the filamentary switching of electrochemical
metallization cells can reproduce important biological synaptic functions that
are key mechanisms for information processing and storage. The transition from
short- to long-term plasticity has been reported as a direct consequence of
filament growth (i.e., increased conductance) in filamentary memory devices. In
this paper, we show that a more complex filament shape, such as dendritic paths
of variable density and width, can permit the short- and long-term processes to
be controlled independently. Our solid-state device is strongly analogous to
biological synapses, as indicated by interpreting the results within the
framework of a phenomenological model developed for biological synapses. We
describe a single memristive element with a rich set of features that will
benefit future neuromorphic hardware systems.
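The short- to long-term transition described above can be illustrated with a minimal phenomenological sketch. This is not the paper's model: the class name, time constants, and thresholds below are illustrative assumptions. A volatile "short-term" conductance decays between pulses, while closely spaced pulses push it over a threshold and consolidate part of it into a non-volatile "long-term" conductance, mimicking the filament-growth behavior the abstract describes.

```python
# Hedged sketch of an STP-to-LTP synapse model; all names and constants
# are assumptions for illustration, not taken from the paper.
import math

class FilamentarySynapse:
    def __init__(self, tau=50.0, w_step=0.2, ltp_threshold=0.8, consolidation=0.05):
        self.tau = tau                      # volatility time constant (ms, assumed)
        self.w_step = w_step                # conductance increment per pulse
        self.ltp_threshold = ltp_threshold  # short-term level triggering consolidation
        self.consolidation = consolidation  # fraction transferred to long-term weight
        self.w_short = 0.0                  # volatile (short-term) conductance
        self.w_long = 0.0                   # non-volatile (long-term) conductance
        self.t_last = 0.0

    def pulse(self, t):
        """Apply a programming pulse at time t (ms)."""
        dt = t - self.t_last
        # Short-term component decays exponentially between pulses (volatility).
        self.w_short *= math.exp(-dt / self.tau)
        self.w_short += self.w_step
        # Frequent stimulation pushes w_short over threshold -> consolidation.
        if self.w_short > self.ltp_threshold:
            self.w_long += self.consolidation * self.w_short
        self.t_last = t

    def conductance(self, t):
        dt = t - self.t_last
        return self.w_long + self.w_short * math.exp(-dt / self.tau)

# Sparse pulses: the volatile component relaxes back (short-term behavior).
syn = FilamentarySynapse()
for t in (0, 500, 1000):
    syn.pulse(t)
sparse_retained = syn.conductance(2000)

# Dense pulse train: the long-term component builds up and is retained.
syn2 = FilamentarySynapse()
for t in range(0, 200, 10):
    syn2.pulse(t)
dense_retained = syn2.conductance(1200)
print(sparse_retained, dense_retained)
```

Under these assumed parameters, the sparsely stimulated synapse retains almost no conductance after relaxation, while the densely stimulated one retains a substantial long-term component, which is the qualitative distinction the abstract draws.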
A Survey of Prediction and Classification Techniques in Multicore Processor Systems
In multicore processor systems, the ability to accurately predict future behavior opens optimization opportunities that could not otherwise be exploited. For example, an oracle able to predict a given application's behavior on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior, which novel optimization techniques can exploit across all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques for prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
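The survey's motivating example, a predictor directing the power manager's DVFS mode, can be sketched in a few lines. The predictor here is a simple exponentially weighted moving average; real systems use far richer ML models, and the utilization thresholds and mode table are illustrative assumptions, not values from the survey.

```python
# Hedged sketch: a workload predictor driving DVFS mode selection.
# Frequency/power values and thresholds are assumed for illustration.

DVFS_MODES = {            # (frequency GHz, relative power) -- assumed values
    "low":    (0.8, 0.3),
    "medium": (1.6, 0.6),
    "high":   (2.4, 1.0),
}

class EWMAPredictor:
    """Predict next-interval CPU utilization from observed history."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.estimate = 0.0

    def observe(self, utilization):
        # Exponentially weighted moving average of recent utilization.
        self.estimate = self.alpha * utilization + (1 - self.alpha) * self.estimate

    def predict(self):
        return self.estimate

def choose_mode(predicted_util):
    """Proactively pick the lowest mode expected to meet predicted demand."""
    if predicted_util < 0.3:
        return "low"
    if predicted_util < 0.7:
        return "medium"
    return "high"

predictor = EWMAPredictor()
trace = [0.1, 0.15, 0.2, 0.8, 0.9, 0.85, 0.2, 0.1]
modes = []
for util in trace:
    predictor.observe(util)
    modes.append(choose_mode(predictor.predict()))
print(modes)
```

The point of the sketch is the proactive loop: the mode for the next interval is chosen from the prediction rather than from the last observed value alone, so the governor ramps up as the burst builds and ramps back down as it subsides.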
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. It, in turn, improves the IC yield and reduces the manufacturing
turnaround time. This paper thoroughly reviews the AI/ML automated approaches
introduced in the past towards VLSI design and manufacturing. Moreover, we
discuss the scope of AI/ML applications in the future at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.
Embracing Visual Experience and Data Knowledge: Efficient Embedded Memory Design for Big Videos and Deep Learning
Energy-efficient memory designs are becoming increasingly important, especially for applications related to mobile video technology and machine learning. The growing popularity of smartphones, tablets, and other mobile devices has created an exponential demand for video applications in today's society. When mobile devices display video, the embedded video memory within the device consumes a large share of the total system power. This issue has created the need for power-quality tradeoff techniques that enable good-quality video output while simultaneously reducing power consumption. Similarly, power-efficiency issues have arisen within the area of machine learning, especially in applications requiring large and fast computation, such as neural networks. Using the accumulated data knowledge from various machine learning applications, there is now the potential to create more intelligent memory capable of an optimized trade-off between energy efficiency, area overhead, and classification accuracy on the learning systems. In this dissertation, a review of recently completed works involving video and machine learning memories is covered. Based on results collected from a variety of methods, including subjective trials, discovered data-mining patterns, software simulations, and hardware power and performance tests, the presented memories provide novel ways to significantly enhance power efficiency for future memory devices. An overview of related works, especially the relevant state-of-the-art research, is referenced for comparison in order to produce memory design methodologies that exhibit optimal quality, low implementation overhead, and maximum power efficiency.
National Science Foundation; ND EPSCoR; Center for Computationally Assisted Science and Technology (CCAST)
Publications of the Jet Propulsion Laboratory 1983
The Jet Propulsion Laboratory (JPL) bibliography describes and indexes by primary author the externally distributed technical reporting, released during calendar year 1983, that resulted from scientific and engineering work performed or managed by the Jet Propulsion Laboratory. Three classes of publications are included: JPL Publications (81-, 82-, 83-series, etc.), in which the information is complete for a specific accomplishment; articles published in the open literature; and articles from the quarterly Telecommunications and Data Acquisition (TDA) Progress Report (42-series). Each collection of articles in the last class presents a periodic survey of current accomplishments by the Deep Space Network as well as other developments in Earth-based radio technology.