Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their performance metrics. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures
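The abstract refers to photonic neurons and a candidate interconnection architecture. As a purely illustrative aid (not a model taken from the article), the following sketch mimics one layer of a broadcast-and-weight style photonic neural network: every neuron weights the shared wavelength channels, sums them on a photodetector, and re-emits through a saturating electro-optic nonlinearity. All shapes, weights, and the nonlinearity are assumptions chosen for the example.

```python
# Toy model of a broadcast-and-weight style photonic neuron layer:
# each neuron receives all wavelength channels, applies a tunable weight
# per channel, sums them on a photodetector, and passes the total power
# through a saturating (sigmoid-like) electro-optic nonlinearity.
# Weights, sizes, and the transfer function are illustrative assumptions.
import numpy as np

def photodetector_sum(weights, channel_powers):
    """Weighted sum of per-wavelength optical powers (weight bank + detector)."""
    return weights @ channel_powers

def electro_optic_nonlinearity(x, bias=0.5, gain=10.0):
    """Saturating transfer function standing in for a laser/modulator response."""
    return 1.0 / (1.0 + np.exp(-gain * (x - bias)))

def photonic_layer(weight_bank, channel_powers):
    """One layer: every neuron weights the broadcast WDM channels and re-emits."""
    summed = photodetector_sum(weight_bank, channel_powers)
    return electro_optic_nonlinearity(summed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels, n_neurons = 4, 3
    weight_bank = rng.uniform(0.0, 1.0, size=(n_neurons, n_channels))  # tunable filters
    inputs = rng.uniform(0.0, 1.0, size=n_channels)                    # input channel powers
    print(photonic_layer(weight_bank, inputs))
```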
The emergence of a new supercomputer architecture
July 1990. Includes bibliographical references (p. 25-26). Alan N. Afuah, James M. Utterback.
A review of High Performance Computing foundations for scientists
The growth of computational capabilities has made simulation emerge as a third discipline of Science, lying midway between the experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which would otherwise not be accessible, helps to improve experiments, and provides new insights into the systems being analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues rather than on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch how standard computers and supercomputers work, introduce distributed computing, and discuss essential aspects to take into account when running scientific calculations on computers.
Comment: 33 pages
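As a minimal sketch of the kind of parallelism such a review covers, the example below splits an embarrassingly parallel scientific calculation (a Monte Carlo estimate of pi) across CPU cores with Python's standard multiprocessing module. The sample and worker counts are arbitrary illustrative choices, not values from the review.

```python
# Minimal sketch of splitting an embarrassingly parallel calculation
# (Monte Carlo estimate of pi) across CPU cores with multiprocessing.
# Sample counts and worker counts are arbitrary illustrative choices.
import random
from multiprocessing import Pool

def count_hits(n_samples: int) -> int:
    """Count random points in the unit square that fall inside the quarter circle."""
    rng = random.Random()
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers = 4
    samples_per_worker = 250_000
    with Pool(n_workers) as pool:
        hits = pool.map(count_hits, [samples_per_worker] * n_workers)
    pi_estimate = 4.0 * sum(hits) / (n_workers * samples_per_worker)
    print(f"pi ~ {pi_estimate:.4f}")
```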
Memcomputing: a computing paradigm to store and process information on the same physical platform
In present-day technology, the storing and processing of information occur in physically distinct regions of space. Not only does this result in space limitations; it also translates into unwanted delays in retrieving and processing the relevant information. There is, however, a class of two-terminal passive circuit elements with memory -- memristive, memcapacitive and meminductive systems, collectively called memelements -- that perform both
information processing and storing of the initial, intermediate and final
computational data on the same physical platform. Importantly, the states of
these memelements adjust to input signals and provide analog capabilities
unavailable in standard circuit elements, resulting in adaptive circuitry and providing massively parallel analog computation. All these features are
tantalizingly similar to those encountered in the biological realm, thus
offering new opportunities for biologically-inspired computation. Of particular
importance is the fact that these memelements emerge naturally in nanoscale
systems, and are therefore a consequence and a natural by-product of the
continued miniaturization of electronic devices. We discuss the various possibilities offered by memcomputing and the criteria that need to be satisfied to realize this paradigm, and we provide an example showing the solution of the shortest-path problem and demonstrating the healing property of the solution path.
Comment: The first part of this paper has been published in Nature Physics 9, 200-202 (2013). The second part has been expanded and is now included in arXiv:1304.167
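The abstract mentions a shortest-path example. The following is a toy, purely illustrative simulation rather than the authors' circuit: a small resistive network in which each edge conductance strengthens in proportion to the current it carries and relaxes otherwise, so that conductance gradually concentrates on the shorter of two source-sink paths. The graph, update rule, and rate constants are assumptions made for the sketch.

```python
# Toy "current-strengthens-conductance" network, inspired by (not taken from)
# the memcomputing shortest-path example: conductance accumulates on the
# shorter of two parallel paths between source and sink.
import numpy as np

edges = [(0, 1), (1, 3), (0, 2), (2, 4), (4, 3)]  # short path 0-1-3, long path 0-2-4-3
g = np.ones(len(edges))            # initial memelement conductances (arbitrary)
n_nodes, source, sink = 5, 0, 3
alpha, decay = 0.5, 0.2            # strengthening and relaxation rates (arbitrary)

def solve_voltages(g):
    """Solve Kirchhoff's equations with V[source] = 1 and V[sink] = 0."""
    L = np.zeros((n_nodes, n_nodes))
    for (u, v), gc in zip(edges, g):
        L[u, u] += gc; L[v, v] += gc
        L[u, v] -= gc; L[v, u] -= gc
    V = np.zeros(n_nodes)
    V[source] = 1.0
    free = [i for i in range(n_nodes) if i not in (source, sink)]
    fixed = [source, sink]
    rhs = -L[np.ix_(free, fixed)] @ V[fixed]
    V[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return V

for step in range(30):
    V = solve_voltages(g)
    currents = np.array([gc * abs(V[u] - V[v]) for (u, v), gc in zip(edges, g)])
    g = np.clip(g + alpha * currents - decay * g, 0.01, None)  # current-driven update

for (u, v), gc in zip(edges, g):
    print(f"edge {u}-{v}: conductance {gc:.2f}")   # short-path edges end up far stronger
```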
Many is beautiful : commoditization as a source of disruptive innovation
Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2003. Includes bibliographical references (leaves 44-45). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
The expression "disruptive technology" is now firmly embedded in the modern business lexicon. The mental model summarized by this concise phrase has great explanatory power for ex-post analysis of many revolutionary changes in business. Unfortunately, this paradigm can rarely be applied prescriptively. The classic formulation of a "disruptive technology" sheds little light on potential sources of innovation. This thesis seeks to extend this analysis by suggesting that many important disruptive technologies arise from commodities. The sudden availability of a high-performance factor input at a low price often enables innovation in adjacent market segments. The thesis suggests five main reasons that commodities spur innovation:
** The emergence of a commodity collapses competition to the single dimension of price. Sudden changes in factor prices create new opportunities for supply-driven innovation. Low prices enable innovators to substitute quantity for quality.
** The price/performance curve of a commodity creates an attractor that promotes demand aggregation.
** Commodities emerge after the establishment of a dominant design. Commodities have defined and stable interfaces. Well-developed tool sets and experienced developer communities are available to work with commodities, decreasing the price of experimentation.
** Distributed architectures based on a large number of simple, redundant components offer more predictable performance. Systems based on a small number of high-performance components will have a higher standard deviation for uptime than high-granularity systems based on large numbers of low-power components.
** Distributed architectures are much more flexible than low-granularity systems. Large integrated facilities often provide cost advantages when operating at the Minimum Efficient Scale of production. However, distributed architectures that can efficiently change production levels over time may be a superior solution based on the ability to adapt to changing market demand patterns.
The evolution of third-generation bus architectures in personal computers provides a comprehensive example of commodity-based disruption, incorporating all five forces.
by Richard Ellert Willey. S.M.M.O.T.
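The fourth point above is a statistical claim: with independent failures, delivered capacity (and hence uptime) varies less in a system built from many small redundant components than in one built from a few large components. The sketch below, using invented component counts and failure probabilities, simulates both designs and compares the standard deviation of the delivered fraction of capacity.

```python
# Toy illustration of the granularity/uptime claim: many small components
# deliver a much less variable fraction of total capacity than a few large
# ones, assuming independent failures. Counts and failure rates are invented.
import random
import statistics

def delivered_capacity(n_components: int, p_fail: float, rng: random.Random) -> float:
    """Fraction of total capacity delivered when each component fails independently."""
    up = sum(1 for _ in range(n_components) if rng.random() > p_fail)
    return up / n_components

def simulate(n_components: int, p_fail: float, trials: int = 10_000, seed: int = 1):
    rng = random.Random(seed)
    samples = [delivered_capacity(n_components, p_fail, rng) for _ in range(trials)]
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    for label, n in [("few large components", 4), ("many small components", 400)]:
        mean, std = simulate(n, p_fail=0.05)
        print(f"{label:>24}: mean capacity {mean:.3f}, std dev {std:.4f}")
```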
The role of graphics super-workstations in a supercomputing environment
A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful and high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).