
    Theory of diffusive light scattering cancellation cloaking

    We report on a new concept for cloaking objects in the diffusive light regime, using the paradigm of scattering-cancellation and mantle-cloaking techniques. We show numerically that an object can be made completely invisible to diffusive photon density waves by tailoring the diffusivity constant of a spherical shell enclosing it. The photon flow outside the object and its cloaking shell then behaves as if the object were not present. Diffusive light invisibility may open new vistas in hiding hot spots in infrared thermography or tissue imaging.
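
    The abstract does not spell the cancellation condition out, but in the static limit the diffusion equation maps onto electrostatics, with the diffusivity playing the role of permittivity, so the shell diffusivity can be pictured as the value that nulls the dipole term of a coated sphere. The sketch below solves that textbook condition numerically; the parameter values, variable names and the quasi-static simplification are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.optimize import brentq

        def dipole_numerator(D_s, D_c, D_0, a, b):
            """Numerator of the quasi-static dipole polarizability of a coated
            sphere (core diffusivity D_c, shell D_s, background D_0); setting it
            to zero is the scattering-cancellation condition."""
            f = (a / b) ** 3                      # core-to-shell volume fraction
            return ((D_s - D_0) * (D_c + 2 * D_s)
                    + f * (D_c - D_s) * (D_0 + 2 * D_s))

        # Illustrative (assumed) parameters: a poorly diffusive core in a unit background
        D_0, D_c = 1.0, 0.1        # background and core diffusivities (arbitrary units)
        a, b = 1.0, 1.3            # core radius and shell outer radius

        # The numerator is quadratic in D_s with exactly one positive root;
        # bracket it widely and solve.
        D_s_cloak = brentq(dipole_numerator, 1e-6, 100.0, args=(D_c, D_0, a, b))
        print(f"shell diffusivity cancelling the dipole term: D_s = {D_s_cloak:.3f}")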

    The Euclidean distance degree of smooth complex projective varieties

    We obtain several formulas for the Euclidean distance degree (ED degree) of an arbitrary nonsingular variety in projective space: in terms of Chern and Segre classes, Milnor classes, and Chern-Schwartz-MacPherson classes, as well as an extremely simple formula equating the ED degree of X with the Euler characteristic of an open subset of X.
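
    The "extremely simple formula" alluded to is presumably of the following shape (a hedged reconstruction from the abstract's wording, with notation assumed here: Q the isotropic quadric, H a general hyperplane, chi the topological Euler characteristic):

        \[
          \operatorname{EDdeg}(X) \;=\; (-1)^{\dim X}\,
          \chi\bigl(X \setminus (Q \cup H)\bigr)
        \]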

    Time-dependent transport of a localized surface plasmon through a linear array of metal nanoparticles: Precursor and normal mode contributions

    We theoretically investigate the time-dependent transport of a localized surface plasmon excitation through a linear array of identical, equidistantly spaced metal nanoparticles. Two different signals propagating through the array are found: one traveling with the group velocity of the surface plasmon polaritons of the system and damped exponentially, and the other running at the speed of light and decaying in a power-law fashion, as x^{-1} and x^{-2} for the transverse and longitudinal polarizations, respectively. The latter resembles the Sommerfeld-Brillouin forerunner and has not been identified in previous studies. Its contribution dominates the plasmon transport at large distances. In addition, even though this signal spreads in the propagation direction and has a lateral extent larger than the wavelength, the field profile close to the chain axis does not change with distance, indicating that this part of the signal remains confined to the array.
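
    The competition between the two signals can be pictured with a toy comparison of an exponentially damped normal-mode term against a power-law forerunner; the amplitudes and damping length below are purely illustrative assumptions, not values from the paper.

        import numpy as np

        # Assumed, illustrative parameters (not from the paper)
        ell = 10.0                   # SPP damping length, in units of the particle spacing
        A_spp, A_pre = 1.0, 0.05     # relative amplitudes of the two contributions

        x = np.linspace(1.0, 200.0, 400)    # distance along the chain, in spacings
        spp = A_spp * np.exp(-x / ell)      # normal-mode (SPP) signal, damped exponentially
        pre_T = A_pre / x                   # forerunner, transverse polarization ~ x^{-1}
        pre_L = A_pre / x**2                # forerunner, longitudinal polarization ~ x^{-2}

        # First distance at which the x^{-1} forerunner overtakes the damped SPP signal
        crossover = x[np.argmax(pre_T > spp)]
        print(f"forerunner dominates beyond x ~ {crossover:.0f} spacings")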

    Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing

    Implementing high-quality computing within a limited power budget is the key factor driving very-large-scale integration (VLSI) chip design forward. This work introduces several low-power VLSI design techniques for state-of-the-art computing. From the power-supply viewpoint, conventional on-chip voltage regulators built from analog blocks impose large power and area overheads on computational chips. Motivated by this, a digital switchable-pin method that dynamically regulates power at low circuit cost is proposed, so that computation executes with a stable supply voltage. For the multiplier, one of the most widely used and time-consuming arithmetic units, operating in the logarithmic domain offers better computation latency, power, and area than the binary domain. However, the conversion error it introduces reduces the reliability of the subsequent computation (e.g., multiplication and division). This work proposes a fast calibration method that suppresses the conversion error, together with its VLSI implementation. The proposed logarithmic converter can be supplied by DC power to achieve fast conversion, or by clocked power to reduce the power dissipated during conversion. Moving beyond traditional computation methods and widely used static logic, a neuron-like cell is also studied. Using logic based on multiple-input floating-gate (MIFG) metal-oxide-semiconductor field-effect transistors (MOSFETs), a 32-bit, 16-operation arithmetic logic unit (ALU) with zipped decoding and a feedback loop is designed. The proposed ALU reduces switching power and, thanks to its coupling capacitors, has a stronger drive-in capability than a static-logic ALU. In addition, recent neural computations pose serious challenges to digital VLSI implementation because of their heavy matrix multiplications and non-linear functions. An analog VLSI design that is compatible with an external digital environment is proposed for long short-term memory (LSTM) networks. The fully analog network computes much faster and with higher energy efficiency than its digital counterpart.
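
    As one concrete illustration of why logarithmic-domain multiplication is cheap but introduces a conversion error, the classic Mitchell approximation replaces log2(1+m) by m, turning a multiplication into an addition of (exponent, mantissa) pairs. The sketch below is generic background only, not the calibration method proposed in this work.

        def mitchell_log2(x: int) -> float:
            """Mitchell's approximation of log2(x): take the position k of the
            leading one and use the remaining bits m as the fractional part,
            i.e. k + m instead of the exact k + log2(1 + m)."""
            k = x.bit_length() - 1            # exponent: index of the leading 1
            m = (x - (1 << k)) / (1 << k)     # mantissa in [0, 1)
            return k + m

        def mitchell_multiply(a: int, b: int) -> float:
            """Approximate a*b by adding the approximate logarithms and applying
            the inverse approximation 2**(k+m) ~ (1+m)*2**k."""
            s = mitchell_log2(a) + mitchell_log2(b)
            k, m = int(s), s - int(s)
            return (1 + m) * (1 << k)

        a, b = 183, 77
        approx, exact = mitchell_multiply(a, b), a * b
        print(f"{a}*{b}: exact={exact}, Mitchell={approx:.0f}, "
              f"error={100 * (exact - approx) / exact:.1f}%")   # never worse than ~11.1%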

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time-consuming. This paper proposes the baseline for a new methodology, based on microprocessor error-probability profiling, that aims at estimating fault-injection results without the need for a typical fault-injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static, very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability-evaluation problem; it also has the potential to become the backbone of new tools that help engineers choose the hardware and software architecture that structurally maximizes the probability of correct execution of the target software.
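
    A drastically simplified sketch of the second idea, multiplying per-instruction success probabilities to score a program, is given below. The probability table and the instruction stream are invented for illustration, and a real static analysis would weight basic blocks by expected execution counts rather than walk a concrete trace.

        # Per-instruction success probabilities, as would be produced by the
        # one-time fault-injection characterization (values invented here).
        per_instruction_success = {
            "add": 0.9999999, "ld": 0.9999995, "st": 0.9999996, "beq": 0.9999998,
        }

        def application_success_probability(stream):
            """Probability that every instruction in the stream executes correctly,
            assuming independent per-instruction failure events."""
            p = 1.0
            for opcode in stream:
                p *= per_instruction_success[opcode]
            return p

        stream = ["ld", "add", "add", "st", "beq"] * 10_000   # hypothetical workload
        print(f"estimated probability of correct execution: "
              f"{application_success_probability(stream):.4f}")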

    Overview of Hydra: a concurrent language for synchronous digital circuit design

    Hydra is a computer hardware description language that integrates several kinds of software tools (simulation, netlist generation and timing analysis) within a single circuit specification. The design language is inherently concurrent, and it offers black-box abstraction and general design patterns that simplify the design of circuits with regular structure. Hydra specifications are concise, allowing the complete design of a computer system as a digital circuit within a few pages. This paper discusses the motivations behind Hydra and illustrates the system with a significant portion of the design of a basic RISC processor.
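
    The key idea, deriving several tools from one specification, can be sketched outside Hydra itself; the toy Python below (invented names, not Hydra's actual Haskell API) shows the same half-adder description driving both a simulator and a netlist generator.

        class Simulate:
            """Interpret gate applications as boolean evaluation."""
            def gate(self, name, *inputs):
                if name == "and2":
                    return inputs[0] and inputs[1]
                if name == "xor2":
                    return inputs[0] != inputs[1]
                raise ValueError(name)

        class Netlist:
            """Interpret gate applications as netlist construction."""
            def __init__(self):
                self.nets = []
            def gate(self, name, *inputs):
                out = f"n{len(self.nets)}"
                self.nets.append((name, inputs, out))
                return out

        def half_adder(m, a, b):
            """One circuit specification, reusable with any interpreter m."""
            return m.gate("xor2", a, b), m.gate("and2", a, b)   # (sum, carry)

        print(half_adder(Simulate(), True, True))               # -> (False, True)
        nl = Netlist()
        half_adder(nl, "a", "b")
        print(nl.nets)                                          # gate-level netlist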

    Options for Denormal Representation in Logarithmic Arithmetic

    Economical hardware often uses a FiXed-point Number System (FXNS), whose constant absolute precision is acceptable for many signal-processing algorithms. The almost-constant relative precision of the more expensive Floating-Point (FP) number system simplifies design, for example by eliminating worries about FXNS overflow, because the range of FP is much larger than that of FXNS for the same word size; however, primitive FP introduces another problem: underflow. The conventional Signed Logarithmic Number System (SLNS) offers range and precision similar to FP with much better performance (in terms of power, speed and area) for multiplication, division, powers and roots. Moderate-precision addition in SLNS uses table lookup, with properties similar to FP (including underflow). This paper proposes a new number system, called the Denormal LNS (DLNS), which is a hybrid of the properties of FXNS and SLNS. The inspiration for DLNS comes from the denormal (aka subnormal) numbers found in IEEE-754 (which provide gradual underflow) and the ”-law often used for speech encoding; the novel DLNS circuit here allows arithmetic to be performed directly on such encoded data. The proposed approach allows customizing the range in which gradual underflow occurs: a wide gradual-underflow range acts like FXNS, a narrow one acts like SLNS. The DLNS approach is most affordable for applications involving addition, subtraction and multiplication by constants, such as the Fast Fourier Transform (FFT). Simulation of an FFT application shows that a moderate gradual underflow decreases bit-switching activity by 15% compared to underflow-free SLNS, at the cost of increasing application error by 30%. DLNS reduces switching activity 5% to 20% more than an abruptly underflowing SLNS with half the error. Synthesis shows the novel circuit consists primarily of traditional SLNS addition and subtraction tables, with additional datapaths that allow the novel ALU to act on conventional SLNS as well as DLNS and mixed data, for a worst-case area overhead of 26%. For similar range and precision, simulation of Taylor-series computations suggests that subnormal values in DLNS behave similarly to those in the IEEE-754 FP standard. Unlike SLNS, the DLNS approach is quite costly for general (non-constant) multiplication, division and roots. To overcome this difficulty, this paper proposes two variations called Denormal Mitchell LNS (DMLNS) and Denormal Offset Mitchell LNS (DOMLNS), in which the well-known Mitchell's method brings the cost of general multiplication, division and roots closer to that of SLNS. Taylor-series computations suggest that subnormal values in DMLNS and DOMLNS also behave similarly to those in the IEEE-754 FP standard. Synthesis shows that DMLNS and DOMLNS have average area overheads of 25% and 17%, respectively, compared to an equivalent SLNS 5-operation unit.
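
    The gradual-underflow idea behind DLNS can be pictured with a ”-law-style toy code: logarithmic (SLNS-like) above a threshold, linear (fixed-point-like) below it, so tiny values fade out instead of flushing abruptly to zero. The threshold and the encoding below are illustrative assumptions only and do not reproduce the actual DLNS format or quantization.

        import math

        T = 2.0 ** -8   # assumed boundary between the linear (denormal) and logarithmic ranges

        def encode(x: float) -> float:
            """Logarithmic code for |x| >= T, linear ramp (gradual underflow) below T."""
            if abs(x) >= T:
                return math.copysign(1.0 + math.log2(abs(x) / T), x)
            return x / T          # |code| < 1 marks the denormal region

        def decode(c: float) -> float:
            if abs(c) >= 1.0:
                return math.copysign(T * 2.0 ** (abs(c) - 1.0), c)
            return c * T

        for x in (0.75, T, T / 3, 0.0):
            print(f"x={x:.3e}  code={encode(x):+.4f}  roundtrip={decode(encode(x)):.3e}")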
    • 

    corecore