    A versatile CMOS transistor array IC for the statistical characterization of time-zero variability, RTN, BTI, and HCI

    Get PDF
    Statistical characterization of CMOS transistor variability phenomena in modern nanometer technologies is key for accurate end-of-life prediction. This paper presents a novel CMOS transistor array chip to statistically characterize the effects of several critical variability sources, such as time-zero variability (TZV), random telegraph noise (RTN), bias temperature instability (BTI), and hot-carrier injection (HCI). The chip integrates 3136 MOS transistors of both pMOS and nMOS types, with eight different sizes. The implemented architecture provides the chip with a high level of versatility, allowing all required tests and attaining the level of accuracy that the characterization of the above-mentioned variability effects requires. Another very important feature of the array is the capability of performing massively parallel aging testing, thus significantly cutting down the time for statistical characterization. The chip has been fabricated in a 1.2-V, 65-nm CMOS technology with a total chip area of 1800 × 1800 µm².

    Deep matrix factorization for trust-aware recommendation in social networks

    Get PDF
    Recent years have witnessed remarkable information overload in online social networks, and social-network-based approaches for recommender systems have been widely studied. The trust information among users in social networks is an important factor for improving recommendation performance. Many successful recommendation tasks are treated as matrix factorization problems. However, the prediction performance of matrix-factorization-based methods largely depends on the initialization of the user and item matrices. To address this challenge, we develop a novel trust-aware approach based on deep learning to alleviate the initialization dependence. First, we propose two deep matrix factorization (DMF) techniques, i.e., linear DMF and non-linear DMF, to extract features from the user-item rating matrix for improving the initialization accuracy. The trust relationship is integrated into the DMF model according to the preference similarity and the derivations of users on items. Second, we exploit a deep marginalized denoising autoencoder (Deep-MDAE) to extract the latent representation in the hidden layer from the trust relationship matrix, which approximates the user factor matrix factorized from the user-item rating matrix. Community regularization is integrated into the joint optimization function to take neighbours' effects into consideration. The results of DMF are applied to initialize the updating variables of Deep-MDAE in order to further improve the recommendation performance. Finally, we validate that the proposed approach outperforms state-of-the-art baselines for recommendation, especially for cold-start users. © 2013 IEEE
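    As context for the initialization dependence discussed above, the sketch below shows plain matrix factorization trained by SGD, where the learned factors start from a random initialization. This is an illustrative toy (all names, sizes, and hyperparameters are assumptions), not the paper's DMF or Deep-MDAE models.

```python
import numpy as np

def matrix_factorization(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Plain MF via SGD over observed entries of the rating matrix R.

    The random init of U and V (controlled here by `seed`) is exactly the
    dependence the trust-aware DMF approach aims to alleviate.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))   # user factor matrix
    V = 0.1 * rng.standard_normal((n_items, k))   # item factor matrix
    observed = np.argwhere(R > 0)                 # treat zeros as missing
    for _ in range(steps):
        for i, j in observed:
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

# Tiny illustrative user-item rating matrix (0 = unrated)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
U, V = matrix_factorization(R)
pred = U @ V.T   # reconstructed ratings, including the missing cells
```

    Running the same code with a different `seed` yields different factors and slightly different predictions for the unobserved cells, which is the initialization sensitivity the paper targets.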

    A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research

    Full text link
    The design of algorithms that generate personalized ranked item lists is a central topic of research in the field of recommender systems. In the past few years, in particular, approaches based on deep learning (neural) techniques have become dominant in the literature, and substantial progress over the state of the art is claimed for all of them. However, indications exist of certain problems in today's research practice, e.g., with respect to the choice and optimization of the baselines used for comparison, raising questions about the published claims. In order to obtain a better understanding of the actual progress, we have tried to reproduce recent results in the area of neural recommendation approaches based on collaborative filtering. The worrying outcome of the analysis of these recent works, all published at prestigious scientific conferences between 2015 and 2018, is that 11 out of the 12 reproducible neural approaches can be outperformed by conceptually simple methods, e.g., ones based on nearest-neighbor heuristics. None of the computationally complex neural methods was actually consistently better than already existing learning-based techniques, e.g., those using matrix factorization or linear models. In our analysis, we discuss common issues in today's research practice, which, despite the many papers published on the topic, have apparently led the field to a certain level of stagnation.
    Comment: Source code and full results available at: https://github.com/MaurizioFD/RecSys2019_DeepLearning_Evaluatio
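    The "conceptually simple methods" mentioned above include item-based nearest-neighbor collaborative filtering. The sketch below is a minimal illustrative ItemKNN scorer on an implicit-feedback matrix (function name and parameters are assumptions, not the authors' exact baseline implementation).

```python
import numpy as np

def item_knn_scores(R, k=2):
    """Rank items for each user from cosine similarity between item columns.

    R is a binary user-item interaction matrix; the score of item j for a
    user is the similarity-weighted sum over the user's interacted items.
    """
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    Rn = R / norms
    S = Rn.T @ Rn                    # item-item cosine similarity
    np.fill_diagonal(S, 0.0)         # an item is not its own neighbor
    for j in range(S.shape[1]):      # keep only the k nearest neighbors
        drop = np.argsort(S[:, j])[:-k]
        S[drop, j] = 0.0
    return R @ S                     # scores used to rank unseen items

# Toy interaction matrix: 3 users, 4 items (1 = interacted)
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
scores = item_knn_scores(R)
```

    Despite having no learned parameters at all, heuristics of roughly this shape were among the baselines that outperformed most of the reproduced neural approaches.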

    Auxiliary-Path-Assisted Digital Linearization of Wideband Wireless Receivers

    Get PDF
    Wireless communication systems in recent years have aimed at increasing data rates by ensuring flexible and efficient use of the radio spectrum. The latest developments in this field have been in the areas of carrier aggregation and cognitive radio. Carrier aggregation is a major component of LTE-Advanced: a number of separate LTE carriers can be combined by mobile network operators to increase peak data rates and overall network capacity. Cognitive radios, on the other hand, allow efficient spectrum usage by locating and using spatially vacant spectral bands. High monolithic integration in these application fields can be achieved by employing receiver architectures such as the wideband direct-conversion receiver topology. This is advantageous from the viewpoint of cost, power consumption, and size. However, many challenges exist; of particular importance is nonlinear distortion arising from analog front-end components such as low-noise amplifiers (LNAs). Nonlinear distortion becomes especially severe when several signals of varying amplitudes are received simultaneously. In such cases, nonlinear distortion stemming from strong signals may deteriorate the reception of the weaker signals and also impair the receiver's spectrum sensing capabilities. Nonlinearity, usually a consequence of dynamic range limitations, degrades performance in wideband multi-operator communication systems, and it will have a notable role in future wireless communication system design. This thesis presents a digital-domain linearization technique that employs a highly nonlinear auxiliary receiver path for nonlinear distortion cancellation. The proposed linearization technique relies on one-time adaptively determined linearization coefficients for cancelling nonlinear distortion. Specifically, we look at cancelling the troublesome in-band third-order intermodulation products using the proposed technique.
The proposed technique can be extended to cancel out both even-order and higher odd-order intermodulation products. Dynamic behavioral models are used to account for RF nonlinearities, including memory effects, which cannot be ignored in the wideband scenario. Since the proposed linearization technique involves the use of two receiver paths, techniques for correcting phase delays between the two paths are also introduced. Simplicity is the hallmark of the proposed linearization technique. It can achieve up to +30 dBm IIP3 performance, with ADC resolution being a major performance bottleneck. It also shows strong tolerance to nonlinearities caused by strong blockers.
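    The core cancellation idea can be sketched in a deliberately simplified, memoryless form: fit a coefficient for a cubic distortion reference once by least squares, then subtract the scaled reference from the main-path output. This toy uses a static third-order model and an idealized reference; the thesis itself uses dynamic behavioral models with memory and derives the reference from a real auxiliary receiver path.

```python
import numpy as np

# Two-tone excitation, as used in IIP3 / intermodulation testing
fs, n = 1.0e6, 4096
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 100e3 * t) + np.cos(2 * np.pi * 110e3 * t)

# Main path: weakly nonlinear front end (assumed memoryless 3rd-order model)
a3 = 0.05
y_main = x + a3 * x**3

# Auxiliary path: strongly nonlinear reference exposing the distortion term
# (idealized here as a clean cubic of the input)
ref = x**3

# One-time least-squares fit of the linear and cubic coefficients
A = np.column_stack([x, ref])
coefs, *_ = np.linalg.lstsq(A, y_main, rcond=None)

# Subtract the estimated third-order distortion from the main-path output
y_lin = y_main - coefs[1] * ref
```

    In this idealized setting the fitted cubic coefficient recovers a3 and the in-band third-order intermodulation products are removed; in the real receiver the reference is itself distorted and delayed, which is why the thesis adds memory modeling and inter-path delay correction.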

    Digital microfluidics on paper

    Get PDF
    This thesis is one of the first reports of digital microfluidics on paper and the first in which the chip's circuit was screen printed onto the paper. The use of the screen printing technique, a low-cost and fast method for electrode deposition, makes the whole chip processing much more aligned with the low-cost choice of paper as a substrate. Functioning chips were developed that were capable of working at voltages as low as 50 V, performing all the digital microfluidics operations: movement, dispensing, merging, and splitting of droplets. Silver ink electrodes were screen printed onto paper substrates, covered by Parylene-C (through vapor deposition) as dielectric and Teflon AF 1600 (through spin coating) as hydrophobic layer. The morphology of different paper substrates, silver inks (with different annealing conditions), and Parylene deposition conditions was studied by optical microscopy, AFM, SEM, and 3D profilometry. Resolution tests for the printing process and electrical characterization of the silver electrodes were also performed. As a showcase of the application potential of these chips as biosensing devices, a colorimetric peroxidase detection test was successfully performed on chip, using 200 nL to 350 nL droplets dispensed from 1 µL drops.

    Interoperability of semantics in news production

    Get PDF

    Micro- and Nanofluidics for Bionanoparticle Analysis

    Get PDF
    Bionanoparticles such as microorganisms and exosomes are recognized as important targets for clinical applications, food safety, and environmental monitoring. Other nanoscale biological particles, including liposomes, micelles, and functionalized polymeric particles, are widely used in nanomedicines. The recent development of microfluidic and nanofluidic technologies has enabled the separation and analysis of these species in lab-on-a-chip platforms, but there are still many challenges to address before these analytical tools can be adopted in practice. For example, the complex matrices within which these species reside create a high background for their detection. Their small dimensions and often low concentrations demand creative strategies to amplify the sensing signal and enhance the detection speed. This Special Issue aims to collect recent discoveries and developments of micro- and nanofluidic strategies for the processing and analysis of biological nanoparticles. The collection of papers will hopefully bring out more innovative ideas and fundamental insights to overcome the hurdles faced in the separation and detection of bionanoparticles.