    Auto-generation of passive scalable macromodels for microwave components using scattered sequential sampling

    This paper presents a method for the automatic construction of stable and passive scalable macromodels for parameterized frequency responses. The method requires very little prior knowledge to build the scalable macromodels, thereby considerably reducing the burden on designers. The proposed method uses an efficient scattered sequential sampling strategy, with as few expensive simulations as possible, to generate accurate macromodels for the system using state-of-the-art scalable macromodeling methods. The scalable macromodels can be used as a replacement for the actual simulator in the overall design process. Pertinent numerical results validate the proposed sequential sampling strategy.
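    For readers unfamiliar with sequential sampling, the following is a minimal Python sketch of the general idea, not the paper's algorithm: a hypothetical expensive_simulation() stands in for the EM solver, an RBF interpolant stands in for the scalable macromodel, and new scattered samples are added where two half-sample surrogates disagree the most.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    """Hypothetical placeholder for a full-wave EM solve at design point x."""
    return np.sin(3.0 * x[0]) * np.cos(2.0 * x[1])        # toy response metric

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(6, 2))                    # small scattered initial design
y = np.array([expensive_simulation(x) for x in X])

tol, budget = 1e-2, 30
while len(X) < budget:
    # Two surrogates built on disjoint halves of the data; their disagreement
    # serves as a cheap error estimate over scattered candidate points.
    half = len(X) // 2
    m1 = RBFInterpolator(X[:half], y[:half])
    m2 = RBFInterpolator(X[half:], y[half:])
    cand = rng.uniform(0.0, 1.0, size=(200, 2))
    gap = np.abs(m1(cand) - m2(cand))
    worst = int(np.argmax(gap))
    if gap[worst] < tol:
        break                                             # surrogate judged accurate enough
    X = np.vstack([X, cand[worst]])                       # spend one more expensive simulation
    y = np.append(y, expensive_simulation(cand[worst]))

macromodel = RBFInterpolator(X, y)                        # cheap replacement for the simulator
```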

    Global modeling approach to the design of an MMIC amplifier using Ohmic Electrode-Sharing Technology

    An innovative technique for high-density, high-frequency integrated circuit design is proposed. The procedure exploits the potential of a global modeling approach, previously applied only at the device level, enabling the circuit designer to explore flexible layout solutions aimed at a significant reduction in chip size and cost. The new circuit design technique is presented by means of an example consisting of a wide-band amplifier implemented with the recently proposed Ohmic Electrode-Sharing Technology (OEST). The good agreement between experimental and simulated results confirms the validity of the proposed MMIC design approach.

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Scalable Bayesian Non-Negative Tensor Factorization for Massive Count Data

    We present a Bayesian non-negative tensor factorization model for count-valued tensor data and develop scalable inference algorithms (both batch and online) for dealing with massive tensors. Our generative model can handle overdispersed counts as well as infer the rank of the decomposition. Moreover, leveraging a reparameterization of the Poisson distribution as a multinomial facilitates conjugacy in the model and enables simple and efficient Gibbs sampling and variational Bayes (VB) inference updates, with a computational cost that depends only on the number of nonzeros in the tensor. The model also provides nice interpretability for the factors; in our model, each factor corresponds to a "topic". We develop a set of online inference algorithms that allow the model to scale further to massive tensors for which batch inference methods may be infeasible. We apply our framework to diverse real-world applications, such as multiway topic modeling on a scientific publications database, analyzing a political science data set, and analyzing a massive household transactions data set. Comment: ECML PKDD 201
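    As a rough illustration of the Poisson-multinomial augmentation the abstract alludes to, here is a minimal NumPy sketch of one possible Gibbs sampler for a rank-R Poisson CP factorization on a toy sparse count tensor. The model structure (Gamma priors, per-component latent counts) follows the standard Poisson factorization recipe rather than the paper's exact specification, and all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse 3-way count tensor in COO form: (i, j, k) indices and counts.
idx = np.array([[0, 1, 2], [1, 0, 1], [2, 2, 0], [0, 2, 1]])
cnt = np.array([4, 2, 7, 1])
dims, R = (3, 3, 3), 2                        # tensor shape and CP rank
a0, b0 = 1.0, 1.0                             # Gamma(shape, rate) hyperparameters

# Non-negative factor matrices, one per tensor mode.
U = [rng.gamma(a0, 1.0 / b0, size=(d, R)) for d in dims]

for sweep in range(200):
    # 1) Augmentation: split each observed count across the R components with a
    #    multinomial whose probabilities come from the current factor values.
    rates = U[0][idx[:, 0]] * U[1][idx[:, 1]] * U[2][idx[:, 2]]      # (nnz, R)
    probs = rates / rates.sum(axis=1, keepdims=True)
    latent = np.array([rng.multinomial(c, p) for c, p in zip(cnt, probs)])

    # 2) Conditionally conjugate Gamma updates for each mode's factor rows.
    for m in range(3):
        others = [o for o in range(3) if o != m]
        # Total "exposure" of each component contributed by the other two modes.
        rate = b0 + U[others[0]].sum(axis=0) * U[others[1]].sum(axis=0)   # (R,)
        for i in range(dims[m]):
            rows = idx[:, m] == i
            shape = a0 + latent[rows].sum(axis=0)     # only nonzeros contribute
            U[m][i] = rng.gamma(shape, 1.0 / rate)
```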

    Scalable macromodelling methodology for the efficient design of microwave filters

    The complexity of the design of microwave filters increases steadily over the years. General design techniques available in the literature yield relatively good initial designs, but electromagnetic (EM) optimisation is often needed to meet the specifications. Although interesting optimisation strategies exist, they depend on computationally expensive EM simulations. This makes the optimisation process time consuming. Moreover, brute-force optimisation does not provide physical insight into the design and is only applicable to one set of specifications. If the specifications change, the design and optimisation process must be redone. The authors propose a scalable macromodel-based design approach to overcome this. Scalable macromodels can be generated in an automated way. So far the inclusion of scalable macromodels in the design cycle of microwave filters has not been studied. In this study, it is shown that scalable macromodels can be included in the design cycle of microwave filters and re-used in multiple design scenarios at low computational cost. Guidelines to properly generate and use scalable macromodels in a filter design context are given. The approach is illustrated on a state-of-the-art microstrip dual-band bandpass filter with closely spaced pass bands and a complex geometrical structure. The results confirm that scalable macromodels are proper design tools and a valuable alternative to a computationally expensive EM simulator-based design flow.
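    To make the re-use argument concrete, a small hypothetical Python example follows: a closed-form stand-in for a scalable macromodel of a band-pass filter is optimised against two different passband specifications with no further EM simulations. The geometry parameterization and response formula are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def macromodel_s21(geom, freq):
    """Hypothetical scalable macromodel: |S21| of a band-pass filter as a cheap,
    closed-form function of two geometry parameters (centre offset, width scale)."""
    f0 = 2.4 + 0.5 * geom[0]                  # passband centre in GHz
    bw = 0.10 + 0.05 * geom[1]                # passband width in GHz
    return 1.0 / (1.0 + ((freq - f0) / bw) ** 4)

def design(spec_lo, spec_hi, freq=np.linspace(2.0, 3.0, 201)):
    """Tune the geometry so the passband covers [spec_lo, spec_hi] GHz."""
    target = ((freq >= spec_lo) & (freq <= spec_hi)).astype(float)
    cost = lambda g: np.mean((macromodel_s21(g, freq) - target) ** 2)
    return minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead").x

# The same macromodel is re-used for two different specification sets.
print(design(2.35, 2.50))
print(design(2.60, 2.80))
```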

    Statistical framework for video decoding complexity modeling and prediction

    Video decoding complexity modeling and prediction is an increasingly important issue for efficient resource utilization in a variety of applications, including task scheduling, receiver-driven complexity shaping, and adaptive dynamic voltage scaling. In this paper, we present a novel view of this problem based on a statistical framework perspective. We explore the statistical structure (clustering) of the execution time required by each video decoder module (entropy decoding, motion compensation, etc.) in conjunction with complexity features that are easily extractable at encoding time (representing the properties of each module's input source data). For this purpose, we employ Gaussian mixture models (GMMs) and an expectation-maximization algorithm to estimate the joint execution-time/feature probability density function (PDF). A training set of typical video sequences is used for this purpose in an offline estimation process. The obtained GMM representation is used in conjunction with the complexity features of new video sequences to predict the execution time required for the decoding of these sequences. Several prediction approaches are discussed and compared. The potential mismatch between the training set and new video content is addressed by adaptive online joint-PDF re-estimation. An experimental comparison is performed to evaluate the different approaches and compare the proposed prediction scheme with related resource prediction schemes from the literature. The usefulness of the proposed complexity-prediction approaches is demonstrated in an application of rate-distortion-complexity optimized decoding.
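    As a sketch of how a fitted joint GMM can be turned into an execution-time predictor (the general technique, not the paper's exact pipeline), the example below fits a mixture to toy [feature, time] pairs with scikit-learn and predicts the decoding time for a new feature value as the responsibility-weighted conditional mean. The data, feature choice, and component count are made up for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy training data: one complexity feature (e.g. coded-coefficient count, scaled)
# and the decoding time it produced; the joint samples are [feature, time].
feat = rng.uniform(0.0, 1.0, 500)
time = 2.0 * feat + 0.3 * np.sin(6.0 * feat) + rng.normal(0.0, 0.05, 500)
train = np.column_stack([feat, time])

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(train)

def predict_time(f_new):
    """E[time | feature] under the fitted joint GMM: per-component Gaussian
    conditioning, weighted by each component's responsibility for f_new."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    var_f = covs[:, 0, 0]
    # Responsibility of each component for the observed feature alone
    # (the shared 1/sqrt(2*pi) constant cancels in the normalization).
    lik = w * np.exp(-0.5 * (f_new - means[:, 0]) ** 2 / var_f) / np.sqrt(var_f)
    resp = lik / lik.sum()
    # Conditional mean of time given the feature, per component.
    cond = means[:, 1] + covs[:, 1, 0] / var_f * (f_new - means[:, 0])
    return float(resp @ cond)

print(predict_time(0.7))
```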

    Proceedings of the 2nd Computer Science Student Workshop: Microsoft Istanbul, Turkey, April 9, 2011

