
    Construction and applications of the Dirichlet-to-Neumann operator in transmission line modeling

    The Dirichlet-to-Neumann (DtN) operator is a useful tool in the characterization of interconnect structures. In combination with the Method of Moments, it can be used to calculate the per-unit-length transmission line parameters of multiconductor lines, or to directly determine the internal impedance of conductors. This paper presents a new and fast calculation method for the DtN boundary operator in the important case of rectangular structures, based on the superposition of parallel-plate waveguide modes. Especially for its non-differential form, some numerical issues need to be addressed. It is further explained how the DtN operator can be determined for composite geometries. The theory is illustrated with some numerical examples.

    Eigenmode-based capacitance calculations with applications in passivation layer design

    The design of high-speed metallic interconnects such as microstrips requires the correct characterization of both the conductors and the surrounding dielectric environment, in order to accurately predict their propagation characteristics. A fast boundary integral equation approach is obtained by modeling all materials as equivalent surface charge densities in free space. The capacitive behavior of a finite dielectric environment can then be determined by means of a transformation matrix, relating these charge densities to the boundary value of the electric potential. In this paper, a new calculation method is presented for the important case in which the dielectric environment is composed of homogeneous rectangles. The method, based on a surface charge expansion in terms of the Robin eigenfunctions of the considered rectangles, is not only more efficient than traditional methods, but also more accurate, as shown in some numerical experiments. As an application, the design and behavior of a microstrip passivation layer is treated in some detail.

    Named entity recognition on Flemish audio-visual and newspaper archives


    Lifted rule injection for relation embeddings

    Methods based on representation learning currently hold the state of the art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating a few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.
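
The lifted ordering idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and toy embeddings are made up, and a real model would learn the embeddings jointly with this penalty.

```python
# Hedged sketch: a lifted implication-rule penalty on relation embeddings.
# Names (to_boolean_space, implication_loss) and values are illustrative.
import math

def to_boolean_space(v):
    """Map a raw entity-tuple embedding into (0, 1)^d via the sigmoid,
    approximating the Boolean space mentioned in the abstract."""
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def implication_loss(rel_premise, rel_conclusion):
    """Lifted penalty for the rule premise => conclusion: encourage a
    component-wise partial order rel_premise <= rel_conclusion, so any
    tuple scored highly by the premise is also scored highly by the
    conclusion, without enumerating entity tuples (no propositionalization)."""
    return sum(max(0.0, p - q) for p, q in zip(rel_premise, rel_conclusion))

# Toy rule: "is_parent_of" should imply "is_related_to".
r_parent  = [0.9, -0.2, 0.4]
r_related = [1.1,  0.1, 0.3]   # violates the ordering in the last component
loss = implication_loss(r_parent, r_related)
print(round(loss, 2))  # 0.1, contributed by the third component
```

Because the penalty involves only the two relation vectors, its cost is independent of the number of entity tuples, which is what makes the approach scale where propositionalization does not.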

    Frequency-dependent substrate characterization via an iterative pole search algorithm

    The characterization of frequency-dependent material properties is an important issue in today's high-speed interconnect design. This letter presents a practical method to determine the complex permittivity of a substrate material, by combining measurements with simulations. A rational permittivity model is determined by searching for its poles and residues using an iterative optimization method. Its accuracy is verified by comparing coplanar waveguide measurements with simulations based on the new material model.
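
The rational model mentioned above takes a standard pole-residue form. The sketch below only evaluates such a model; the pole and residue values are invented for illustration and are not fitted to any measurement.

```python
# Hedged sketch: evaluating a rational (pole-residue) permittivity model of
# the kind produced by an iterative pole search. Numbers are illustrative.
import cmath

def rational_permittivity(freq_hz, eps_inf, poles, residues):
    """eps(s) = eps_inf + sum_k residues[k] / (s - poles[k]),  s = j*2*pi*f.
    Stable real poles (or complex-conjugate pairs) keep the model causal."""
    s = 1j * 2 * cmath.pi * freq_hz
    return eps_inf + sum(r / (s - p) for p, r in zip(poles, residues))

# A single real pole gives a Debye-like relaxation term (toy values).
eps = rational_permittivity(1e9, 3.8, poles=[-2e10], residues=[4e9])
print(eps.real, -eps.imag)  # relative permittivity and loss term at 1 GHz
```

An iterative pole search would repeatedly adjust `poles` and `residues` to minimize the mismatch between this model and the measured response; only the model evaluation is shown here.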

    Character-level Recurrent Neural Networks in Practice: Comparing Training and Sampling Schemes

    Recurrent neural networks are nowadays successfully used in an abundance of applications, going from text, speech and image processing to recommender systems. Backpropagation through time is the algorithm that is commonly used to train these networks on specific tasks. Many deep learning frameworks have their own implementation of training and sampling procedures for recurrent neural networks, while there are in fact multiple other possibilities to choose from and other parameters to tune. In existing literature this is very often overlooked or ignored. In this paper we therefore give an overview of possible training and sampling schemes for character-level recurrent neural networks to solve the task of predicting the next token in a given sequence. We test these different schemes on a variety of datasets, neural network architectures and parameter settings, and formulate a number of take-home recommendations. The choice of training and sampling scheme turns out to be subject to a number of trade-offs, such as training stability, sampling time, model performance and implementation effort, but is largely independent of the data. Perhaps the most surprising result is that transferring hidden states for correctly initializing the model on subsequences often leads to unstable training behavior depending on the dataset.
    Comment: 23 pages, 11 figures, 4 tables
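
The contrast between resetting and transferring hidden states across subsequences can be sketched as follows. This is a toy illustration of the scheme choice only: the "RNN" step is a deterministic stand-in, not a trained cell, and the function names are invented.

```python
# Hedged sketch: two hidden-state policies for processing a corpus in
# fixed-length subsequences, as compared in the paper. toy_rnn_step is a
# stand-in for a real recurrent cell; only the state handling matters here.

def make_subsequences(text, length):
    """Cut the corpus into consecutive training subsequences of fixed length."""
    return [text[i:i + length] for i in range(0, len(text) - length, length)]

def toy_rnn_step(hidden, char):
    """Stand-in for a recurrent cell: fold the character into the state."""
    return (hidden * 31 + ord(char)) % 10_007

def run_epoch(text, length, transfer_state):
    """Process each subsequence, either resetting the hidden state to zero
    or transferring it from the previous subsequence (the scheme the paper
    finds can destabilize training on some datasets)."""
    hidden = 0
    states = []
    for seq in make_subsequences(text, length):
        if not transfer_state:
            hidden = 0  # re-initialize for every subsequence
        for ch in seq:
            hidden = toy_rnn_step(hidden, ch)
        states.append(hidden)
    return states

corpus = "hello world, hello world"
print(run_epoch(corpus, 5, transfer_state=False))
print(run_epoch(corpus, 5, transfer_state=True))
```

With `transfer_state=True` each subsequence starts from the state left by the previous one, which better initializes the model mid-text but couples consecutive updates; with `False` every subsequence starts cold, trading context for independence.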

    Topical word importance for fast keyphrase extraction

    We propose an improvement on a state-of-the-art keyphrase extraction algorithm, Topical PageRank (TPR), which incorporates topical information from topic models. While the original algorithm requires a random walk for each topic in the topic model being used, ours is independent of the topic model, computing only a single PageRank per text regardless of the number of topics in the model. This drastically increases speed and makes the method usable on large collections of text with vast topic models, without altering the performance of the original algorithm.
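
The single-walk idea can be sketched as a personalized PageRank in which per-word topical importance is folded into the reset vector, so one walk suffices no matter how many topics the model has. The graph and weights below are toy values; in practice the topical weight of a word would come from the topic model.

```python
# Hedged sketch: one personalized PageRank over a word co-occurrence graph,
# with topical word importance in the reset (teleport) vector. Toy data only.

def personalized_pagerank(neighbors, weights, damping=0.85, iters=50):
    words = list(neighbors)
    total = sum(weights[w] for w in words)
    reset = {w: weights[w] / total for w in words}   # topic-informed jumps
    score = dict(reset)
    for _ in range(iters):
        new = {}
        for w in words:
            # Mass flowing into w from every word u that links to it.
            rank = sum(score[u] / len(neighbors[u])
                       for u in words if w in neighbors[u])
            new[w] = (1 - damping) * reset[w] + damping * rank
        score = new
    return score

# Tiny symmetric co-occurrence graph with toy topical weights.
graph = {"neural":   {"network", "training"},
         "network":  {"neural", "training"},
         "training": {"neural", "network"}}
weights = {"neural": 0.6, "network": 0.3, "training": 0.1}
scores = personalized_pagerank(graph, weights)
print(max(scores, key=scores.get))  # neural
```

Because the topical information lives entirely in `reset`, adding topics to the underlying model changes only how `weights` is computed, not the number of PageRank runs.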