AC Breakdown Characteristics of LDPE in the Presence of Crosslinking By-products.
LDPE films of 50 µm thickness were soaked in the crosslinking by-products acetophenone, α-methylstyrene and cumyl alcohol. The soaked samples were used to measure the breakdown strength (Eb) of LDPE with the by-product chemical residue present in the film. The AC breakdown measurements were conducted at a ramp rate of 50 V/s at room temperature, and a Weibull plot was used to analyse the results. Comparing the soaked and un-soaked (fresh LDPE) samples shows a small reduction in the Weibull scale parameter (eta) for the soaked films, suggesting that the breakdown strength is reduced by the presence of the by-products in the LDPE film. However, when the ranges of breakdown strength of all samples are compared, the values fall within the same region, indicating no significant difference between the samples.
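As a rough illustration (not the authors' code), the Weibull analysis described above can be sketched as a two-parameter fit via median-rank regression on the linearized CDF, where the slope gives the shape parameter beta and the scale parameter eta is the field at 63.2% failure probability; the function name and rank estimator choice are assumptions for this sketch:

```python
import numpy as np

def weibull_fit(breakdown_fields):
    """Fit a two-parameter Weibull distribution to breakdown data by
    median-rank regression on the linearized CDF:
    ln(-ln(1 - F)) = beta * ln(E) - beta * ln(eta)."""
    e = np.sort(np.asarray(breakdown_fields, dtype=float))
    n = len(e)
    ranks = np.arange(1, n + 1)
    f = (ranks - 0.3) / (n + 0.4)          # Bernard's median-rank estimate
    x = np.log(e)
    y = np.log(-np.log(1.0 - f))
    beta, intercept = np.polyfit(x, y, 1)  # slope = shape parameter beta
    eta = np.exp(-intercept / beta)        # scale: field at 63.2% failure probability
    return eta, beta
```

Comparing the fitted eta values of soaked and fresh films is then the comparison the abstract describes.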
Effect of high resistive barrier on earthing system
Substation earthing provides a low impedance path and carries current into the ground under normal and fault conditions without adversely affecting continuity of service. Under a fault condition, the ground voltage may rise to a level that may endanger the public outside the vicinity of the substation. In such a case a high resistive barrier can be inserted around the vicinity of the substation to reduce the surface potentials immediately beyond the barrier. In this paper the effect of the barrier on the overall performance of the earthing system has been investigated experimentally and computationally, based on an earthing system consisting of a combined grid and rods in a water tank. The effect of the position and depth of the barrier on the resistance of the earthing system and on the surface potentials in and around the substation has been examined.
Semantics-Space-Time Cube. A Conceptual Framework for Systematic Analysis of Texts in Space and Time
We propose an approach to analyzing data in which texts are associated with spatial and temporal references with the aim to understand how the text semantics vary over space and time. To represent the semantics, we apply probabilistic topic modeling. After extracting a set of topics and representing the texts by vectors of topic weights, we aggregate the data into a data cube with the dimensions corresponding to the set of topics, the set of spatial locations (e.g., regions), and the time divided into suitable intervals according to the scale of the planned analysis. Each cube cell corresponds to a combination (topic, location, time interval) and contains aggregate measures characterizing the subset of the texts concerning this topic and having the spatial and temporal references within these location and interval. Based on this structure, we systematically describe the space of analysis tasks on exploring the interrelationships among the three heterogeneous information facets, semantics, space, and time. We introduce the operations of projecting and slicing the cube, which are used to decompose complex tasks into simpler subtasks. We then present a design of a visual analytics system intended to support these subtasks. To reduce the complexity of the user interface, we apply the principles of structural, visual, and operational uniformity while respecting the specific properties of each facet. The aggregated data are represented in three parallel views corresponding to the three facets and providing different complementary perspectives on the data. The views have similar look-and-feel to the extent allowed by the facet specifics. Uniform interactive operations applicable to any view support establishing links between the facets. The uniformity principle is also applied in supporting the projecting and slicing operations on the data cube. 
We evaluate the feasibility and utility of the approach by applying it in two analysis scenarios using geolocated social media data for studying people's reactions to social and natural events of different spatial and temporal scales
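As a minimal sketch of the cube structure described above (not the authors' implementation; the function names, record layout, and half-open time intervals are assumptions), the aggregation and the projecting/slicing operations might look like:

```python
import numpy as np

def build_cube(records, topics, locations, intervals):
    """Aggregate (topic_weights, location, time) records into a
    topic x location x time cube of summed topic weights."""
    t_idx = {t: i for i, t in enumerate(topics)}
    l_idx = {l: i for i, l in enumerate(locations)}
    cube = np.zeros((len(topics), len(locations), len(intervals)))
    for weights, loc, time in records:
        # find the half-open interval [a, b) containing this timestamp
        k = next(i for i, (a, b) in enumerate(intervals) if a <= time < b)
        for topic, w in weights.items():
            cube[t_idx[topic], l_idx[loc], k] += w
    return cube

def project(cube, axis):
    """Project onto two facets by summing out the third."""
    return cube.sum(axis=axis)

def slice_cube(cube, axis, index):
    """Slice: fix one facet at a given index."""
    return np.take(cube, index, axis=axis)
```

Projecting out the time axis, for example, yields a topic-by-location view; slicing on the location axis yields the semantics-time view for one region.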
Maximal Acceleration Corrections to the Lamb Shift of Muonic Hydrogen
The maximal acceleration corrections to the Lamb shift of muonic hydrogen are
calculated by using the relativistic Dirac wave functions. The correction for
the transition is meV and is higher than the accuracy of
present QED calculations and of the expected accuracy of experiments in
preparation. Comment: LaTeX file, 9 pages, to be published in Il Nuovo Cimento
Sketch-based Influence Maximization and Computation: Scaling up with Guarantees
Propagation of contagion through networks is a fundamental process. It is
used to model the spread of information, influence, or a viral infection.
Diffusion patterns can be specified by a probabilistic model, such as
Independent Cascade (IC), or captured by a set of representative traces.
Basic computational problems in the study of diffusion are influence queries
(determining the potency of a specified seed set of nodes) and Influence
Maximization (identifying the most influential seed set of a given size).
Answering each influence query involves many edge traversals, and does not
scale when there are many queries on very large graphs. The gold standard for
Influence Maximization is the greedy algorithm, which iteratively adds to the
seed set a node maximizing the marginal gain in influence. Greedy has a
guaranteed approximation ratio of at least (1-1/e) and actually produces a
sequence of nodes, with each prefix having approximation guarantee with respect
to the same-size optimum. Since Greedy does not scale well beyond a few million
edges, for larger inputs one must currently use either heuristics or
alternative algorithms designed for a pre-specified small seed set size.
We develop a novel sketch-based design for influence computation. Our greedy
Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with
billions of edges, with one to two orders of magnitude speedup over the best
greedy methods. It still has a guaranteed approximation ratio, and in practice
its quality nearly matches that of exact greedy. We also present influence
oracles, which use linear-time preprocessing to generate a small sketch for
each node, allowing the influence of any seed set to be quickly answered from
the sketches of its nodes. Comment: 10 pages, 5 figures. Appeared at the 23rd Conference on Information
and Knowledge Management (CIKM 2014) in Shanghai, China
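For orientation, a baseline greedy loop under the Independent Cascade model can be sketched as below. This is the exact-greedy baseline the abstract compares against, not SKIM itself, and it estimates marginal gains by Monte Carlo simulation (the function names, edge probability `p`, and simulation count are assumptions); the (1-1/e) guarantee holds for exact marginal gains, while simulation introduces estimation error:

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One Independent Cascade run: each newly active node tries once to
    activate each inactive out-neighbor, succeeding with probability p.
    Returns the number of nodes activated."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, sims=200, seed=0):
    """Greedy Influence Maximization: repeatedly add the node with the
    largest estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        def est(s):
            return sum(simulate_ic(graph, s, p, rng) for _ in range(sims)) / sims
        best = max(nodes - set(chosen), key=lambda v: est(chosen + [v]))
        chosen.append(best)
    return chosen
```

Each round here re-simulates the cascade many times per candidate node, which is precisely the cost that prevents greedy from scaling and that sketch-based estimation is designed to avoid.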
Double-discharge copper-vapor laser
The power supply for the discharge pulses consists of two capacitors that are made to discharge synchronously with adjustable time intervals. The first pulse is switched with a hydrogen thyratron, and the second by a spark gap. Lasing action peaks for an appropriate combination of these two parameters.
Size versus truthfulness in the house allocation problem
We study the House Allocation problem (also known as the Assignment problem), i.e., the problem of allocating a set of objects among a set of agents, where each agent has ordinal preferences (possibly involving ties) over a subset of the objects. We focus on truthful mechanisms without monetary transfers for finding large Pareto optimal matchings. It is straightforward to show that no deterministic truthful mechanism can approximate a maximum cardinality Pareto optimal matching with ratio better than 2. We thus consider randomized mechanisms. We give a natural and explicit extension of the classical Random Serial Dictatorship Mechanism (RSDM) specifically for the House Allocation problem where preference lists can include ties. We thus obtain a universally truthful randomized mechanism for finding a Pareto optimal matching and show that it achieves an approximation ratio of e/(e-1). The same bound holds even when agents have priorities (weights) and our goal is to find a maximum weight (as opposed to maximum cardinality) Pareto optimal matching. On the other hand we give a lower bound of 18/13 on the approximation ratio of any universally truthful Pareto optimal mechanism in settings with strict preferences. In the case that the mechanism must additionally be non-bossy, an improved lower bound of e/(e-1) holds. This lower bound is tight given that RSDM for strict preference lists is non-bossy. We moreover interpret our problem in terms of the classical secretary problem and prove that our mechanism provides the best randomized strategy of the administrator who interviews the applicants
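For the strict-preference case, the classical Random Serial Dictatorship mechanism that the paper extends can be sketched in a few lines (a minimal illustration, not the paper's extension to ties; the function name and dictionary-based preference encoding are assumptions):

```python
import random

def rsdm(preferences, seed=0):
    """Random Serial Dictatorship with strict preferences: draw a uniformly
    random order of agents; each agent in turn takes their most-preferred
    object that is still available."""
    rng = random.Random(seed)
    agents = list(preferences)
    rng.shuffle(agents)           # uniformly random serial order
    taken, matching = set(), {}
    for agent in agents:
        for obj in preferences[agent]:
            if obj not in taken:  # first available object in this agent's list
                matching[agent] = obj
                taken.add(obj)
                break
    return matching
```

The resulting matching is always Pareto optimal for the realized order, and truthfulness holds for every order (hence universally), since no agent can gain by misreporting when earlier agents' picks are fixed.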