18,955 research outputs found

    Inverse design and implementation of a wavelength demultiplexing grating coupler

    Get PDF
    Nanophotonics has emerged as a powerful tool for manipulating light on chips. Almost all of today's devices, however, have been designed using slow and ineffective brute-force search methods, leading in many cases to limited device performance. In this article, we provide a complete demonstration of our recently proposed inverse design technique, wherein the user specifies design constraints in the form of target fields rather than a dielectric constant profile, and in particular we use this method to demonstrate a new demultiplexing grating. The novel grating, which has not been developed using conventional techniques, accepts a vertically incident Gaussian beam from free space and separates O-band (1300 nm) and C-band (1550 nm) light into separate waveguides. This inverse design concept is simple and extendable to a broad class of highly compact devices including frequency splitters, mode converters, and spatial mode multiplexers. Comment: 17 pages, 4 figures, 1 table. A supplementary section describing the inverse-design algorithm in detail has been added, in addition to minor corrections and updated references.
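
    As a rough illustration of the target-field idea described above (specifying the fields one wants rather than a permittivity profile, then letting an optimizer find the structure), the Python sketch below runs a gradient-based optimization against a stand-in forward model. The `forward_model`, pixel count, permittivity bounds and step size are all placeholder assumptions; a real design would use a full electromagnetic solver and the authors' algorithm as detailed in their supplement.

```python
# Minimal sketch of target-field inverse design (illustrative only).
# A real implementation would replace `forward_model` with an
# electromagnetic solver (e.g. FDFD); a fixed linear stand-in is used
# here so the optimization loop itself is runnable.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64                        # design degrees of freedom (permittivity pixels)
A = rng.normal(size=(n_pixels, n_pixels)) / np.sqrt(n_pixels)

def forward_model(eps):
    """Stand-in for a field solver: maps a permittivity profile to a field."""
    return A @ eps

# User-specified target field (the design constraint), not a dielectric profile.
eps_true = rng.uniform(1.0, 12.0, size=n_pixels)   # hypothetical "good" design
E_target = forward_model(eps_true)

# Projected gradient descent on || E(eps) - E_target ||^2 with permittivity bounds.
eps = np.full(n_pixels, 6.0)
lr = 0.1
for step in range(500):
    residual = forward_model(eps) - E_target
    grad = A.T @ residual                         # gradient of the quadratic objective
    eps = np.clip(eps - lr * grad, 1.0, 12.0)     # keep eps physically plausible

print("final field mismatch:", np.linalg.norm(forward_model(eps) - E_target))
```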

    Analysis of the Downstream-Collusive Effect in Vertical Mergers

    Get PDF
    The downstream-collusive effect is one of the possible impacts on competition after a vertical merger. However, little legal and economic literature has discussed this topic thoroughly. Therefore, this Article first analyzes the harm of the downstream-collusive effect. Using game-theoretic models, we find that the scale of the saved unit cost or downstream cost and the level of heterogeneity between the downstream firms’ final goods could affect the incentives for downstream-collusive behavior. Next, we integrate the concepts derived from the models into the Vertical Merger Guidelines and the burden-shifting framework. This economic concept should aid antitrust agencies in assessing the viability of bringing vertical merger challenges with some proof of downstream-collusive behavior. Finally, we address our critiques of the AT&T–Time Warner merger case and take it as an example to demonstrate how to apply the updated burden-shifting framework to a real-world merger case. This should aid federal courts in understanding how to analyze the downstream-collusive effect in future vertical merger cases.

    Precise Particle Tracking Against a Complicated Background: Polynomial Fitting with Gaussian Weight

    Full text link
    We present a new particle tracking software algorithm designed to accurately track the motion of low-contrast particles against a background with large variations in light levels. The method is based on a polynomial fit of the intensity around each feature point, weighted by a Gaussian function of the distance from the centre, and is especially suitable for tracking endogenous particles in the cell, imaged with bright field, phase contrast or fluorescence optical microscopy. Furthermore, the method can simultaneously track particles of all different sizes, and allows significant freedom in their shape. The algorithm is evaluated using the quantitative measures of accuracy and precision of previous authors, using simulated images at variable signal-to-noise ratios. To these we add a new test of the error due to a non-uniform background. Finally, the tracking of particles in real cell images is demonstrated. The method is made freely available for non-commercial use as a software package with a graphical user interface, which can be run within the Matlab programming environment.
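
    The core step described above, fitting a polynomial to the intensity around a candidate point with Gaussian distance weighting and reading off the sub-pixel centre, can be sketched as follows. The window half-width, Gaussian width and quadratic model order are illustrative choices, not the parameters of the authors' Matlab package.

```python
# Hedged sketch of Gaussian-weighted quadratic fitting for sub-pixel
# particle localization.  Assumes the fitting window lies inside the image.
import numpy as np

def refine_centre(image, row, col, half_width=5, sigma=2.0):
    """Refine an integer peak estimate (row, col) to sub-pixel precision."""
    r = np.arange(-half_width, half_width + 1)
    dy, dx = np.meshgrid(r, r, indexing="ij")          # row and column offsets
    patch = image[row - half_width:row + half_width + 1,
                  col - half_width:col + half_width + 1].astype(float)

    # Gaussian weights centred on the candidate pixel.
    w = np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2)).ravel()

    # Weighted least-squares fit of I ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    X = np.column_stack([np.ones(dx.size), dx.ravel(), dy.ravel(),
                         dx.ravel()**2, dx.ravel() * dy.ravel(), dy.ravel()**2])
    wsqrt = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(wsqrt[:, None] * X, wsqrt * patch.ravel(), rcond=None)
    c0, c1, c2, c3, c4, c5 = coeffs

    # Stationary point of the fitted quadratic gives the sub-pixel offset.
    H = np.array([[2 * c3, c4], [c4, 2 * c5]])
    x_off, y_off = np.linalg.solve(H, -np.array([c1, c2]))
    return row + y_off, col + x_off

# Example usage (img is a 2D numpy array):
#   r0, c0 = np.unravel_index(np.argmax(img), img.shape)
#   r_sub, c_sub = refine_centre(img, r0, c0)
```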

    Incorporating published univariable associations in diagnostic and prognostic modeling

    Get PDF
    Background: Diagnostic and prognostic literature is overwhelmed with studies reporting univariable predictor-outcome associations. Currently, methods to incorporate such information in the construction of a prediction model are underdeveloped and unfamiliar to many researchers. Methods: This article aims to improve upon an adaptation method originally proposed by Greenland (1987) and Steyerberg (2000) to incorporate previously published univariable associations in the construction of a novel prediction model. The proposed method improves the variance estimation component by grounding the adaptation process in established theory and making it more robust. Different variants of the proposed method were tested in a simulation study, where performance was measured by comparing estimated associations with their predefined values in terms of the mean squared error and coverage of the 90% confidence intervals. Results: The results demonstrate that the performance of estimated multivariable associations improves considerably for small datasets in which external evidence is included. Although the error of estimated associations decreases with increasing amounts of individual participant data, it does not disappear completely, even in very large datasets. Conclusions: The proposed method for aggregating previously published univariable associations with individual participant data in the construction of a novel prediction model outperforms established approaches and is especially worthwhile when relatively limited individual participant data are available.
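
    The basic adaptation idea attributed to Greenland (1987) and Steyerberg (2000), shifting a published univariable association by the multivariable-univariable difference observed in the individual participant data, can be sketched as below. The simulated data and the literature coefficient are hypothetical, and the article's improved variance estimation is not reproduced.

```python
# Minimal sketch of the basic adaptation method: adapted multivariable
# coefficient = published univariable coefficient + (multivariable -
# univariable) shift estimated in the individual participant data (IPD).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200                                    # small IPD sample
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)         # correlated second predictor
logit = -0.5 + 0.8 * x1 + 0.4 * x2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Univariable and multivariable fits in the IPD (coefficient of x1).
b_uni_ipd = sm.Logit(y, sm.add_constant(x1)).fit(disp=0).params[1]
b_multi_ipd = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0).params[1]

# Published univariable association for x1 (hypothetical literature value).
b_uni_literature = 0.95

# Adaptation: literature estimate plus the IPD multivariable-univariable shift.
b_adapted = b_uni_literature + (b_multi_ipd - b_uni_ipd)
print(f"adapted multivariable coefficient for x1: {b_adapted:.3f}")
```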

    Contact Interactions and Resonance-Like Physics at Present and Future Colliders from Unparticles

    Get PDF
    High-scale conformal physics can lead to unusual unparticle stuff at our low energies. In this paper we discuss how the exchange of unparticles between Standard Model fields can lead to new contact interaction physics as well as a pseudoresonance-like structure, an unresonance, that might be observable at the Tevatron or LHC in, e.g., the Drell-Yan channel. The specific signatures of this scenario are quite unique and can be used to easily identify this new physics given sufficient integrated luminosity. Comment: 20 pages, 10 figures; minor text changes, reference added; typos corrected.
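
    For orientation, the scalar unparticle propagator commonly used in this kind of phenomenology (following Georgi's construction) takes the form below, up to normalization conventions; the non-integer exponent produces a complex phase for timelike momenta, which underlies the contact-interaction-like and unresonance behaviour described above. This standard form is quoted here for context and is not taken from the paper itself.

```latex
\Delta_F(q^2) \;=\; \frac{A_{d_{\mathcal U}}}{2\sin(d_{\mathcal U}\pi)}\,
\left(-q^2 - i\epsilon\right)^{d_{\mathcal U}-2},
\qquad
A_{d_{\mathcal U}} \;=\; \frac{16\pi^{5/2}}{(2\pi)^{2d_{\mathcal U}}}\,
\frac{\Gamma\!\left(d_{\mathcal U}+\tfrac{1}{2}\right)}
     {\Gamma(d_{\mathcal U}-1)\,\Gamma(2d_{\mathcal U})}
```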

    Axial form factor of the nucleon in the perturbative chiral quark model

    Full text link
    We apply the perturbative chiral quark model (PCQM) at one loop to analyze the axial form factor of the nucleon. This chiral quark model is based on an effective Lagrangian, where baryons are described by relativistic valence quarks and a perturbative cloud of Goldstone bosons as dictated by chiral symmetry. We apply the formalism to obtain analytical expressions for the axial form factor of the nucleon, which is given in terms of fundamental parameters of low-energy pion-nucleon physics (the weak pion decay constant and the strong pion-nucleon form factor) and of only one model parameter (the radius of the nucleonic three-quark core). Comment: 23 pages, 5 figures, accepted for publication in J. Phys.
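
    For comparison, experimental data on the nucleon axial form factor are usually summarized by the standard dipole parametrization below; this is the phenomenological benchmark against which model calculations such as the PCQM result are typically compared, not the model's own analytical expression.

```latex
G_A(Q^2) \;=\; \frac{g_A}{\left(1 + Q^2/M_A^2\right)^2},
\qquad g_A = G_A(0) \simeq 1.27, \quad M_A \simeq 1\,\text{GeV}
```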

    A Dynamic Programming Approach to Adaptive Fractionation

    Get PDF
    We conduct a theoretical study of various solution methods for the adaptive fractionation problem. The two messages of this paper are: (i) dynamic programming (DP) is a useful framework for adaptive radiation therapy, particularly adaptive fractionation, because it allows us to assess how close to optimal different methods are, and (ii) the heuristic methods proposed in this paper are near-optimal and can therefore be used to evaluate the best possible benefit of using an adaptive fraction size. The essence of adaptive fractionation is to increase the fraction size when the tumor and organ-at-risk (OAR) are far apart (a "favorable" anatomy) and to decrease the fraction size when they are close together. Given that a fixed prescribed dose must be delivered to the tumor over the course of the treatment, such an approach results in a lower cumulative dose to the OAR than standard fractionation. We first establish a benchmark by using the DP algorithm to solve the problem exactly. In this case, we characterize the structure of an optimal policy, which provides guidance for our choice of heuristics. We develop two intuitive, numerically near-optimal heuristic policies, which could be used for more complex, high-dimensional problems. Furthermore, one of the heuristics requires only a statistic of the motion probability distribution, making it a reasonable method for use in a realistic setting. Numerically, we find that the decrease in dose to the OAR can vary significantly (5-85%) depending on the amount of motion in the anatomy, the number of fractions, and the range of fraction sizes allowed. In general, the decrease in dose to the OAR is more pronounced when: (i) there is a high probability of large tumor-OAR distances, (ii) many fractions are used (as in a hyper-fractionated setting), and (iii) large daily fraction size deviations are allowed. Comment: 17 pages, 4 figures, 1 table.
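
    A toy version of the dynamic program described above can be written in a few lines: each day the anatomy state is observed, a fraction size is chosen, and the recursion minimizes the expected cumulative OAR dose subject to delivering the full prescribed tumor dose. The states, sparing factors and discretization are illustrative assumptions rather than the paper's model.

```python
# Toy dynamic program for adaptive fractionation: deliver more dose on
# "favorable" days (tumor and OAR far apart) and less otherwise, while
# always delivering the fixed total tumor dose.
import numpy as np
from functools import lru_cache

N_FRACTIONS = 5
TOTAL_DOSE = 10                  # prescribed tumor dose, in integer dose units
FRACTION_SIZES = (1, 2, 3)       # allowed daily fraction sizes
# Anatomy states: (probability of occurring, OAR sparing factor, i.e. the
# fraction of the daily tumor dose that reaches the OAR).
STATES = {"far": (0.4, 0.3), "near": (0.6, 0.9)}

@lru_cache(maxsize=None)
def expected_oar_dose(fractions_left, dose_left):
    """Minimum expected OAR dose before today's anatomy state is observed."""
    if fractions_left == 0:
        return 0.0 if dose_left == 0 else np.inf   # must deliver exactly TOTAL_DOSE
    total = 0.0
    for prob, sparing in STATES.values():
        best = np.inf
        for d in FRACTION_SIZES:
            if d > dose_left:
                continue
            cost = sparing * d + expected_oar_dose(fractions_left - 1, dose_left - d)
            best = min(best, cost)
        total += prob * best
    return total

print("expected OAR dose, adaptive:", expected_oar_dose(N_FRACTIONS, TOTAL_DOSE))
# Non-adaptive reference: equal fractions of TOTAL_DOSE / N_FRACTIONS.
mean_sparing = sum(p * s for p, s in STATES.values())
print("expected OAR dose, fixed fractions:", mean_sparing * TOTAL_DOSE)
```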