Mode dispersion and delay characteristics of optical waveguides using equivalent TL circuits
A new analysis leading to an exact and efficient algorithm is presented for calculating, directly and without numerical differentiation, the mode dispersion characteristics of cylindrical dielectric waveguides of arbitrary refractive-index profile. The algorithm is based on the equivalent transmission-line (TL) technique. From Maxwell's equations, we derive an equivalent TL circuit for a cylindrical dielectric waveguide. Based on this TL-circuit model, we derive exact analytic formulas for a recursive algorithm that allows direct calculation of mode delay and dispersion. We demonstrate the technique by calculating the fundamental-mode dispersion for step, triangular, and linear-chirp optical-fiber refractive-index profiles, and we examine the accuracy of the numerical results. The proposed algorithm computes dispersion directly from the propagation constant, without curve fitting and subsequent successive numerical differentiation. It is exact, rapidly convergent, and saves both storage memory and computing time.
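The abstract's central claim is that delay and dispersion are obtained directly from the propagation constant, rather than by curve fitting followed by successive numerical differentiation. A minimal sketch of that distinction, using a hypothetical toy propagation constant (not the paper's TL recursion; the index model `n(omega) = n0 + a*omega` and its parameter values are illustrative assumptions):

```python
import math

C = 299_792_458.0  # vacuum speed of light, m/s

def beta(omega, n0=1.45, a=1e-18):
    """Toy propagation constant beta = n(omega) * omega / c, using a
    hypothetical linear index model n(omega) = n0 + a * omega."""
    return (n0 + a * omega) * omega / C

def group_delay_analytic(omega, n0=1.45, a=1e-18):
    # Direct, exact derivative d(beta)/d(omega) -- no finite differences.
    return (n0 + 2 * a * omega) / C

def group_delay_numeric(omega, h=1e9):
    # Central finite difference: the numerical-differentiation step
    # that a direct analytic recursion avoids.
    return (beta(omega + h) - beta(omega - h)) / (2 * h)

omega0 = 2 * math.pi * 193.4e12  # carrier near 1550 nm
tau_a = group_delay_analytic(omega0)  # s/m, per unit length
tau_n = group_delay_numeric(omega0)
```

For this smooth toy model the two agree closely; for a real multi-layer profile, repeated numerical differentiation amplifies noise, which is why a direct analytic recursion is preferable.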
Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods
We introduce a framework to build a survival/risk bump hunting model with a
censored time-to-event response. Our Survival Bump Hunting (SBH) method is
based on a recursive peeling procedure that uses a specific survival peeling
criterion derived from non/semi-parametric statistics such as the
hazards-ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the
tuning parameter of the model and validate it, we introduce an objective
function based on survival or prediction-error statistics, such as the log-rank
test and the concordance error rate. We also describe two alternative
cross-validation techniques adapted to the joint task of decision-rule making
by recursive peeling and survival estimation. Numerical analyses show the
importance of replicated cross-validation and the differences between criteria
and techniques in both low and high-dimensional settings. Although several
non-parametric survival models exist, none addresses the problem of directly
identifying local extrema. We show how SBH efficiently estimates extreme
survival/risk subgroups unlike other models. This provides an insight into the
behavior of commonly used models and suggests alternatives to be adopted in
practice. Finally, our SBH framework was applied to a clinical dataset. In it,
we identified subsets of patients characterized by clinical and demographic
covariates with a distinct extreme survival outcome, for which tailored medical
interventions could be made. An R package `PRIMsrc` is available on CRAN and GitHub.
Comment: Keywords: Exploratory Survival/Risk Analysis, Survival/Risk Estimation & Prediction, Non-Parametric Method, Cross-Validation, Bump Hunting, Rule-Induction Method
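The recursive peeling at the heart of bump hunting can be sketched in a few lines. The sketch below uses a plain mean-response criterion for readability; the SBH method of the abstract substitutes survival statistics (log-rank test, hazard ratio, Nelson-Aalen estimator), and the parameter names `alpha` and `beta_min` follow the usual PRIM convention rather than the paper's notation:

```python
def peel(X, y, alpha=0.1, beta_min=0.2):
    """PRIM-style recursive peeling (simplified sketch).

    X: list of feature tuples; y: list of responses. Each step trims an
    alpha-fraction of the remaining points from the low or high end of
    one feature, keeping the trim that maximizes the mean response in
    the shrinking box. Stops when fewer than beta_min * n points remain
    or no trim improves the criterion. Returns the surviving indices.
    """
    n = len(y)
    idx = list(range(n))
    p = len(X[0])
    while len(idx) > max(2, beta_min * n):
        best_mean, best_keep = None, None
        cut = max(1, int(alpha * len(idx)))
        for j in range(p):
            order = sorted(idx, key=lambda i: X[i][j])
            for keep in (order[cut:], order[:-cut]):  # peel low or high end
                m = sum(y[i] for i in keep) / len(keep)
                if best_mean is None or m > best_mean:
                    best_mean, best_keep = m, keep
        if best_mean <= sum(y[i] for i in idx) / len(idx):
            break  # no candidate trim improves the criterion
        idx = best_keep
    return idx
```

On data where high responses cluster at one end of a covariate, the peeling sequence shrinks the box onto that subgroup, which is the behavior the cross-validation strategies in the abstract are designed to tune and validate.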
Recursive Percentage based Hybrid Pattern Training for Supervised Learning
Supervised learning algorithms, often used to find the input/output relationship in data, tend to become trapped in local optima rather than the desired global optimum. In this paper, we discuss the Recursive Percentage based Hybrid Pattern (RPHP) training algorithm. The algorithm uses real-coded genetic-algorithm-based global and local searches to find a set of pseudo-global optimal solutions. Each pseudo-global optimum is a local optimum from the point of view of all the patterns, but globally optimal from the point of view of a subset of patterns. Together with RPHP, a k-nearest-neighbor algorithm is used as a second-level pattern distributor that routes each test pattern to the appropriate solution. We also show theoretically the conditions under which finding several pseudo-global optimal solutions requires less training time than finding a single global optimal solution. Because the difficulty of curve-fitting problems is easily estimated, we verify the capability of the RPHP algorithm on such problems and compare it with three counterparts to show the benefits of hybrid learning and active recursive subset selection; RPHP shows a clear superiority in performance. We conclude by identifying possible weaknesses of the RPHP algorithm and proposing solutions.
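The second-level k-nearest-neighbor distributor can be sketched as follows. This is a generic k-NN vote, not the paper's exact routine: each training pattern carries the id of the sub-learner (pseudo-global solution) that solved it, and a test pattern is routed to the learner holding the majority among its k nearest training patterns:

```python
from collections import Counter

def knn_distributor(train_X, learner_ids, test_x, k=3):
    """Route a test pattern to a sub-learner (illustrative sketch).

    train_X: list of feature tuples; learner_ids: the id of the
    sub-learner responsible for each training pattern. Returns the
    majority learner id among the k nearest neighbors of test_x
    under squared Euclidean distance."""
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(range(len(train_X)),
                     key=lambda i: dist2(train_X[i], test_x))[:k]
    votes = Counter(learner_ids[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

The chosen sub-learner then produces the actual prediction; the distributor only decides which pseudo-global solution owns the region of input space the test pattern falls in.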
Applications of incidence bounds in point covering problems
In the Line Cover problem a set of n points is given and the task is to cover
the points using either the minimum number of lines or at most k lines. In
Curve Cover, a generalization of Line Cover, the task is to cover the points
using curves with d degrees of freedom. Another generalization is the
Hyperplane Cover problem where points in d-dimensional space are to be covered
by hyperplanes. All these problems have kernels of polynomial size, where the
parameter is the minimum number of lines, curves, or hyperplanes needed. First
we give a non-parameterized algorithm for both problems in O*(2^n) (where the
O*(.) notation hides polynomial factors of n) time and polynomial space,
beating a previous exponential-space result. Combining this with incidence
bounds similar to the famous Szemerédi-Trotter bound, we present a Curve Cover
algorithm with running time O*((Ck/log k)^((d-1)k)), where C is some constant.
Our result improves the previous best times O*((k/1.35)^k) for Line Cover
(where d=2), O*(k^(dk)) for general Curve Cover, as well as a few other bounds
for covering points by parabolas or conics. We also present an algorithm for
Hyperplane Cover in R^3 with running time O*((Ck^2/log^(1/5) k)^k), improving
on the previous time of O*((k^2/1.3)^k).
Comment: SoCG 201
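For intuition, the Line Cover problem itself (d=2) can be solved exactly by naive branching: pick an uncovered point p and either spend a line on p alone or cover p together with some other point q via the full line through them. This baseline is exponentially slower than the paper's O*(2^n) polynomial-space algorithm and its parameterized bounds; it only illustrates the problem being covered:

```python
def min_line_cover(points):
    """Exact minimum Line Cover by naive branching (baseline sketch).

    points: list of (x, y) integer pairs. Returns the minimum number of
    lines needed to cover all points."""
    def collinear(p, q, r):
        # Cross-product test; exact for integer coordinates.
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

    def solve(pts):
        if len(pts) <= 2:          # 0 points need 0 lines; 1-2 need one
            return 1 if pts else 0
        p = pts[0]
        best = 1 + solve(pts[1:])  # degenerate branch: a line through p only
        for q in pts[1:]:
            # Cover p, q, and every point collinear with them, then recurse.
            rest = [r for r in pts if not collinear(p, q, r)]
            best = min(best, 1 + solve(rest))
        return best

    return solve(list(points))
```

The kernelization mentioned in the abstract bounds the surviving instance by a polynomial in k before any such branching starts, which is what makes the O*((Ck/log k)^((d-1)k)) running times possible.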
Multi-learner based recursive supervised training
In this paper, we propose the Multi-Learner Based Recursive Supervised Training (MLRT) algorithm, which uses the existing framework of recursive task decomposition: the entire dataset is trained, the best-learnt patterns are picked out, and the process is repeated with the remaining patterns. Instead of a single learner classifying all data during each recursion, an appropriate learner is chosen from a set of three learners based on the subset of data being trained, thereby avoiding the time overhead associated with the genetic-algorithm learner used in previous approaches. In this way, MLRT seeks to identify the inherent characteristics of the dataset and use them to train on the data accurately and efficiently. Empirically, MLRT performs considerably well compared to RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the SPAM dataset and comparable performance on the VOWEL and TWO-SPIRAL problems. In addition, for most datasets, the time taken by MLRT is considerably lower than that of the other systems at comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the efficiency of the system and make it more scalable for future updates; their performance is similar to that of the original MLRT system.
A 10-point interpolatory recursive subdivision algorithm for the generation of parametric surfaces
In this paper, an interpolatory subdivision algorithm for surfaces over arbitrary triangulations is introduced, and its properties over uniform triangulations are studied. The Butterfly scheme introduced by Dyn, Gregory and Levin is a special case of this algorithm. In our analysis, the matrix approach is employed and the idea of "cross difference of directional divided difference" analysis is presented. This method is a generalization of the technique used by Dyn, Gregory, Levin and others to analyse univariate subdivision algorithms. It is proved that the algorithm produces smooth surfaces provided the shape parameters are kept within an appropriate range.
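The special case mentioned in the abstract, the classical eight-point Butterfly scheme, has a compact stencil worth recalling (the 10-point scheme of the title generalizes it with extra shape parameters). A minimal sketch, applied componentwise by the caller:

```python
def butterfly_edge_point(a, b, c, d, e, f, g, h, w=1.0 / 16.0):
    """New point for an edge of a regular triangulation under the
    classical Butterfly scheme: a, b are the edge endpoints, c, d the
    opposite vertices of the two triangles sharing the edge, and
    e, f, g, h the four outer 'wing' vertices. w is the tension
    parameter; w = 1/16 gives the classical scheme of Dyn, Gregory
    and Levin. The weights sum to 1, so the rule is affine-invariant
    and reproduces linear data exactly."""
    return 0.5 * (a + b) + 2.0 * w * (c + d) - w * (e + f + g + h)
```

Because the scheme is interpolatory, old vertices are kept unchanged at every subdivision step; only edge points like the one above are inserted, and the smoothness analysis in the paper concerns the limit of repeating this refinement.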
Kinematically optimal hyper-redundant manipulator configurations
“Hyper-redundant” robots have a very large or infinite number of degrees of kinematic redundancy. This paper develops new methods for determining “optimal” hyper-redundant manipulator configurations based on a continuum formulation of kinematics. This formulation uses a backbone-curve model to capture the robot's essential macroscopic geometric features. The calculus of variations is used to derive differential equations whose solution is the optimal backbone-curve shape. We show that this approach is computationally efficient on a single processor and generates solutions in O(1) time for an N-degree-of-freedom manipulator when implemented in parallel on O(N) processors. For this reason, it is better suited to hyper-redundant robots than other redundancy-resolution methods. Furthermore, the approach handles many hyper-redundant mechanical morphologies that are not handled by known methods.