Percolation-like Scaling Exponents for Minimal Paths and Trees in the Stochastic Mean Field Model
In the mean field (or random link) model there are $n$ points and inter-point distances are independent random variables. For $0 < \ell < \infty$ and in the $n \to \infty$ limit, let $\delta(\ell) = \frac{1}{n} \times$ (maximum number of steps in a path whose average step-length is $\le \ell$). The function $\delta(\ell)$ is analogous to the percolation function in percolation theory: there is a critical value $\ell_{\mathrm{crit}}$ at which $\delta(\cdot)$ becomes non-zero, and (presumably) a scaling exponent $\beta$ in the sense $\delta(\ell) \asymp (\ell - \ell_{\mathrm{crit}})^{\beta}$. Recently developed probabilistic methodology (in some sense a rephrasing of the cavity method of Mezard-Parisi) provides a simple albeit non-rigorous way of writing down such functions in terms of solutions of fixed-point equations for probability distributions. Solving numerically gives convincing evidence that $\beta = 3$. A parallel study with trees instead of paths gives scaling exponent $2$. The new exponents coincide with those found in a different context (comparing optimal and near-optimal solutions of mean-field TSP and MST) and reinforce the suggestion that these scaling exponents determine universality classes for optimization problems on random points. Comment: 19 pages
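The scaling-exponent claim is the kind of statement that can be checked numerically once $\delta(\ell)$ has been computed from the fixed-point equations. The sketch below shows only that final step, under assumptions: the $(\ell, \delta(\ell))$ values and the critical point ell_crit are hypothetical placeholders, not numbers from the paper; the exponent is read off as the slope of log delta against log(ell - ell_crit).

    # Sketch: estimate a percolation-like scaling exponent beta from
    # delta(ell) ~ (ell - ell_crit)^beta near the critical point.
    # All data points below are hypothetical; in practice delta(ell)
    # would come from solving the fixed-point equations numerically.
    import numpy as np

    ell_crit = 0.3679                      # assumed critical value, for illustration only
    ell = np.array([0.40, 0.45, 0.50, 0.55, 0.60])
    delta = np.array([2.1e-4, 3.5e-3, 1.4e-2, 3.8e-2, 7.9e-2])  # hypothetical values

    x = np.log(ell - ell_crit)
    y = np.log(delta)
    beta, intercept = np.polyfit(x, y, 1)  # slope of the log-log fit is the exponent
    print(f"estimated scaling exponent beta ~ {beta:.2f}")

The quality of such a fit depends strongly on how close to ell_crit the solved values reach, which is why the abstract phrases the numerical evidence as "convincing" rather than rigorous.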
On the relationship between standard intersection cuts, lift-and-project cuts, and generalized intersection cuts
We examine the connections between the classes of cuts in the title. We show that lift-and-project (L&P) cuts from a given disjunction are equivalent to generalized intersection cuts from the family of polyhedra obtained by taking positive combinations of the complements of the inequalities of each term of the disjunction. While L&P cuts from split disjunctions are known to be equivalent to standard intersection cuts (SICs) from the strip obtained by complementing the terms of the split, we show that L&P cuts from more general disjunctions may not be equivalent to any SIC. In particular, we give easily verifiable necessary and sufficient conditions for an L&P cut from a given disjunction D to be equivalent to a SIC from the polyhedral counterpart of D. Irregular L&P cuts, i.e., those that violate these conditions, have interesting properties. For instance, unlike the regular ones, they may cut off part of the corner polyhedron associated with the LP solution from which they are derived. Furthermore, they are not exceptional: their frequency exceeds that of regular cuts. A numerical example illustrates some of the above properties. © 2016 Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society
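As a concrete illustration of the split-disjunction case mentioned above, the sketch below computes a standard intersection cut from the strip obtained by complementing the two terms of a split. The LP vertex, the rays of the corner polyhedron, and the split (pi, pi0) are all invented for illustration; none of the general disjunctive or L&P machinery of the paper is reproduced here.

    # Sketch: standard intersection cut (SIC) from a split disjunction
    #   (pi.x <= pi0)  OR  (pi.x >= pi0 + 1),
    # derived from the strip  pi0 <= pi.x <= pi0 + 1  containing the LP vertex.
    # All numbers below (vertex, rays, split) are hypothetical.
    import numpy as np

    xbar = np.array([0.4, 0.7])            # fractional LP vertex (hypothetical)
    rays = [np.array([1.0, 0.0]),          # extreme rays of the corner polyhedron
            np.array([-0.5, 1.0])]         # (hypothetical)
    pi, pi0 = np.array([1.0, 0.0]), 0.0    # split disjunction: x1 <= 0 or x1 >= 1

    f = pi @ xbar - pi0                    # fractional part, assumed to lie in (0, 1)

    coeffs = []
    for r in rays:
        slope = pi @ r
        if slope > 0:                      # ray leaves the strip through pi.x = pi0 + 1
            alpha = (1.0 - f) / slope
        elif slope < 0:                    # ray leaves the strip through pi.x = pi0
            alpha = f / (-slope)
        else:                              # ray parallel to the strip: never leaves it
            alpha = np.inf
        coeffs.append(0.0 if np.isinf(alpha) else 1.0 / alpha)

    # The SIC, written in the nonbasic variables s_k measuring movement along
    # each ray, is  sum_k coeffs[k] * s_k >= 1.
    print("intersection cut coefficients:", coeffs)

The irregular L&P cuts studied in the paper are precisely those whose coefficients cannot be obtained by any such single-strip (or single-polyhedron) construction.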
Iris Codes Classification Using Discriminant and Witness Directions
The main topic discussed in this paper is how to use intelligence for
biometric decision defuzzification. A neural training model is proposed and
tested here as a possible solution for dealing with natural fuzzification that
appears between the intra- and inter-class distribution of scores computed
during iris recognition tests. It is shown here that the use of the proposed neural
network support leads to an improvement in the artificial perception of the
separation between the intra- and inter-class score distributions by moving
them away from each other. Comment: 6 pages, 5 figures, Proc. 5th IEEE Int. Symp. on Computational
Intelligence and Intelligent Informatics (Floriana, Malta, September 15-17),
ISBN: 978-1-4577-1861-8 (electronic), 978-1-4577-1860-1 (print)
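The abstract does not describe the network architecture, so the sketch below is only a generic stand-in for the idea: a small classifier trained on synthetic one-dimensional intra-/inter-class comparison scores, learning a decision boundary between the two score distributions. The score values, their distributions, and the scikit-learn model are assumptions, not the authors' method.

    # Sketch of the general idea only: learn a decision boundary between
    # intra-class (genuine) and inter-class (impostor) comparison scores.
    # Scores are synthetic and the model is a generic small network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    intra = rng.normal(0.30, 0.05, 500)    # hypothetical intra-class scores
    inter = rng.normal(0.47, 0.03, 500)    # hypothetical inter-class scores

    X = np.concatenate([intra, inter]).reshape(-1, 1)
    y = np.concatenate([np.zeros(500), np.ones(500)])   # 0 = intra, 1 = inter

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))

The closer the two score distributions sit to each other, the more such a learned boundary matters, which is the fuzzification problem the paper targets.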
A decision-support system for the analysis of clinical practice patterns
Several studies documented substantial variation in medical practice patterns, but physicians often do not have adequate information on the cumulative clinical and financial effects of their decisions. The purpose of developing an expert system for the analysis of clinical practice patterns was to assist providers in analyzing and improving the process and outcome of patient care. The developed QFES (Quality Feedback Expert System) helps users in the definition and evaluation of measurable quality improvement objectives. Based on objectives and actual clinical data, several measures can be calculated (utilization of procedures, annualized cost effect of using a particular procedure, and expected utilization based on peer-comparison and case-mix adjustment). The quality management rules help to detect important discrepancies among members of the selected provider group and compare performance with objectives. The system incorporates a variety of data and knowledge bases: (i) clinical data on actual practice patterns, (ii) frames of quality parameters derived from clinical practice guidelines, and (iii) rules of quality management for data analysis. An analysis of practice patterns of 12 family physicians in the management of urinary tract infections illustrates the use of the system.
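The measures named in the abstract (utilization, peer-based expected utilization, and annualized cost effect) reduce to straightforward arithmetic once the practice data are available. The sketch below illustrates that arithmetic on hypothetical data; the field names, the crude case-mix weight, and the unit cost are all assumptions, not QFES internals.

    # Sketch: practice-pattern measures of the kind described for QFES,
    # computed from hypothetical data for three physicians.
    #   utilization        = procedures ordered per patient seen
    #   expected (peer)    = peer utilization rate * patients * case-mix weight
    #   annual cost effect = (actual - expected procedures) * unit cost
    physicians = {
        "A": {"patients": 120, "procedures": 90, "case_mix": 1.0},
        "B": {"patients": 100, "procedures": 40, "case_mix": 0.9},
        "C": {"patients": 140, "procedures": 70, "case_mix": 1.1},
    }
    UNIT_COST = 25.0                        # hypothetical cost per procedure

    total_proc = sum(p["procedures"] for p in physicians.values())
    total_pat = sum(p["patients"] for p in physicians.values())
    peer_rate = total_proc / total_pat      # peer-group utilization rate

    for name, p in physicians.items():
        utilization = p["procedures"] / p["patients"]
        expected = peer_rate * p["patients"] * p["case_mix"]
        cost_effect = (p["procedures"] - expected) * UNIT_COST
        print(f"{name}: utilization={utilization:.2f}, "
              f"expected={expected:.1f}, annual cost effect={cost_effect:+.0f}")

The quality management rules described in the abstract would then flag physicians whose actual utilization departs from the expected value by more than some threshold.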
The Columbia registry of controlled clinical computer trials
Numerous reports on randomized controlled clinical trials of computer-based interventions have been published. These trials provide useful evaluations of the impact of information technology on patient care. Unfortunately, several obstacles make access to the trial reports difficult. Barriers include the large variety of publications in which reports may appear, non-standard descriptors, and incomplete indexing. Some analyses indicate inadequate testing of computer methods. The purpose of establishing a registry of randomized controlled clinical computer trials was to assist the identification of computer services with demonstrated ability to improve the process or outcome of patient care. A report collection, selection, information extraction, and registration method was developed and implemented. One hundred and six reports on computer trials have been collected. A large variety of computer-assisted interventions have been tested in the registered trials (40% reminder, 15% feedback, 14% dose planning, 14% patient education, 12% medical record). 76% of the registered reports were published in the United States and most of the remainder in various European countries. In reporting computer trial results, 77% of the authors did not use both the "computer" and "trial" keywords in the title or abstract of their papers. We conclude that a major obstacle to adequate computer technology assessment is inadequate access to the published results.
Big Data and Analysis of Data Transfers for International Research Networks Using NetSage
Modern science is increasingly data-driven and collaborative in nature. Many scientific disciplines, including genomics, high-energy physics, astronomy, and atmospheric science, produce petabytes of data that must be shared with collaborators all over the world. The National Science Foundation-supported International Research Network Connection (IRNC) links have been essential to enabling this collaboration, but as data sharing has increased, so has the amount of information being collected to understand network performance. New capabilities to measure and analyze the performance of international wide-area networks are essential to ensure end-users are able to take full advantage of such infrastructure for their big data applications. NetSage is a project to develop a unified, open, privacy-aware network measurement and visualization service to address the needs of monitoring today's high-speed international research networks. NetSage collects data on both backbone links and exchange points, which can be as much as 1 Tb per month. This puts a significant strain on hardware, not only in terms of storage needs to hold multi-year historical data, but also in terms of processor and memory needs to analyze the data to understand network behaviors. This paper addresses the basic NetSage architecture, its current data collection and archiving approach, and details the constraints of dealing with this big data problem of handling vast amounts of monitoring data, while providing useful, extensible visualization to end users.
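One standard way to keep multi-year archives of measurement data at this volume tractable is to roll raw records up into coarser time bins before long-term storage. The sketch below shows that idea on hypothetical per-link throughput samples; it is not the NetSage pipeline itself, and the record layout, link names, and timestamps are invented for illustration.

    # Sketch: roll raw per-link throughput samples up into hourly averages,
    # the kind of downsampling that keeps multi-year archives manageable.
    # The records and field names are hypothetical, not NetSage's schema.
    from collections import defaultdict

    samples = [                              # (unix_time, link, bits_per_second)
        (1700000000, "chicago-amsterdam", 4.2e9),
        (1700000300, "chicago-amsterdam", 5.1e9),
        (1700003900, "chicago-amsterdam", 3.8e9),
        (1700000100, "seattle-tokyo", 9.0e9),
    ]

    hourly = defaultdict(list)
    for ts, link, bps in samples:
        hour = ts - (ts % 3600)              # truncate the timestamp to the hour
        hourly[(link, hour)].append(bps)

    for (link, hour), values in sorted(hourly.items()):
        avg = sum(values) / len(values)
        print(f"{link} @ {hour}: mean {avg / 1e9:.1f} Gb/s over {len(values)} samples")

Keeping only such rollups for older data trades fine-grained detail for the ability to serve multi-year visualizations without reprocessing the raw measurement stream.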