135 research outputs found

    Consistencia en la desagregación de la población. El problema del ruido y el age heaping

    The population disaggregated into single years of age is a basic tool for statistical offices, since it is used, for example, as the denominator in the calculation of indicators. However, population figures for some territorial levels are only available in grouped form: typically, the population distribution is published in quinquennial age groups plus an open-ended upper interval that accumulates the oldest population. A challenging problem faced by Official Statistics institutes, at both the national and the regional level, is how to degroup population data into single years of age while, when required, incorporating demographic knowledge and keeping the results consistent with the population aggregates of the territorial levels above the one being disaggregated, or even consistent over time. In this paper Mathematical Optimization models are proposed to address this real, yet seldom studied, problem. The proposed procedures also handle a frequent issue in this kind of statistical source: the presence of noise of various kinds in the available data and, in particular, the phenomenon known as age heaping. Ministerio de Economía y Competitividad; Junta de Andalucía
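The consistency requirement described above can be illustrated with a minimal sketch (not the paper's actual models): split quinquennial totals into single-year counts by minimising a roughness penalty subject to exact group-sum constraints, solved through the KKT system of the resulting equality-constrained quadratic program. The group totals and the smoothness criterion below are illustrative assumptions.

```python
import numpy as np

def disaggregate(group_totals, width=5):
    """Split grouped counts into single-year counts x that
    (i) reproduce each group total exactly and
    (ii) minimise the sum of squared second differences
    (a simple smoothness proxy, NOT the paper's models)."""
    g = len(group_totals)
    n = g * width
    # Aggregation matrix: A @ x returns the group sums.
    A = np.zeros((g, n))
    for k in range(g):
        A[k, k * width:(k + 1) * width] = 1.0
    # Second-difference operator, penalising roughness.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    Q = D.T @ D
    # KKT system for:  min x'Qx  subject to  A x = b.
    K = np.block([[Q, A.T], [A, np.zeros((g, g))]])
    rhs = np.concatenate([np.zeros(n), np.asarray(group_totals, float)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

totals = [500.0, 480.0, 450.0, 400.0]   # four hypothetical quinquennial groups
x = disaggregate(totals)                 # 20 single-year counts
```

By construction every quinquennial sum of `x` matches its published total, which is the consistency property the abstract emphasises; the smoothing objective is one of many that could be plugged into the same constraint structure.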

    Continuous location problems and Big Triangle Small Triangle: constructing better bounds

    The Big Triangle Small Triangle method has been shown to be a powerful global optimization procedure for continuous location problems. In the paper published in J. Global Optim. (37:305–319, 2007), Drezner proposes a rather general and effective approach for constructing the bounds needed. Such bounds are obtained by exploiting the fact that the objective functions in continuous location models can usually be expressed as a difference of convex functions. In this note we show that, by exploiting further the rich structure of such objective functions, alternative bounds can be derived, yielding a significant improvement in computing times, as reported in our numerical experience. Ministerio de Educación y Ciencia; Junta de Andalucía
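A toy illustration of the DC bounding idea behind such methods (neither Drezner's construction nor the improved bounds of this note): for f = g − h with g, h convex, a supporting hyperplane minorises g, while convexity means h attains its maximum over a triangle at a vertex, so their difference gives a valid lower bound over the triangle. The location instance below, with one negatively weighted demand point, is invented.

```python
import numpy as np

# Demand points and weights; the negative weight makes the objective
# a difference of convex functions  f = g - h  (a made-up instance).
pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [1.0, 1.0]])
w   = np.array([2.0, 1.0, 1.5, -1.0])

def g(x):  # convex part: positively weighted distances
    return sum(wi * np.linalg.norm(x - p) for wi, p in zip(w, pts) if wi > 0)

def h(x):  # convex part being subtracted: |negative|-weighted distances
    return sum(-wi * np.linalg.norm(x - p) for wi, p in zip(w, pts) if wi < 0)

def dc_lower_bound(tri):
    """Lower bound of f = g - h over a triangle:
    g is minorised by its supporting hyperplane at the centroid,
    h is majorised by its maximum over the vertices (convexity)."""
    tri = np.asarray(tri, float)
    c = tri.mean(axis=0)
    # Gradient of g at the centroid (g is smooth away from the points).
    sg = sum(wi * (c - p) / np.linalg.norm(c - p)
             for wi, p in zip(w, pts) if wi > 0)
    lin = lambda x: g(c) + sg @ (x - c)   # affine minorant of g
    lb_g = min(lin(v) for v in tri)       # affine min is at a vertex
    ub_h = max(h(v) for v in tri)         # convex max is at a vertex
    return lb_g - ub_h

tri = [[0.5, 0.5], [3.0, 0.5], [1.5, 2.5]]
lb = dc_lower_bound(tri)                  # valid lower bound of f on tri
```

The bound is what a branch-and-bound scheme would compare against the incumbent before deciding to split or discard the triangle; tighter bounds of this kind are exactly where the computing-time gains reported above come from.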

    Solving the median problem with continuous demand on a network

    Where to locate one or several facilities on a network so as to minimize the expected user-to-closest-facility transportation cost is a problem well studied in the OR literature under the name of the median problem. In the median problem users are usually identified with the nodes of the network. In many situations, however, such an assumption is unrealistic, since users are better modelled as being distributed along the edges of the transportation network as well. In this paper we address the median problem with demand distributed along edges and nodes. This leads to a global optimization problem, which can be solved to optimality by means of a branch-and-bound algorithm with DC bounds. Our computational experience shows that the problem is solved in a short time even for large instances. Ministerio de Educación, Cultura y Deporte; Junta de Andalucía; European Regional Development Fund
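The objective being optimised can be sketched for a single edge (this is only the evaluation of the expected cost, not the paper's branch-and-bound): a demand point at arclength t along an edge (u, v) reaches the facility through whichever endpoint is cheaper, and averaging that cost over the edge gives its contribution to the median objective. The distances and edge length below are made up.

```python
import numpy as np

def expected_edge_cost(d_u, d_v, L, n=200001):
    """Expected shortest-path cost from a facility to a demand point
    uniformly distributed along an edge (u, v) of length L, where
    d_u and d_v are the facility's distances to the two endpoints."""
    # A point at arclength t from u is reached via u (cost d_u + t)
    # or via v (cost d_v + L - t); demand uses the cheaper route.
    t = np.linspace(0.0, L, n)
    cost = np.minimum(d_u + t, d_v + (L - t))
    return float(cost.mean())            # average over the uniform demand

avg = expected_edge_cost(d_u=2.0, d_v=3.0, L=4.0)   # closed form: 3.4375
```

The piecewise-linear (and piecewise-concave) shape of the integrand is what makes DC bounds natural for this problem: summing such terms over all edges keeps the objective a difference of convex functions of the facility position.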

    Locating a competitive facility in the plane with a robustness criterion

    A new continuous location model is presented and embedded in the literature on robustness in facility location. The multimodality of the model is investigated, and a branch-and-bound method based on DC optimization is described. Numerical experience is reported, showing that the method developed allows one to solve, in a few seconds, problems with thousands of demand points. Ministerio de Ciencia e Innovación; Junta de Andalucía; European Regional Development Fund

    On minimax-regret Huff location models

    We address the following single-facility location problem: a firm enters a market by locating one facility in a region of the plane. The demand captured from each user by the facility is proportional to the user's buying power and inversely proportional to a function of the user–facility distance. Uncertainty exists about the buying power (weight) of the users. This is modeled by assuming that a set of scenarios exists, each scenario corresponding to a weight realization. The objective is to locate the facility following the Savage criterion, i.e., the minimax-regret location is sought. The problem is formulated as a global optimization problem whose objective is written as a difference of two convex monotonic functions. The numerical results obtained show that a branch and bound using this new method for obtaining bounds clearly outperforms benchmark procedures. Ministerio de Educación y Ciencia; Junta de Andalucía
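The Savage criterion itself can be sketched numerically (the paper's DC branch and bound is replaced here by a crude grid scan, and the users, competitor and weight scenarios are all invented): for each scenario, the regret of a location is the gap between the best capture attainable under that scenario and the capture actually achieved, and the minimax-regret location minimises the worst such gap.

```python
import numpy as np

# Invented users, competitor location and weight scenarios.
users = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 4.0]])
comp  = np.array([3.0, 3.0])              # existing competitor facility
scenarios = np.array([[3.0, 1.0, 2.0],    # one weight per user, per scenario
                      [1.0, 3.0, 2.0],
                      [2.0, 2.0, 2.0]])

def capture(x, w):
    """Huff-type capture: each user's weight is split between the new
    facility and the competitor in inverse proportion to (1 + d^2)."""
    d2_new  = ((users - x) ** 2).sum(axis=1)
    d2_comp = ((users - comp) ** 2).sum(axis=1)
    a_new, a_comp = 1.0 / (1.0 + d2_new), 1.0 / (1.0 + d2_comp)
    return float(w @ (a_new / (a_new + a_comp)))

# Candidate locations on a coarse grid (a scan, not the DC method).
grid = [np.array([gx, gy]) for gx in np.linspace(0, 5, 26)
                           for gy in np.linspace(0, 5, 26)]
best_per_scenario = [max(capture(x, w) for x in grid) for w in scenarios]
regret = lambda x: max(b - capture(x, w)
                       for b, w in zip(best_per_scenario, scenarios))
x_star = min(grid, key=regret)            # minimax-regret location on the grid
```

Swapping the grid scan for the DC-based branch and bound is precisely what lets the paper certify global optimality instead of grid optimality.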

    Maximal covering location problems on networks with regional demand

    Covering problems are well studied in the Operations Research literature under the assumption that both the set of users and the set of potential facilities are finite. In this paper we address the following variant, which leads to a Mixed Integer Nonlinear Program (MINLP): locations of p facilities are sought along the edges of a network so that the expected demand covered is maximized, where demand is continuously distributed along the edges. This MINLP has a combinatorial part (which edges of the network are chosen to contain facilities) and a continuous global optimization part (once the edges are chosen, which are the optimal locations within those edges). A branch-and-bound algorithm is proposed, which exploits the structure of the problem: specialized data structures are introduced to cope successfully with the combinatorial part, inserted in a geometric branch and bound. Computational results are presented, showing the suitability of our procedure for solving covering problems for small (but nontrivial) values of p. Unión Europea; Ministerio de Ciencia e Innovación; Junta de Andalucía
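The continuous coverage term can be sketched for one edge (a sampled evaluation only, not the MINLP or its branch and bound; the distances, length and radius are invented): the demand covered on an edge is the measure of points whose shortest-path distance to the facility is within the coverage radius.

```python
import numpy as np

def covered_fraction(d_u, d_v, L, r, n=100001):
    """Fraction of an edge (u, v) of length L whose demand lies within
    coverage radius r of a facility, where d_u and d_v are the
    facility's shortest-path distances to the endpoints."""
    t = np.linspace(0.0, L, n)
    # Each point is reached via the cheaper of the two endpoints.
    dist = np.minimum(d_u + t, d_v + (L - t))
    return float(np.mean(dist <= r))

# Coverage splits into two sub-intervals, one reached via each endpoint.
frac = covered_fraction(d_u=1.0, d_v=2.0, L=4.0, r=3.0)   # exactly 0.75
```

Because the covered set on an edge is a union of at most two intervals with breakpoints that move linearly with the facility position, the expected covered demand is piecewise linear in that position, which is the structure the geometric branch and bound exploits.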

    Inferring efficient weights from pairwise comparison matrices

    Several multi-criteria decision-making methodologies assume the existence of weights associated with the different criteria, reflecting their relative importance. One of the most popular ways to infer such weights is the analytic hierarchy process, which first constructs a matrix of pairwise comparisons, from which weights are derived following one of many existing procedures, such as the eigenvector method or least (logarithmic) squares. Since different procedures yield different results (weights), we pose the problem of describing the set of weights obtained by “sensible” methods: those which are efficient for the (vector-) optimization problem of simultaneous minimization of discrepancies. A characterization of the set of efficient solutions is given, which enables us to assert that the least-logarithmic-squares solution is always efficient, whereas the (widely used) eigenvector solution is in some cases not efficient, so its use in practice may be questionable. Ministerio de Ciencia y Tecnología; Fondo Europeo de Desarrollo Regional
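The least-logarithmic-squares solution named above has a well-known closed form worth showing: minimising the sum of squared log-discrepancies between A[i, j] and w_i/w_j is solved by taking the geometric mean of each row of the comparison matrix and normalising. The 3×3 matrix below is an illustrative, perfectly consistent example.

```python
import numpy as np

# Pairwise comparison matrix: A[i, j] estimates w_i / w_j.
A = np.array([[1.0, 2.0, 6.0],
              [0.5, 1.0, 3.0],
              [1/6, 1/3, 1.0]])

# Least-logarithmic-squares weights: the minimiser of
#   sum_{i,j} (log A[i,j] - log(w_i / w_j))^2
# is  w_i proportional to the geometric mean of row i
# (the solution the abstract reports to be always efficient).
w = np.exp(np.log(A).mean(axis=1))
w /= w.sum()          # normalise so the weights sum to one
```

Since this matrix is consistent, the recovered weights reproduce every entry exactly (w[0]/w[1] = 2, w[0]/w[2] = 6); for an inconsistent matrix the same formula still returns the efficient least-logarithmic-squares compromise.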

    Exercises using a touchscreen tablet application improved functional ability more than an exercise program prescribed on paper in people after surgical carpal tunnel release: a randomised trial

    Question: In people who have undergone surgical carpal tunnel release, do sensorimotor-based exercises performed on the touchscreen of a tablet device improve outcomes more than a conventional home exercise program prescribed on paper? Design: Randomised, parallel-group trial with concealed allocation, assessor blinding, and intention-to-treat analysis. Participants: Fifty participants within 10 days of surgical carpal tunnel release. Intervention: Each participant was prescribed a 4-week home exercise program. Participants in the experimental group received the ReHand tablet application, which administered and monitored exercises via the touchscreen. The control group was prescribed a home exercise program on paper, as is usual practice in the public hospital system. Outcome measures: The primary outcome was functional ability of the hand, reported using the shortened form of the Disabilities of the Arm, Shoulder and Hand (QuickDASH) questionnaire. Secondary outcomes were grip strength, pain intensity measured on a 10-cm visual analogue scale, and dexterity measured with the Nine-Hole Peg Test. Outcomes were measured by a blinded assessor at baseline and at the end of the 4-week intervention period. Results: At Week 4, functional ability had improved significantly more in the experimental group than in the control group (MD –21, 95% CI –33 to –9) on the QuickDASH score (0 to 100). Although the mean estimates of effect on the secondary outcomes all also favoured the experimental group, none reached statistical significance: grip strength (MD 5.6 kg, 95% CI –0.5 to 11.7), pain (MD –1.4 cm, 95% CI –2.9 to 0.1), and dexterity (MD –1.3 seconds, 95% CI –3.7 to 1.1). Conclusion: Use of the ReHand tablet application for early rehabilitation after carpal tunnel release is more effective for the recovery of functional ability than a conventional home exercise program. It remains unclear whether there are any benefits in grip strength, pain or dexterity.
Trial registration: ACTRN12618001887268

    Cost-sensitive feature selection for support vector machines

    Feature Selection (FS) is a crucial procedure in Data Science tasks such as Classification, since it identifies the relevant variables, thus making classification procedures more interpretable and more effective by reducing noise and overfitting. The relevance of features in a classification procedure is linked to the fact that misclassification costs are frequently asymmetric, since false positive and false negative cases may have very different consequences. However, off-the-shelf FS procedures seldom take such cost-sensitivity of errors into account. In this paper we propose a mathematical-optimization-based FS procedure embedded in one of the most popular classification procedures, namely Support Vector Machines (SVM), accommodating asymmetric misclassification costs. The key idea is to replace the traditional margin maximization by minimizing the number of features selected, while imposing upper bounds on the false positive and false negative rates. The problem is written as an integer linear problem plus a quadratic convex problem for SVM with both linear and radial kernels. The reported numerical experience demonstrates the usefulness of the proposed FS procedure: our results on benchmark data sets show that a substantial decrease in the number of features is obtained, whilst the desired trade-off between false positive and false negative rates is achieved.
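The key idea — minimise the number of features subject to separate bounds on the false positive and false negative rates — can be sketched with a greedy surrogate (the paper instead solves an integer linear plus convex quadratic SVM formulation; the nearest-centroid classifier, the synthetic data and the bounds below are all stand-in assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: only the first two of six features are informative.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y

def rates(features):
    """FP/FN rates of a nearest-centroid rule on the chosen features
    (a stand-in classifier; the paper embeds the selection in an SVM)."""
    Z = X[:, features]
    c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Z - c1, axis=1)
            < np.linalg.norm(Z - c0, axis=1)).astype(int)
    fp = float(np.mean(pred[y == 0] == 1))
    fn = float(np.mean(pred[y == 1] == 0))
    return fp, fn

def greedy_select(alpha=0.2, beta=0.2):
    """Add features one at a time until both bounds hold: a greedy
    surrogate for 'minimise #features s.t. FP <= alpha, FN <= beta'."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        best = min(remaining, key=lambda f: sum(rates(selected + [f])))
        selected.append(best)
        remaining.remove(best)
        fp, fn = rates(selected)
        if fp <= alpha and fn <= beta:
            break
    return selected

feats = greedy_select()   # typically picks only the informative features
```

Unlike this greedy pass, the exact formulation certifies that no smaller feature subset satisfies the two rate bounds, which is what makes the cost-sensitivity guarantee meaningful.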

    p-facility Huff location problem on networks

    The p-facility Huff location problem aims at locating facilities in a competitive environment so as to maximize market share. While it has been studied in depth in the field of continuous location, in this paper we study the p-facility Huff location problem on networks, formulated as a Mixed Integer Nonlinear Programming problem that can be solved by a branch-and-bound algorithm. We propose two approaches for the initialization and division of subproblems: the first is based on the straightforward idea of enumerating every possible combination of p edges of the network as possible locations, and the second defines sophisticated data structures that exploit the structure of the combinatorial and continuous parts of the problem. Bounding rules are designed using DC (difference of convex) and Interval Analysis tools. In our computational study we compare the two approaches on a battery of 21 networks and show that both of them can handle problems for p ≤ 4 in reasonable computing time. Ministerio de Economía y Competitividad; Junta de Andalucía; Hungarian National Research, Development and Innovation Office; Information and Communication Technologies COST
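The market-share objective on a network can be sketched in a nodes-only simplification (the paper locates facilities continuously along edges and prunes with DC/interval bounds; here candidate sites are restricted to nodes and the tiny network, demands and gravity rule are invented): shortest-path distances feed a Huff gravity rule that splits each node's demand between the new facilities and a competitor.

```python
import numpy as np
from itertools import combinations

# Small invented network as a weighted adjacency matrix (inf = no edge).
inf = np.inf
W = np.array([[0,   4,  inf, 2,   inf],
              [4,   0,  3,   inf, 1],
              [inf, 3,  0,   5,   2],
              [2,   inf, 5,  0,   6],
              [inf, 1,  2,   6,   0]], float)

# All-pairs shortest paths (Floyd-Warshall).
D = W.copy()
n = len(D)
for k in range(n):
    D = np.minimum(D, D[:, [k]] + D[[k], :])

demand = np.array([10.0, 20.0, 15.0, 5.0, 30.0])
competitor = 3                       # competitor sits at node 3

def market_share(facs):
    """Huff share captured by the facilities `facs`: each node splits
    its demand in proportion to the attraction 1 / (1 + d^2)."""
    attr_new  = sum(1.0 / (1.0 + D[f] ** 2) for f in facs)
    attr_comp = 1.0 / (1.0 + D[competitor] ** 2)
    return float(demand @ (attr_new / (attr_new + attr_comp)))

# p = 2: the nodes-only sketch simply scans all node pairs.
best = max(combinations(range(n), 2), key=market_share)
```

Allowing the two facilities to slide continuously along edges turns this finite scan into the MINLP of the paper, with the two subproblem-division schemes deciding how the edge combinations are enumerated and pruned.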