3,011 research outputs found

    LASSO ISOtone for High Dimensional Additive Isotonic Regression

    Additive isotonic regression attempts to determine the relationship between a multi-dimensional observation variable and a response, under the constraint that the estimate is the additive sum of univariate component effects that are monotonically increasing. In this article, we present a new method for such regression called LASSO Isotone (LISO). LISO adapts ideas from sparse linear modelling to additive isotonic regression. Thus, it is viable in many situations with high-dimensional predictor variables, where selection of significant versus insignificant variables is required. We suggest an algorithm involving a modification of the backfitting algorithm CPAV. We give a numerical convergence result, and finally examine some of its properties through simulations. We also suggest some possible extensions that improve performance and allow calculation to be carried out when the direction of the monotonicity is unknown.
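    A minimal backfitting sketch of the general idea (an isotonic fit per component combined with an L1-style shrinkage step) is given below; the shrinkage rule, stopping criterion, and use of scikit-learn's IsotonicRegression are illustrative assumptions, not the authors' exact LISO / modified-CPAV procedure.

```python
# Backfitting sketch for sparse additive isotonic regression.
# Illustration only: the shrinkage heuristic is an assumption, not LISO.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def additive_isotonic_lasso(X, y, lam=0.1, n_iter=50):
    n, p = X.shape
    f = np.zeros((n, p))                       # fitted component effects f_j(x_ij)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - y.mean() - f.sum(axis=1) + f[:, j]   # partial residual
            fit = IsotonicRegression(increasing=True).fit_transform(X[:, j], resid)
            fit -= fit.mean()                  # centre the component
            # crude L1-style shrinkage towards zero to induce sparsity (assumption)
            scale = max(0.0, 1.0 - lam / (np.abs(fit).mean() + 1e-12))
            f[:, j] = scale * fit
    return f

# Toy usage: only the first variable truly matters.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))
y = 2 * X[:, 0] ** 2 + 0.1 * rng.standard_normal(200)
components = additive_isotonic_lasso(X, y)
print(np.round(np.abs(components).mean(axis=0), 3))  # first entry should dominate
```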

    On the complexity of range searching among curves

    Modern tracking technology has made the collection of large numbers of densely sampled trajectories of moving objects widely available. We consider a fundamental problem encountered when analysing such data: given $n$ polygonal curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that answers queries with a query curve $q$ and radius $\rho$ for the curves of $S$ that have Fréchet distance at most $\rho$ to $q$. We initiate a comprehensive analysis of the space/query-time trade-off for this data structuring problem. Our lower bounds imply that any data structure in the pointer model that achieves $Q(n) + O(k)$ query time, where $k$ is the output size, has to use roughly $\Omega\left((n/Q(n))^2\right)$ space in the worst case, even if queries are mere points (for the discrete Fréchet distance) or line segments (for the continuous Fréchet distance). More importantly, we show that more complex queries and input curves lead to additional logarithmic factors in the lower bound. Roughly speaking, the number of logarithmic factors added is linear in the number of edges added to the query and input curve complexity. This means that the space/query-time trade-off worsens by an exponential factor of input and query complexity. This behaviour addresses an open question in the range searching literature: whether it is possible to avoid the additional logarithmic factors in the space and query time of a multilevel partition tree. We answer this question negatively. On the positive side, we show that we can build data structures for the Fréchet distance by using semialgebraic range searching. Our solution for the discrete Fréchet distance is in line with the lower bound, as the number of levels in the data structure is $O(t)$, where $t$ denotes the maximal number of vertices of a curve. For the continuous Fréchet distance, the number of levels increases to $O(t^2)$.
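    As a point of reference for the trade-off discussed above, the sketch below implements the standard dynamic program for the discrete Fréchet distance and answers range queries by a brute-force linear scan; it is a baseline for illustration only, not the multilevel or semialgebraic range-searching structures of the paper.

```python
# Brute-force range queries under the discrete Fréchet distance.
import numpy as np
from functools import lru_cache

def discrete_frechet(P, Q):
    """Standard O(|P||Q|) dynamic program for the discrete Fréchet distance."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)

    @lru_cache(maxsize=None)
    def c(i, j):
        d = np.linalg.norm(P[i] - Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

def range_query(S, q, rho):
    """Return indices of curves in S with discrete Fréchet distance <= rho to q."""
    return [i for i, P in enumerate(S) if discrete_frechet(P, q) <= rho]

# Toy usage with three short curves in the plane.
S = [[(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], [(0, 5), (2, 5)]]
print(range_query(S, q=[(0, 0), (2, 0)], rho=1.5))   # -> [0, 1]
```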

    The Theoretical Regularity Properties of the Normalized Quadratic Consumer Demand Model

    We conduct a Monte Carlo study of the global regularity properties of the Normalized Quadratic model. We particularly investigate monotonicity violations, as well as the performance of methods for locally and globally imposing curvature. We find that monotonicity violations are especially likely to occur when elasticities of substitution are greater than unity. We also find that imposing curvature locally produces difficulty in the estimation, smaller regular regions, and poor elasticity estimates in many of the cases considered in the paper. Imposition of curvature alone does not assure regularity, and imposing local curvature alone can have very adverse consequences.
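    The kind of pointwise regularity check underlying such Monte Carlo studies can be sketched as follows: monotonicity is taken here as positivity of the fitted demands, and curvature as negative semidefiniteness of the substitution (Slutsky) matrix. The inputs below are placeholders, not the Normalized Quadratic specification estimated in the paper.

```python
# Generic regularity check at one evaluation point (illustrative inputs).
import numpy as np

def check_regularity(demands, slutsky, tol=1e-10):
    """Return (monotonicity_ok, curvature_ok) at one evaluation point."""
    monotonicity_ok = bool(np.all(demands > tol))
    # negative semidefinite <=> all eigenvalues of the symmetrised matrix <= 0
    eigvals = np.linalg.eigvalsh(0.5 * (slutsky + slutsky.T))
    curvature_ok = bool(np.all(eigvals <= tol))
    return monotonicity_ok, curvature_ok

# Toy usage: a point that is monotone but violates curvature.
demands = np.array([0.4, 0.35, 0.25])
slutsky = np.array([[-0.20,  0.10,  0.05],
                    [ 0.10,  0.30, -0.10],   # positive own-substitution term
                    [ 0.05, -0.10, -0.15]])
print(check_regularity(demands, slutsky))    # -> (True, False)
```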

    Methods for the treatment of uncertainty in dynamical systems: Application to diabetes

    Patients suffering from Type 1 Diabetes are not able to secrete insulin and therefore have to have it administered externally. Current research is focused on developing an artificial pancreas, a control system that automatically administers insulin according to the patient's needs. The work presented here aims to improve the efficiency and safety of control algorithms for the artificial pancreas. Glucose-insulin models try to mimic the administration of external insulin, the absorption of carbohydrates, and the influence of both on blood glucose concentration. However, these processes are extremely complex and are characterized by their high variability. The mathematical models used are often simplified versions which do not capture all of the process variability and, therefore, do not always match reality. This deficiency of the models can be addressed by considering uncertainty in their parameters and initial conditions. In this way, the exact values are unknown but they can be bounded by intervals that encompass all the variability of the considered process. When the values of the parameters and initial conditions are known, there is usually just one possible behaviour. However, if they are bounded by intervals, a set of possible solutions exists. In this case, it is of interest to compute a solution envelope that guarantees the inclusion of all the possible behaviours. A common technique for computing this envelope is monotonicity analysis of the system. Nevertheless, some overestimation is produced if the system is not fully monotone. In this thesis, several methods and approaches have been developed to reduce, or even eliminate, the overestimation in the computation of solution envelopes while preserving the inclusion guarantee. Another problem found during the use of an artificial pancreas is that only the subcutaneous glucose concentration can be measured in real time, with some noise in the measurements. The rest of the system states are unknown, but they can be estimated from this set of noisy measurements by state observers, such as Kalman filters. A detailed example is shown at the end of this thesis, where an Extended Kalman Filter is used to estimate plasma insulin concentration in real time based on the food ingested and on periodic measurements of subcutaneous glucose.
    Pereda Sebastián, DD. (2015). Methods for the treatment of uncertainty in dynamical systems: Application to diabetes [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/54121
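    A minimal Extended Kalman Filter sketch in the spirit of the final example is shown below: an unmeasured insulin-related state is estimated from noisy glucose readings. The two-state model, its parameter values, and the noise covariances are deliberately simplified placeholders, not the glucose-insulin model used in the thesis.

```python
# EKF sketch: estimate an unmeasured insulin-effect state from noisy glucose.
# The model and all constants are simplified placeholders (assumptions).
import numpy as np

dt = 5.0                          # sampling period [min]
p1, p2, p3 = 0.02, 0.03, 1e-4     # illustrative model constants
Gb = 90.0                         # basal glucose [mg/dL]

def f(x, u):
    """Euler-discretised state transition: x = [G, X], u = insulin input."""
    G, X = x
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + p3 * u
    return np.array([G + dt * dG, X + dt * dX])

def F_jac(x):
    """Jacobian of f with respect to the state."""
    G, X = x
    return np.array([[1 + dt * (-p1 - X), dt * (-G)],
                     [0.0,                1 - dt * p2]])

H = np.array([[1.0, 0.0]])        # only glucose is measured
Q = np.diag([1.0, 1e-8])          # process noise covariance (assumed)
R = np.array([[4.0]])             # measurement noise variance (assumed)

x_true = np.array([150.0, 0.0])   # "real" patient state, unknown to the filter
x, P = np.array([120.0, 0.0]), np.eye(2)
rng = np.random.default_rng(1)
for k in range(100):
    u = 20.0 if k == 10 else 0.0               # a single insulin bolus
    x_true = f(x_true, u)                      # evolve the real system
    z = x_true[0] + rng.standard_normal() * 2.0   # noisy glucose measurement
    # --- predict ---
    Fk = F_jac(x)
    x = f(x, u)
    P = Fk @ P @ Fk.T + Q
    # --- update ---
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
print(np.round(x, 3), np.round(x_true, 3))     # estimate vs. true state
```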

    Efficient Solving of Quantified Inequality Constraints over the Real Numbers

    Let a quantified inequality constraint over the reals be a formula in the first-order predicate language over the structure of the real numbers, where the allowed predicate symbols are $\leq$ and $<$. Solving such constraints is an undecidable problem when allowing function symbols such as $\sin$ or $\cos$. In this paper we give an algorithm that terminates with a solution for all but very special, pathological inputs. We ensure the practical efficiency of this algorithm by employing constraint programming techniques.
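    A toy branch-and-prune sketch of the interval-based idea behind such solvers is given below: a universally quantified inequality is certified by evaluating the left-hand side with naive interval arithmetic and bisecting boxes whose interval image straddles zero. It handles only one variable and a polynomial constraint, and is not the algorithm of the paper.

```python
# Branch-and-prune certification of "forall x in [lo, hi]: f(x) < 0"
# with naive interval arithmetic (toy illustration, one variable only).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        prods = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(prods), max(prods))
    def scale(self, c):
        return Interval(min(c * self.lo, c * self.hi), max(c * self.lo, c * self.hi))

def f(x):                          # f(x) = x^2 - 2x, evaluated over an interval
    return x * x + x.scale(-2.0)

def prove_negative(lo, hi, depth=0, max_depth=30):
    """True if f(x) < 0 is verified for all x in [lo, hi]."""
    img = f(Interval(lo, hi))
    if img.hi < 0:                 # inequality certified on this box
        return True
    if img.lo >= 0 or depth >= max_depth:
        return False               # refuted, or box too hard to decide
    mid = 0.5 * (lo + hi)          # bisect and recurse on both halves
    return (prove_negative(lo, mid, depth + 1, max_depth) and
            prove_negative(mid, hi, depth + 1, max_depth))

print(prove_negative(0.1, 1.0))    # -> True: x^2 - 2x < 0 on [0.1, 1]
print(prove_negative(-0.5, 1.0))   # -> False: fails near x = 0
```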

    Parameter Synthesis in Markov Models: A Gentle Survey

    This paper surveys the analysis of parametric Markov models, whose transitions are labelled with functions over a finite set of parameters. These models are symbolic representations of uncountably many concrete probabilistic models, each obtained by instantiating the parameters. We consider various analysis problems for a given logical specification $\varphi$: do all parameter instantiations within a given region of parameter values satisfy $\varphi$? Which instantiations satisfy $\varphi$ and which ones do not? How can all such instantiations be characterised, either exactly or approximately? We address theoretical complexity results and describe the main ideas underlying state-of-the-art algorithms that have established an impressive leap over the last decade, enabling the fully automated analysis of models with millions of states and thousands of parameters.
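    The sketch below illustrates the verification question on a toy parametric Markov chain: a single parameter p is instantiated on a grid over a region and each instantiation is checked against a reachability specification. The model and threshold are invented for illustration, and real tools reason over regions symbolically rather than by sampling.

```python
# Toy parametric Markov chain: instantiate p and check a reachability spec.
import numpy as np

def reach_probability(p):
    """Probability of eventually reaching the target state in a 4-state pMC.

    States: 0 (init), 1 (retry), 2 (target, absorbing), 3 (fail, absorbing).
    From 0: reach the target with prob p, otherwise move to the retry state.
    From 1: reach the target with prob p/2, otherwise fail.
    """
    # Solve x_s = sum_t P(s, t) * x_t for the transient states, with x_target = 1.
    A = np.array([[1.0, -(1 - p)],
                  [0.0, 1.0]])
    b = np.array([p, p / 2])
    return np.linalg.solve(A, b)[0]

def region_satisfies(phi_threshold, p_low, p_high, samples=101):
    """Grid-check whether all instantiations in [p_low, p_high] satisfy phi."""
    grid = np.linspace(p_low, p_high, samples)
    return all(reach_probability(p) >= phi_threshold for p in grid)

print(round(reach_probability(0.7), 4))    # single instantiation
print(region_satisfies(0.8, 0.6, 0.9))     # whole-region check -> False
print(region_satisfies(0.8, 0.75, 0.9))    # smaller region      -> True
```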