
    Quasirandom Load Balancing

    We propose a simple distributed algorithm for balancing indivisible tokens on graphs. The algorithm is completely deterministic, though it tries to imitate (and enhance) a random algorithm by keeping the accumulated rounding errors as small as possible. Our new algorithm surprisingly closely approximates the idealized process (where the tokens are divisible) on important network topologies. On d-dimensional torus graphs with n nodes it deviates from the idealized process only by an additive constant. In contrast, the randomized rounding approach of Friedrich and Sauerwald (2009) can deviate by up to Omega(polylog(n)) and the deterministic algorithm of Rabani, Sinclair and Wanka (1998) has a deviation of Omega(n^{1/d}). This makes our quasirandom algorithm the first known algorithm for this setting that is optimal both in time and in achieved smoothness. We further show that our algorithm also has a smaller deviation from the idealized process than the previous algorithms on the hypercube.
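
    A minimal sketch of the quasirandom idea described above, assuming a synchronous d-regular setting: round the flow of the idealized (divisible) process on every edge deterministically, carrying the accumulated rounding error forward so it stays below one token per edge. The interface and variable names are our own, not the authors' pseudocode.

```python
from collections import defaultdict

def balance_round(load, adj, err):
    """One synchronous round on a d-regular graph given as adjacency lists."""
    sent = {}
    for u, nbrs in adj.items():
        d = len(nbrs)
        remaining = load[u]
        for v in nbrs:
            ideal = load[u] / (d + 1)                        # flow of the divisible process
            flow = min(int(ideal + err[(u, v)]), remaining)  # deterministic rounding
            err[(u, v)] += ideal - flow                      # carry the error forward
            remaining -= flow
            sent[(u, v)] = flow
    new = dict(load)
    for (u, v), f in sent.items():
        new[u] -= f
        new[v] += f
    return new

# Example: 100 tokens on one node of an 8-cycle (a 1-dimensional torus).
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
load = {i: 100 if i == 0 else 0 for i in range(8)}
err = defaultdict(float)
for _ in range(200):
    load = balance_round(load, adj, err)
print(load)  # loads settle within a small additive constant of the average
```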

    An Exponential Lower Bound for the Runtime of the cGA on Jump Functions

    In the first runtime analysis of an estimation-of-distribution algorithm (EDA) on the multimodal jump function class, Hasenöhrl and Sutton (GECCO 2018) proved that, with high probability, the runtime of the compact genetic algorithm with a suitable parameter choice on jump functions is at most polynomial (in the dimension) if the jump size is at most logarithmic (in the dimension), and at most exponential in the jump size if the jump size is super-logarithmic. The exponential runtime guarantee was achieved with a hypothetical population size that is also exponential in the jump size; consequently, that setting cannot lead to a better runtime. In this work, we show that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings.
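
    For concreteness, here is a hedged sketch of the two ingredients the result concerns: the standard jump function and the textbook compact genetic algorithm (cGA) with hypothetical population size K. These are the usual definitions from the literature, not code from the paper.

```python
import random

def jump(x, k):
    """Jump function: OneMax shifted up by k, with a fitness valley of width k before the optimum."""
    n, m = len(x), sum(x)
    return m + k if (m <= n - k or m == n) else n - m

def cga(n, k, K, max_iters=10**6):
    """Compact GA on jump; returns the iteration at which the optimum was sampled."""
    p = [0.5] * n                                    # frequency vector
    for t in range(max_iters):
        a = [random.random() < pi for pi in p]       # sample two offspring
        b = [random.random() < pi for pi in p]
        if jump(a, k) < jump(b, k):
            a, b = b, a                              # a is now the winner
        for i in range(n):
            p[i] += (a[i] - b[i]) / K                # shift frequencies toward the winner
            p[i] = min(max(p[i], 1 / n), 1 - 1 / n)  # clamp to the usual boundaries
        if sum(a) == n:
            return t
    return max_iters
```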

    Testing probability distributions underlying aggregated data

    In this paper, we analyze and study a hybrid model for testing and learning probability distributions. Here, in addition to samples, the testing algorithm is provided with one of two different types of oracles to the unknown distribution D over [n]. More precisely, we define the dual and cumulative dual access models, in which the algorithm A can both sample from D and, for any i ∈ [n], respectively either query the probability mass D(i) (query access) or get the total mass of {1, …, i}, i.e. ∑_{j=1}^{i} D(j) (cumulative access). These two models, by generalizing the previously studied sampling and query oracle models, allow us to bypass the strong lower bounds established for a number of problems in these settings, while capturing several interesting aspects of these problems -- and providing new insight on the limitations of the models. Finally, we show that while the testing algorithms can in most cases be strictly more efficient, some tasks remain hard even with this additional power.
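
    As a sketch of what the two access models provide (class and method names are ours, not the paper's), both extend plain sampling access to D over [n]:

```python
import bisect
import random

class DualAccess:
    """Dual access: sampling from D plus query access to the mass D(i)."""
    def __init__(self, probs):        # probs[i-1] = D(i), assumed to sum to 1
        self.probs = probs
        self.cum, s = [], 0.0
        for q in probs:
            s += q
            self.cum.append(s)

    def sample(self):                 # sampling oracle: draw i with probability D(i)
        return bisect.bisect_left(self.cum, random.random()) + 1

    def pmf(self, i):                 # query access: D(i)
        return self.probs[i - 1]

class CumulativeDualAccess(DualAccess):
    """Cumulative dual access: sampling plus the total mass of {1, ..., i}."""
    def cdf(self, i):                 # cumulative access: sum_{j<=i} D(j)
        return self.cum[i - 1]
```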

    Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury

    Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level context with low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the typical location of mTBI. The visual model utilizes texture features in MRI along with a probabilistic support vector machine to maximize the discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesion. The models are tested on a novel dataset from a rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual texture features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit both clinicians, by speeding diagnosis, and patients, by improving clinical care.
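
    A minimal sketch of the fusion idea, heavily simplified from the paper's pipeline: a probabilistic SVM scores each voxel from texture features, a contextual prior scores it from location and time since injury, and the two probability maps are combined per voxel. The weighted-average fusion below is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def train_visual_model(features, labels):
    """features: (n_voxels, n_texture_features); labels: binary lesion mask."""
    svm = SVC(probability=True)       # Platt-scaled, i.e. a probabilistic SVM
    svm.fit(features, labels)
    return svm

def fuse(svm, features, context_prior, w=0.5):
    """Combine the visual posterior with the contextual prior (w is an assumed weight)."""
    p_visual = svm.predict_proba(features)[:, 1]
    return w * p_visual + (1 - w) * context_prior   # fused lesion probability map
```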

    Moment Problems with Applications to Value-At-Risk and Portfolio Management

    Moment Problems with Applications to Value-At-Risk and Portfolio Management. By Ruilin Tian, May 2008. Committee Chair: Dr. Samuel H. Cox. Major Department: Risk Management and Insurance.

    My dissertation provides new applications of moment theory and optimization to financial and insurance risk management. In the investment and managerial areas, one often needs to determine some measure of risk, especially the risk of extreme events. However, complete information about the underlying outcomes is usually unavailable; instead, one has access to partial information such as the mean, variance, mode, or range. In Chapters 2 and 3, we find semiparametric upper and lower bounds for value-at-risk (VaR) under incomplete information, that is, knowing only moments of the underlying distribution. When a single variable is concerned, bounds on VaR are computed to obtain a 100% confidence interval. When the sample financial data have a global maximum, we show that the unimodality assumption tightens the optimal bounds. Next we analyze a function of two correlated random variables; specifically, we find bounds on the probability of two joint extreme events. When three or more variables are involved, the multivariate problem can sometimes be converted to a single-variable problem. In all cases, we use the physical measure rather than the commonly used equivalent pricing probability measure. In addition to solving these problems with the traditional approach based on the geometry of a moment problem, a more efficient method is proposed to solve a general class of moment bounds via semidefinite programming.

    In the last part of the thesis, we apply optimization techniques to improve financial portfolio risk management. Instead of considering VaR, we work with a coherent risk measure, the conditional VaR (CVaR). As an extension of Krokhmal et al. (2002), we impose CVaR-related functions on the portfolio selection problem. The CVaR approach sets a β-level CVaR as the objective function and optimizes the worst case on the tail of the distribution. The CVaR-like constraints approach adds a set of CVaR-like constraints to the traditional Markowitz problem, reshaping the portfolio distribution. Both methods greatly increase the skewness of portfolios, although the CVaR approach may lose control of the variance. This capability of increasing skewness is very attractive to investors who prefer a higher probability of obtaining higher returns. We compare the CVaR-related approaches to other popular portfolio optimization methods; our numerical analysis provides empirical support for the superiority of the CVaR-like constraints approach in terms of portfolio efficiency.
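
    The CVaR objective mentioned above admits the standard Rockafellar-Uryasev linear-programming formulation over return scenarios; the sketch below (our choice of solver and variable names, not the dissertation's code) minimizes the β-level CVaR of the portfolio loss.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(R, beta=0.95):
    """R: (S, n) scenario return matrix; returns long-only weights minimizing CVaR_beta of the loss -R @ w."""
    S, n = R.shape
    # Decision vector [w (n), t (1), u (S)]; CVaR = t + mean(u) / (1 - beta).
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])
    # Scenario constraints: -R[s] @ w - t - u_s <= 0, i.e. u_s >= loss_s - t.
    A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # Budget constraint: weights sum to one.
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n]
```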

    'Extreme support for UIP' revisited: how come the dogs don't bark?

    Abstract: Conjecturing that, in testing UIP, transaction costs may have obscured the relation between expected exchange-rate changes and forward premia, Huisman et al. (1998) focus on days with unusually large cross-sectional variances in forward premia (reflecting, assumedly, episodes with pronounced expectations). They find encouragingly high regression coefficients for those special days: 'extreme' support, in short. We show that, for extreme forward premia to be primarily due to a clear signal rather than loud noise, the signal needs to be thicker-tailed than the noise. Transaction-cost-induced noise seems to have promising properties: percentage deviations from the perfect-markets equilibrium should be (i) bounded (that is, they have no tails and, therefore, cannot dominate the extreme forward premia), (ii) wide (that is, they may generate betas below 1/2) and (iii) U-shaped in distribution, a feature that turns out to make an 'extreme' sample quite effective. We derive theoretical and numerical results in the direction of what Huisman et al. observe.
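
    A small Monte Carlo sketch of the tail-thickness argument (our own illustration, not the paper's model): forward premia are a fat-tailed signal plus bounded transaction-cost noise, so days with extreme cross-sectional variance are dominated by the signal and the regression beta rises on those days.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 10                                       # days, currencies
signal = rng.standard_t(df=3, size=(T, N)) * 0.002    # fat-tailed expectations
noise = rng.uniform(-0.005, 0.005, size=(T, N))       # bounded band noise
fp = signal + noise                                   # observed forward premia
dx = signal + rng.normal(0.0, 0.01, size=(T, N))      # realized FX changes

def beta(x, y):
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

extreme = fp.var(axis=1) > np.quantile(fp.var(axis=1), 0.9)  # top-variance days
print("all days:", beta(fp, dx))                        # attenuated by the noise
print("extreme days:", beta(fp[extreme], dx[extreme]))  # closer to 1
```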