
    Dynamical Models of Extreme Rolling of Vessels in Head Waves

    Rolling of a ship is a swinging motion around its longitudinal axis. Vessels transporting containers in particular may show large-amplitude roll when sailing in seas with large head waves. The dynamics of the ship are such that rolling interacts with heave, the vertical motion of the ship's centre of mass. Due to the shape of the vessel's hull, its heave is influenced considerably by the phase of the wave as it passes the ship. The interaction of heave and roll can be modelled by a mass-spring-pendulum system, with the effect of waves included as a periodic forcing term. As a first step the damping of the spring can be taken infinitely large, making the system a pendulum whose suspension point moves periodically in the vertical direction. For small angular deflections the roll motion is then described by the Mathieu equation with a periodic forcing. If the period of the solution of the unforced equation is about twice the period of the forcing, the oscillation becomes unstable and its amplitude starts to grow. After describing this model we turn to the situation in which the ship is no longer statically fixed at the fluctuating water level: it may move up and down in a motion modelled by a damped spring. One step further, we also allow for pitch, a swinging motion around a horizontal axis perpendicular to the ship's length. It is recommended to investigate how waves may directly drive this mode and to determine the amount of energy that flows along this path towards the roll mode. Since waves at sea are a superposition of waves with different wavelengths, we also pay attention to the properties of such forcing, which contains stochastic elements. It is recommended that, as a measure for the occurrence of large deflections of the roll angle, one should take the expected time at which a given large deflection occurs rather than the mean amplitude of the deflection.
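The parametric-resonance mechanism described in this abstract can be illustrated numerically. The sketch below is our own illustration, not code from the paper: the function name, parameter values, and the fixed-step RK4 integrator are all choices made here. It integrates the forced Mathieu equation theta'' + omega0^2 (1 + h cos(Omega t)) theta = 0 and shows that forcing at Omega close to twice the natural frequency (i.e. an unforced period about twice the forcing period) makes a tiny initial roll angle grow, while detuned forcing leaves it bounded.

```python
import math

def mathieu_peak(h, omega_force, omega0=1.0, t_end=100.0, dt=0.001):
    """Integrate theta'' + omega0^2 * (1 + h*cos(omega_force*t)) * theta = 0
    with a fixed-step RK4 scheme and return the largest |theta| seen over
    the last quarter of the run (a crude measure of amplitude growth)."""
    def accel(t, theta):
        return -omega0 ** 2 * (1.0 + h * math.cos(omega_force * t)) * theta

    theta, v, t = 1e-3, 0.0, 0.0  # small initial roll angle, at rest
    peak = 0.0
    n = int(t_end / dt)
    for i in range(n):
        # one RK4 step for the first-order system (theta' = v, v' = accel)
        k1t, k1v = v, accel(t, theta)
        k2t, k2v = v + 0.5 * dt * k1v, accel(t + 0.5 * dt, theta + 0.5 * dt * k1t)
        k3t, k3v = v + 0.5 * dt * k2v, accel(t + 0.5 * dt, theta + 0.5 * dt * k2t)
        k4t, k4v = v + dt * k3v, accel(t + dt, theta + dt * k3t)
        theta += dt / 6.0 * (k1t + 2 * k2t + 2 * k3t + k4t)
        v += dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if i > 3 * n // 4:
            peak = max(peak, abs(theta))
    return peak

# Forcing at twice the natural frequency: parametric resonance, amplitude grows.
resonant = mathieu_peak(h=0.2, omega_force=2.0)
# Detuned forcing: the roll angle stays of the order of its initial value.
detuned = mathieu_peak(h=0.2, omega_force=1.3)
```

The growth in the resonant case appears despite the forcing never acting on the roll angle directly, only on the effective restoring coefficient; this is the route by which heave can pump energy into roll.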

    The effects of halo alignment and shape on the clustering of galaxies

    We investigate the effects of halo shape and its alignment with larger scale structure on the galaxy correlation function. We base our analysis on the galaxy formation models of Guo et al., run on the Millennium Simulations. We quantify the importance of these effects by randomizing the angular positions of satellite galaxies within haloes, either coherently or individually, while keeping the distance to their respective central galaxies fixed. We find that the effect of disrupting the alignment with larger scale structure is a ~2 per cent decrease in the galaxy correlation function around r=1.8 Mpc/h. We find that sphericalizing the ellipsoidal distributions of galaxies within haloes decreases the correlation function by up to 20 per cent for r<1 Mpc/h and increases it slightly at somewhat larger radii. Similar results apply to power spectra and redshift-space correlation functions. Models based on the Halo Occupation Distribution, which place galaxies spherically within haloes according to a mean radial profile, will therefore significantly underestimate the clustering on sub-Mpc scales. In addition, we find that halo assembly bias, in particular the dependence of clustering on halo shape, propagates to the clustering of galaxies. We predict that this aspect of assembly bias should be observable through the use of extensive group catalogues. Comment: 8 pages, 6 figures. Accepted for publication in MNRAS. Minor changes relative to v1. Note: this is a revised and considerably extended resubmission of http://arxiv.org/abs/1110.4888; please refer to the current version rather than the old one.
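The two randomisation schemes described in the abstract, rotating satellites about their central galaxy while keeping radial distances fixed, can be sketched as follows. This is a toy illustration only; the function names and the quaternion-based rotation sampling are choices made here, not the authors' pipeline.

```python
import math
import random

def random_rotation(rng):
    """Uniform random 3-D rotation matrix built from a random unit
    quaternion (Shoemake's method)."""
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    x = math.sqrt(1 - u1) * math.sin(2 * math.pi * u2)
    y = math.sqrt(1 - u1) * math.cos(2 * math.pi * u2)
    z = math.sqrt(u1) * math.sin(2 * math.pi * u3)
    w = math.sqrt(u1) * math.cos(2 * math.pi * u3)
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

def rotate(R, v):
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def randomise_satellites(offsets, coherent, rng):
    """offsets: satellite positions relative to their central galaxy.
    coherent=True applies one rotation to the whole halo, erasing the
    alignment with larger scale structure while keeping the internal
    ellipsoidal shape; coherent=False rotates each satellite
    independently, sphericalising the distribution. In both cases the
    distance of every satellite to its central galaxy is unchanged."""
    if coherent:
        R = random_rotation(rng)
        return [rotate(R, v) for v in offsets]
    return [rotate(random_rotation(rng), v) for v in offsets]
```

Because radial distances are preserved, any change in the measured correlation function can be attributed to orientation and shape alone, which is the point of the exercise.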

    Map Point Selection for Visual SLAM

    Simultaneous localisation and mapping (SLAM) plays a vital role in autonomous robotics. Robotic platforms are often resource-constrained, and this limitation motivates resource-efficient SLAM implementations. While sparse visual SLAM algorithms offer good accuracy for modest hardware requirements, even these more scalable sparse approaches face limitations when applied to large-scale and long-term scenarios. A contributing factor is that the point clouds resulting from SLAM are inefficient to use and contain significant redundancy. This paper proposes the use of subset selection algorithms to reduce the map produced by sparse visual SLAM algorithms. Information-theoretic techniques have been applied to simpler related problems before, but they do not scale if applied to the full visual SLAM problem. This paper proposes a number of novel information-theoretic utility functions for map point selection and optimises these functions using greedy algorithms. The reduced maps are evaluated using practical data alongside an existing visual SLAM implementation (ORB-SLAM 2). Approximate selection techniques proposed in this paper achieve trajectory accuracy comparable to an offline baseline while being suitable for online use. These techniques enable the practical reduction of maps for visual SLAM with competitive trajectory accuracy. Results also demonstrate that SLAM front-end performance can significantly impact the performance of map point selection. This shows the importance of testing map point selection with a front-end implementation. To exploit this, this paper proposes an approach that includes a model of the front-end in the utility function when additional information is available. This approach outperforms alternatives on applicable datasets and highlights future research directions.
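The greedy optimisation of a utility function over map points can be sketched with a simple stand-in utility: select points that maximise keyframe coverage. This is a hypothetical illustration, the data layout and function names are our own, and coverage is only a simple submodular proxy for the information-theoretic utilities actually proposed in the paper.

```python
def greedy_select(observations, k):
    """observations: dict mapping point_id -> set of keyframe ids that
    observe that point. Greedily pick up to k points, each time taking
    the point whose observations add the most not-yet-covered keyframes
    (the classic greedy scheme for submodular coverage utilities)."""
    covered = set()
    chosen = []
    remaining = dict(observations)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda p: len(remaining[p] - covered))
        if not remaining[best] - covered:
            break  # no remaining point adds any coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

For a submodular, monotone utility such as coverage, this greedy scheme is the standard approach with a well-known (1 - 1/e) approximation guarantee, which is why greedy algorithms are a natural fit for map point selection.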

    LABCAT: Locally adaptive Bayesian optimization using principal component-aligned trust regions

    Bayesian optimization (BO) is a popular method for optimizing expensive black-box functions. BO has several well-documented shortcomings, including computational slowdown with longer optimization runs, poor suitability for non-stationary or ill-conditioned objective functions, and poor convergence characteristics. Several algorithms have been proposed that incorporate local strategies, such as trust regions, into BO to mitigate these limitations; however, none address all of them satisfactorily. To address these shortcomings, we propose the LABCAT algorithm, which extends trust-region-based BO by adding principal-component-aligned rotation and an adaptive rescaling strategy based on the length-scales of a local Gaussian process surrogate model with automatic relevance determination. Through extensive numerical experiments using a set of synthetic test functions and the well-known COCO benchmarking software, we show that the LABCAT algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
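The principal-component alignment step can be illustrated in two dimensions, where the covariance eigendecomposition has a closed form. This is a sketch under our own simplifications (plain sample covariance of the evaluated points, no GP surrogate); LABCAT itself works in arbitrary dimension and couples the rotation to surrogate length-scales.

```python
import math

def principal_axes_2d(points):
    """Closed-form PCA for 2-D samples: returns the angle of the first
    principal axis and the two covariance eigenvalues. A trust region
    rotated to this angle is aligned with the local geometry of the
    sampled points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # orientation of the leading eigenvector of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return theta, (tr / 2 + disc, tr / 2 - disc)
```

For an ill-conditioned objective whose good samples fall along a narrow ridge, the second eigenvalue is much smaller than the first; rescaling the rotated trust region by these magnitudes is the intuition behind combining rotation with adaptive rescaling.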

    Degenerate Gaussian factors for probabilistic inference

    In this paper, we propose a parametrised factor that enables inference on Gaussian networks where linear dependencies exist among the random variables. Our factor representation is effectively a generalisation of traditional Gaussian parametrisations where the positive-definite constraint of the covariance matrix has been relaxed. For this purpose, we derive various statistical operations and results (such as marginalisation, multiplication and affine transformations of random variables) that extend the capabilities of Gaussian factors to these degenerate settings. By using this principled factor definition, degeneracies can be accommodated accurately and automatically at little additional computational cost. As illustration, we apply our methodology to a representative example involving recursive state estimation of cooperative mobile robots. Comment: Accepted by International Journal of Approximate Reasoning on 17 January 202
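A minimal sketch of why the positive-definite constraint matters (our own toy example, not from the paper): an exact linear dependency between variables makes the covariance matrix singular, so the standard Gaussian density, whose normalisation divides by the square root of det(cov), is undefined, and a relaxed, degeneracy-aware parametrisation such as the one proposed here is needed.

```python
import math

def norm_const_2d(cov):
    """Normalisation constant 1 / sqrt((2*pi)^2 * det(cov)) of a 2-D
    Gaussian density; breaks down when cov is singular."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 1.0 / math.sqrt((2 * math.pi) ** 2 * det)

# If y = 2x holds exactly, the covariance [[s, 2s], [2s, 4s]] has rank 1:
singular_cov = [[1.0, 2.0], [2.0, 4.0]]
det = singular_cov[0][0] * singular_cov[1][1] - singular_cov[0][1] * singular_cov[1][0]
```

Such exact dependencies arise naturally, for example when one robot's pose estimate is defined as a deterministic transformation of another's, which is why accommodating degeneracy matters for the cooperative state-estimation example.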