Design study of a novel regenerative pump using experimental and numerical techniques
This paper presents a numerical and experimental analysis of a new regenerative pump design. The complex flow field within regenerative pumps has posed a significant challenge to previously published mathematical models. The new design incorporates a new axial inlet and outlet port. The experimental and numerical results demonstrate not only that the flow field for this pump type can be resolved, but also that the pump is a viable alternative to other kinetic rotodynamic machines. The use of the latest rapid manufacturing techniques has enabled production of the complex axial-port geometry required for the new configuration.
Bayesian Optimisation for Safe Navigation under Localisation Uncertainty
In outdoor environments, mobile robots are required to navigate through
terrain with varying characteristics, some of which might significantly affect
the integrity of the platform. Ideally, the robot should be able to identify
areas that are safe for navigation based on its own percepts about the
environment while avoiding damage to itself. Bayesian optimisation (BO) has
been successfully applied to the task of learning a model of terrain
traversability while guiding the robot through more traversable areas. An
issue, however, is that localisation uncertainty can end up guiding the robot
to unsafe areas and distort the model being learnt. In this paper, we address
this problem and present a novel method that allows BO to consider localisation
uncertainty by applying a Gaussian process model for uncertain inputs as a
prior. We evaluate the proposed method in simulation and in experiments with a
real robot navigating over rough terrain and compare it against standard BO
methods.
Comment: To appear in the proceedings of the 18th International Symposium on Robotics Research (ISRR 2017).
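The idea of making a GP-based traversability model aware of localisation uncertainty can be sketched with a simple first-order input-noise approximation (inflating the observation noise by the squared posterior slope times the pose variance). This is an illustrative sketch, not the authors' implementation; all names, kernel choices, and parameters below are assumptions:

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)

# Terrain cost observed at uncertain robot poses: the cost depends on the
# true position, but the robot only knows a noisy estimate of where it was.
x_true = np.linspace(0.0, 5.0, 30)
sigma_x = 0.15                                   # localisation std (assumed)
x_obs = x_true + sigma_x * rng.normal(size=x_true.size)
y = np.sin(x_true) + 0.05 * rng.normal(size=x_true.size)

# Standard GP regression, ignoring input noise
noise = 0.05**2
K = rbf(x_obs, x_obs) + noise * np.eye(x_obs.size)
xs = np.linspace(0.0, 5.0, 200)
mu = rbf(xs, x_obs) @ np.linalg.solve(K, y)

# First-order input-noise correction: where the posterior mean is steep,
# pose error translates into large output error, so inflate the noise there.
slope = np.gradient(mu, xs)
slope_tr = np.interp(x_obs, xs, slope)
noise_in = noise + sigma_x**2 * slope_tr**2
K2 = rbf(x_obs, x_obs) + np.diag(noise_in)
mu2 = rbf(xs, x_obs) @ np.linalg.solve(K2, y)
```

The corrected posterior down-weights observations taken in steep regions of the cost map, which is where localisation error is most misleading; a BO acquisition function built on `mu2` would therefore be more conservative exactly where the paper's safety concern arises.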
Suitability of BIM for enhancing value on PPP projects for the benefit of the public sector
Collaborative integrated working and stakeholders' interests have been among the key drivers that underpin and encourage the use of Building Information Modelling (BIM) within the AEC industry. BIM is becoming a major means of delivering projects with an improved product and reduced risk within the construction industry. Furthermore, using BIM in areas such as buildability, quality assurance, cost and scheduling can be justified through BIM nD-modelling applications. What is less obvious is how the utilisation of BIM visualisation and knowledge embedment will enhance these areas to refine and achieve better long-term value in PPP procurement projects, especially during the post-construction phase, for the benefit of the public sector. At present there is no well-defined guidance on BIM usage that incorporates all of the above. Do we really need to revisit the way we specify projects within the contractual framework under PPP? This paper examines how BIM can be utilised to realise an augmented formal database information management system under the PPP procurement routes with respect to operation and maintenance support. The paper concludes with additional measures that BIM can offer at the post-construction phase for the public sector at learning organisations.
Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data
The MOST (Microvariability & Oscillations of STars) satellite obtains
ultraprecise photometry from space with high sampling rates and duty cycles.
Astronomical photometry or imaging missions in low Earth orbits, like MOST, are
especially sensitive to scattered light from Earthshine, and all these missions
have a common need to extract target information from voluminous data cubes.
They consist of upwards of hundreds of thousands of two-dimensional CCD frames
(or sub-rasters) containing from hundreds to millions of pixels each, where the
target information, superposed on background and instrumental effects, is
contained only in a subset of pixels (Fabry Images, defocussed images,
mini-spectra). We describe a novel reduction technique for such data cubes:
resolving linear correlations of target and background pixel intensities. This
stepwise multiple linear regression removes only those target variations which
are also detected in the background. The advantage of regression analysis
versus background subtraction is the appropriate scaling, taking into account
that the amount of contamination may differ from pixel to pixel. The
multivariate solution for all pairs of target/background pixels is minimally
invasive of the raw photometry while being very effective in reducing
contamination due to, e.g., stray light. The technique is tested and
demonstrated with both simulated oscillation signals and real MOST photometry.
Comment: 16 pages, 23 figures.
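The decorrelation step described above (regressing each target pixel on background pixels so that only variations also present in the background are removed) can be sketched as follows. The simulated amplitudes, frequencies, and pixel counts are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0.0, 10.0, n)

# Stray-light contamination (e.g. Earthshine) shared, with different
# scalings, by the target pixel and the background pixels.
stray = np.sin(2 * np.pi * 0.3 * t) + 0.5 * rng.normal(size=n)

# Intrinsic stellar oscillation signal we want to preserve.
signal = 0.1 * np.sin(2 * np.pi * 2.0 * t)

# Target pixel: signal + contamination; background pixels: contamination only,
# each with its own scaling factor and a little readout noise.
target = signal + 1.3 * stray
background = np.column_stack(
    [a * stray + 0.05 * rng.normal(size=n) for a in (0.8, 1.1, 0.6)]
)

# Multiple linear regression of the target on the background pixels.
# The fitted part is the contamination common to both; the residual keeps
# only those target variations NOT detected in the background.
X = np.column_stack([np.ones(n), background])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
cleaned = target - X @ coef + coef[0]   # keep the mean level
```

Because each background pixel gets its own regression coefficient, the contamination is scaled per pixel rather than subtracted with a single global factor, which is exactly the advantage over plain background subtraction that the abstract highlights.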
The haemodynamic effects of collateral donation to a chronic total occlusion: implications for patient management
Physiological lesion assessment in the form of Fractional Flow Reserve (FFR) is now well established for guiding multi-vessel revascularization. Chronic total coronary occlusions (CTOs) are frequently associated with multi-vessel disease, and the collateral-dependent myocardium distal to the occlusion is often supplied by collaterals from another epicardial coronary artery. The haemodynamic effect of collateral donation upon donor vessel flow may have important implications for that vessel's FFR, rendering it unreliable at predicting ischaemia should the CTO be revascularized. As a consequence, in the setting of multi-vessel disease, the optimal revascularization strategy might be altered. There is a paucity of work in the medical literature directly examining this phenomenon. We endeavoured to review the existing literature related to it, to summarise from current knowledge of coronary physiology what is known about the potential effects of CTO revascularization on both collateral flow and collateral donor vessel physiology, and to highlight where further studies might inform practice.
Supporting Theoretically-grounded Model Building in the Social Sciences through Interactive Visualisation
The primary purpose for which statistical models are employed in the social sciences is to understand and explain phenomena occurring in the world around us. In order to be scientifically valid and actionable, the construction of such models needs to be strongly informed by theory. To accomplish this, there is a need for methodologies that enable scientists to utilise their domain knowledge effectively, even in the absence of strong a priori hypotheses or whilst dealing with complex datasets containing hundreds of variables and leading to large numbers of potential models. In this paper, we describe enhanced model-building processes in which we use interactive visualisations as the underlying mechanism to facilitate the construction and documentation of theory-driven models. We report our observations from a collaborative project involving social and computer scientists, and identify key roles for visualisation to support model building within the context of social science. We describe a suite of techniques to facilitate the exploration of statistical summaries of input variables, to compare the quality of alternative models, and to keep track of the model-building process. We demonstrate how these techniques operate in coordination to allow social scientists to efficiently generate models that are tightly underpinned by domain-specific theory.
Deep Gaussian Processes
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a
deep belief network based on Gaussian process mappings. The data is modeled as
the output of a multivariate GP. The inputs to that Gaussian process are then
governed by another GP. A single layer model is equivalent to a standard GP or
the GP latent variable model (GP-LVM). We perform inference in the model by
approximate variational marginalization. This results in a strict lower bound
on the marginal likelihood of the model which we use for model selection
(number of layers and nodes per layer). Deep belief networks are typically
applied to relatively large data sets using stochastic gradient descent for
optimization. Our fully Bayesian treatment allows for the application of deep
models even when data is scarce. Model selection by our variational bound shows
that a five layer hierarchy is justified even when modelling a digit data set
containing only 150 examples.
Comment: 9 pages, 8 figures. Appearing in Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS) 2013.
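The layered construction described in the abstract, in which the inputs of one GP are themselves the outputs of another, can be illustrated by drawing a sample from a two-layer prior. This is a minimal generative sketch (kernel choices, lengthscales, and the jitter level are illustrative assumptions, and no variational inference is shown):

```python
import numpy as np

def rbf(a, b, ls, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(2)
x = np.linspace(-3.0, 3.0, 200)
jitter = 1e-6 * np.eye(x.size)   # numerical stabiliser for the Cholesky

# Layer 1: hidden function h ~ GP(0, k1) evaluated at the observed inputs.
K1 = rbf(x, x, ls=1.5) + jitter
h = np.linalg.cholesky(K1) @ rng.normal(size=x.size)

# Layer 2: output f ~ GP(0, k2) whose *inputs* are the hidden values h,
# so the composite f(h(x)) is a warped, non-stationary function of x.
K2 = rbf(h, h, ls=0.5) + jitter
f = np.linalg.cholesky(K2) @ rng.normal(size=x.size)
```

A single layer here reduces to an ordinary GP sample, mirroring the paper's observation that a one-layer model is equivalent to a standard GP; stacking layers lets the model express input warpings that a single stationary kernel cannot.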