Probabilistic Scheduling Based On Hybrid Bayesian Network–Program Evaluation Review Technique
Project scheduling based on probabilistic methods commonly uses the Program Evaluation Review Technique (PERT). However, practitioners do not widely utilize PERT-based scheduling due to the difficulty of obtaining historical data for similar projects. PERT also has several drawbacks, such as the inability to update activity durations in real time. In reality, changes in project conditions related to resources are highly dynamic: the availability of materials and equipment and fluctuating labor productivity significantly determine the project completion time. This research proposes a probabilistic scheduling model based on a Hybrid Bayesian Network-PERT. The model combines PERT with a Bayesian Network (BN), where the BN is used to accommodate real-time changes in resource conditions. The BN diagrams and variables are obtained through an in-depth literature review, direct field observations, and questionnaires distributed to experts in project scheduling. The model is validated by applying it to a 60 m concrete bridge construction project in Indonesia, and the simulation results are compared with the case study project to assess the model's accuracy. The results show that the proposed hybrid Bayesian-PERT model is accurate and can eliminate the weaknesses of the PERT method. Besides providing an accurate prediction of project completion time (93.4%), the model can also be updated in real time according to the actual condition of the project.
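As a rough illustration of how PERT sampling can be conditioned on resource states, the sketch below (not the authors' model; the network structure, probabilities, and productivity factors are invented) draws beta-PERT activity durations and scales them by a factor taken from a toy two-node Bayesian network over material availability and labour productivity, for activities assumed to run in series.

```python
import numpy as np

rng = np.random.default_rng(0)

def pert_sample(a, m, b, size):
    """Classic beta-PERT sample of an activity duration (days)."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

# Toy two-node Bayesian network (illustrative numbers, not from the paper):
# P(materials available) and P(high labour productivity | materials).
p_materials = 0.8
p_high_prod = {True: 0.7, False: 0.3}
stretch = {True: 1.0, False: 1.25}     # low productivity stretches durations by 25%

def simulate_project(activities, n=10_000):
    """Monte Carlo completion time for activities assumed to run in series."""
    total = np.zeros(n)
    for a, m, b in activities:
        materials = rng.random(n) < p_materials
        high = rng.random(n) < np.where(materials, p_high_prod[True], p_high_prod[False])
        total += pert_sample(a, m, b, n) * np.where(high, stretch[True], stretch[False])
    return total

durations = simulate_project([(10, 14, 20), (5, 7, 12), (8, 10, 15)])
print(f"P50 = {np.percentile(durations, 50):.1f} days, "
      f"P90 = {np.percentile(durations, 90):.1f} days")
```

A full critical-path treatment and a network calibrated from expert questionnaires would replace the hand-picked numbers here.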
Hardware accelerated computer graphics algorithms
The advent of shaders in the latest generations of graphics hardware, which has made consumer-level graphics hardware partially programmable, makes this an ideal time to investigate new graphical techniques and algorithms as well as to improve upon existing ones.
This work looks at areas of current interest within the graphics community such as Texture Filtering, Bump Mapping and Depth of Field simulation. These are all areas which have enjoyed much interest over the history of computer graphics but which provide a great deal of scope for further investigation in the light of recent hardware advances.
A new hardware implementation of a texture filtering technique, aimed at consumer level hardware, is presented. This novel technique utilises Fourier space image filtering to reduce aliasing. Investigation shows that the technique provides reduced levels of aliasing along with comparable levels of detail to currently popular techniques. This adds to the community's knowledge by expanding the range of techniques available, as well as increasing the number of techniques which offer the potential for easy integration with current consumer level graphics hardware along with real-time performance.
Bump mapping is a long-standing and well-understood technique. Variations and extensions of it have been popular in real-time 3D computer graphics for many years. A new hardware implementation of a technique termed Super Bump Mapping (SBM) is introduced. Expanding on the work of Cant and Langensiepen [1], the SBM technique adopts the novel approach of using normal maps which supply multiple vectors per texel. This allows much more detail to be retained and overcomes some of the aliasing deficiencies of standard bump mapping caused by the standard single-vector approach and the non-linearity of the bump mapping process.
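The non-linearity point can be illustrated directly: because shading is a non-linear function of the normal, averaging normals before shading (as a single-vector approach effectively does at coarse levels of detail) differs from averaging the shaded results of several stored normals. A minimal sketch, with invented normals and a generic Blinn-Phong shader rather than the thesis implementation:

```python
import numpy as np

def shade(normal, light_dir, view_dir, shininess=32.0):
    """Blinn-Phong shading for one unit normal (non-linear in the normal)."""
    n = normal / np.linalg.norm(normal)
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)
    diffuse = max(float(np.dot(n, light_dir)), 0.0)
    specular = max(float(np.dot(n, h)), 0.0) ** shininess
    return diffuse + specular

light = np.array([0.0, 0.0, 1.0])
view = np.array([0.0, 0.0, 1.0])

# Two sub-texel normals tilted in opposite directions: a bumpy texel seen from afar.
n1 = np.array([ 0.6, 0.0, 0.8])
n2 = np.array([-0.6, 0.0, 0.8])

# Single-vector bump mapping at a coarse level: average the normals, then shade.
single = shade((n1 + n2) / 2, light, view)

# Multi-vector idea: shade each stored normal, then combine the shaded results.
multi = 0.5 * (shade(n1, light, view) + shade(n2, light, view))

print(f"average-then-shade: {single:.3f}  (spurious full-strength highlight)")
print(f"shade-then-average: {multi:.3f}  (closer to the true bumpy-surface response)")
```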
A novel depth of field algorithm is proposed, which is an extension of the author's previous work [2][3][4]. The technique is aimed at consumer-level hardware and attempts to raise the bar for realism by providing support for the 'see-through' effect. This effect is a vital factor in the realistic appearance of simulated depth of field but has been overlooked in real-time computer graphics due to the complexity of an accurate calculation. The implementation of this new algorithm on current consumer-level hardware is investigated, and it is concluded that while current hardware is not yet capable enough, future iterations will provide the necessary functional and performance increases.
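For context, the blur radius that any depth-of-field technique has to reproduce is usually derived from the thin-lens circle of confusion. A minimal sketch of that standard formula with illustrative parameters (this is not the thesis algorithm, which additionally handles the see-through/partial-occlusion case):

```python
def circle_of_confusion(depth, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter (all lengths in metres)."""
    aperture = focal_len / f_number
    return abs(aperture * focal_len * (depth - focus_dist)
               / (depth * (focus_dist - focal_len)))

# 50 mm lens at f/2.8 focused at 2 m; an object at 5 m lies well outside the focal plane.
print(f"CoC = {circle_of_confusion(5.0, 2.0, 0.05, 2.8) * 1e3:.2f} mm")
```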
Redshift Space Distortion of the 21cm Background from the Epoch of Reionization I: Methodology Re-examined
The peculiar velocity of the intergalactic gas responsible for the cosmic 21cm background from the epoch of reionization and beyond introduces an anisotropy in the three-dimensional power spectrum of brightness temperature fluctuations. Measurement of this anisotropy by future 21cm surveys is a promising tool for separating cosmology from 21cm astrophysics. However, previous attempts to model the signal have often neglected peculiar velocity or only approximated it crudely. This paper presents a detailed treatment of the effects of peculiar velocity on the 21cm signal. (1) We show that properly accounting for finite optical depth eliminates the unphysical divergence of 21cm brightness temperature in overdense regions of the IGM found in previous work that employed the usual optically-thin approximation. (2) We show that previous attempts to circumvent this divergence by capping the velocity gradient result in significant errors in the power spectrum on all scales. (3) We further show that the observed power spectrum in redshift space remains finite even in the optically-thin approximation if one properly accounts for the redshift-space distortion; however, results that take full account of finite optical depth show that this approximation is only accurate in the limit of high spin temperature. (4) We also show that the linear theory for redshift-space distortion results in a ~30% error in the power spectrum at the observationally relevant wavenumber range, at the 50% ionized epoch. (5) We describe and test two numerical schemes to calculate the 21cm signal from reionization simulations which accurately incorporate peculiar velocity in the optically-thin approximation. One is particle-based, the other grid-based; while the former is more accurate, we demonstrate that the latter is computationally more efficient and can achieve sufficient accuracy. [Abridged]
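As a sketch of the grid-based idea in the optically-thin limit, the toy code below shifts brightness-temperature cells from real to redshift space by s = x + v_parallel/(aH) and re-grids them. The box size, velocity field, and signal level are invented, and this is not the paper's production scheme:

```python
import numpy as np

def real_to_redshift_space(x, dT, v_par, aH, box):
    """Move cells to redshift-space position s = x + v_parallel/(aH) and
    re-grid the brightness temperature with cloud-in-cell (periodic box)."""
    n = len(x)
    dx = box / n
    s = (x + v_par / aH) % box
    grid = np.zeros(n)
    idx = s / dx
    lo = np.floor(idx).astype(int) % n
    frac = idx - np.floor(idx)
    np.add.at(grid, lo, dT * (1 - frac))
    np.add.at(grid, (lo + 1) % n, dT * frac)
    return grid

# Toy setup: uniform optically-thin signal plus a sinusoidal peculiar-velocity field.
n, box = 256, 100.0                       # cells, comoving Mpc/h
x = (np.arange(n) + 0.5) * box / n
dT = np.full(n, 20.0)                     # mK
v = 200.0 * np.sin(2 * np.pi * x / box)   # km/s, made-up velocity field
aH = 100.0                                # km/s per (Mpc/h), illustrative value
dT_s = real_to_redshift_space(x, dT, v, aH, box)
print(dT_s.mean(), dT_s.std())  # total signal is conserved; velocity gradients imprint structure
```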
Ideal magnetohydrodynamic simulations of unmagnetized dense plasma jet injection into a hot strongly magnetized plasma
We present results from three-dimensional ideal magnetohydrodynamic simulations of unmagnetized dense plasma jet injection into a uniform hot strongly magnetized plasma, with the aim of providing insight into core fueling of a tokamak with parameters relevant for ITER and NSTX (National Spherical Torus Experiment). Unmagnetized dense plasma jet injection is similar to compact toroid injection but with much higher plasma density and total mass, and consequently lower required injection velocity. Mass deposition of the jet into the background appears to be facilitated via magnetic reconnection along the jet's trailing edge. The penetration depth of the plasma jet into the background plasma is mostly dependent on the jet's initial kinetic energy, and a key requirement for spatially localized mass deposition is for the jet's slowing-down time to be less than the time for the perturbed background magnetic flux to relax due to magnetic reconnection. This work suggests that more accurate treatment of reconnection is needed to fully model this problem. Parameters for unmagnetized dense plasma jet injection are identified for localized core deposition as well as edge localized mode (ELM) pacing applications in ITER- and NSTX-relevant regimes.
Comment: 16 pages, 8 figures and 2 tables; accepted by Nuclear Fusion (May 11, 2011)
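The statement that penetration depth is set mainly by the jet's initial kinetic energy can be illustrated with a back-of-envelope energy balance: the jet stops roughly when the magnetic energy displaced along its path equals its initial kinetic energy. This is a crude scaling estimate with invented, ITER-like numbers, not the paper's MHD model:

```python
import numpy as np

MU0 = 4e-7 * np.pi     # vacuum permeability [H/m]
M_D = 3.34e-27         # deuteron mass [kg]

def penetration_depth(n_jet, length_jet, v_jet, B_bg):
    """Stopping distance from a crude energy balance: the jet halts once the
    magnetic energy displaced along its path, (B^2/2mu0) * d * area, matches
    its initial kinetic energy, (1/2) rho * length * area * v^2, giving
    d ~ mu0 * rho * length * v^2 / B^2."""
    rho = n_jet * M_D
    return MU0 * rho * length_jet * v_jet**2 / B_bg**2

# Illustrative numbers only (not taken from the paper):
d = penetration_depth(n_jet=1e23,      # jet density [m^-3]
                      length_jet=0.3,  # jet length [m]
                      v_jet=2e5,       # injection velocity [m/s]
                      B_bg=5.0)        # background field [T]
print(f"estimated penetration depth ~ {d:.2f} m")
```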
Frequency-modulated continuous-wave LiDAR compressive depth-mapping
We present an inexpensive architecture for converting a frequency-modulated continuous-wave LiDAR system into a compressive-sensing based depth-mapping camera. Instead of raster scanning to obtain depth-maps, compressive sensing is used to significantly reduce the number of measurements. Ideally, our approach requires two difference detectors, but it can operate with only one at the cost of doubling the number of measurements. Due to the large flux entering the detectors, the signal amplification from heterodyne detection, and the effects of background subtraction from compressive sensing, the system can obtain higher signal-to-noise ratios than detector-array based schemes while scanning a scene faster than is possible through raster scanning. We show how a single total-variation minimization and two fast least-squares minimizations, instead of a single complex nonlinear minimization, can efficiently recover high-resolution depth-maps with minimal computational overhead. Moreover, by efficiently storing only data points from measurements of an pixel scene, we can easily extract depths by solving only two linear equations with efficient convex-optimization methods.
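A toy version of the measurement model helps make the idea concrete: random binary patterns compress an N-pixel depth scene into m << N bucket measurements, and a regularized linear solve recovers the depth map. The sketch below uses a ridge-regularized least-squares step as a crude stand-in for the total-variation and least-squares solves described in the abstract; the scene, pattern count, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 16x16 depth scene (metres): a background plane plus a nearer square object.
n = 16
depth = np.full((n, n), 10.0)
depth[4:9, 5:11] = 4.0
x_true = depth.ravel()

# m << n*n random +/-1 patterns standing in for the projected sensing patterns.
m = 120
A = rng.choice([-1.0, 1.0], size=(m, n * n))
y = A @ x_true + rng.normal(0.0, 0.05, m)   # noisy bucket-detector measurements

# Ridge-regularized least squares as a crude stand-in for the TV-regularized solve;
# a total-variation prior would exploit the piecewise-constant depth structure far better.
lam = 1.0
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n * n), A.T @ y)

err = np.abs(x_hat - x_true).mean()
print(f"{m} measurements for {n * n} pixels, mean absolute depth error = {err:.2f} m")
```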
Simulations for Multi-Object Spectrograph Planet Surveys
Radial velocity surveys for extra-solar planets generally require substantial amounts of large telescope time in order to monitor a sufficient number of stars. Two of the aspects which can limit such surveys are the single-object capabilities of the spectrograph and an inefficient observing strategy for a given observing window. In addition, the detection rate of extra-solar planets using the radial velocity method has thus far been relatively linear with time. With the development of various multi-object Doppler survey instruments, there is growing potential to dramatically increase the detection rate using the Doppler method. Several of these instruments have already been used in large-scale surveys for extra-solar planets, such as FLAMES on the VLT and Keck ET on the Sloan 2.5m wide-field telescope.
In order to plan an effective observing strategy for such a program, one must examine the expected results based on a given observing window and target selection. We present simulations of the expected results from a generic multi-object survey based on calculated noise models and sensitivity for the instrument and the known distribution of exoplanetary system parameters. We have developed code for automatically sifting and fitting the planet candidates produced by the survey to allow fast follow-up observations to be conducted. The techniques presented here may be applied to a wide range of multi-object planet surveys.
Comment: 15 pages, 10 figures, accepted for publication in MNRAS
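A minimal sketch of the kind of simulation described: inject a circular-orbit radial-velocity signal into an assumed observing window, add Gaussian instrument noise, and recover the period with a least-squares sinusoid fit over trial periods. The epochs, noise level, and planet parameters below are invented, and the paper's noise models and candidate-sifting code are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed observing window: ~40 epochs spread over 120 nights (illustrative).
t = np.sort(rng.uniform(0.0, 120.0, 40))
sigma = 5.0                                # per-epoch velocity precision [m/s]

# Injected planet on a circular orbit: semi-amplitude K and period P (invented values).
K_true, P_true, phase = 12.0, 9.3, 1.1
rv = K_true * np.sin(2 * np.pi * t / P_true + phase) + rng.normal(0.0, sigma, t.size)

def best_period(t, rv, periods):
    """Fit a sinusoid plus offset at each trial period by least squares and
    return the period giving the lowest chi^2, with its fitted semi-amplitude."""
    best = (np.inf, None, None)
    for P in periods:
        X = np.column_stack([np.sin(2 * np.pi * t / P),
                             np.cos(2 * np.pi * t / P),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
        chi2 = float(np.sum((rv - X @ coef) ** 2))
        if chi2 < best[0]:
            best = (chi2, P, float(np.hypot(coef[0], coef[1])))
    return best

_, P_fit, K_fit = best_period(t, rv, np.linspace(2.0, 60.0, 2000))
print(f"recovered P = {P_fit:.2f} d, K = {K_fit:.1f} m/s "
      f"(injected P = {P_true} d, K = {K_true} m/s)")
```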