Synchronized sweep algorithms for scalable scheduling constraints
This report introduces a family of synchronized sweep based filtering
algorithms for handling scheduling problems involving resource and
precedence constraints. The key idea is to filter all constraints of a
scheduling problem in a synchronized way in order to scale better. In
addition to normal filtering mode, the algorithms can run in greedy
mode, in which case they perform a greedy assignment of start and end
times. The filtering mode achieves a significant speed-up over the
decomposition into independent cumulative and precedence constraints,
while the greedy mode can handle up to 1 million tasks subject to 64 resource
constraints and 2 million precedences. These algorithms were implemented
in both CHOCO and SICStus.
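The greedy mode described above can be pictured with a small sketch. This is not the report's actual synchronized sweep algorithm (which filters several resource and precedence constraints in one pass); it is an illustrative greedy assignment for a single cumulative resource with integer durations, processing tasks in topological order of the precedence graph and placing each at the earliest feasible start:

```python
from collections import defaultdict, deque

def greedy_schedule(durations, demands, capacity, precedences):
    """Greedy assignment of start times: process tasks in topological
    order, placing each at the earliest time where all predecessors
    have finished and resource capacity is never exceeded."""
    n = len(durations)
    succ = defaultdict(list)
    indeg = [0] * n
    for a, b in precedences:      # task a must end before task b starts
        succ[a].append(b)
        indeg[b] += 1
    order = deque(t for t in range(n) if indeg[t] == 0)
    usage = defaultdict(int)      # resource usage per unit time step
    start = [0] * n
    while order:
        t = order.popleft()
        s = start[t]
        # slide the task right until capacity holds over its duration
        while any(usage[u] + demands[t] > capacity
                  for u in range(s, s + durations[t])):
            s += 1
        start[t] = s
        for u in range(s, s + durations[t]):
            usage[u] += demands[t]
        for nxt in succ[t]:
            start[nxt] = max(start[nxt], s + durations[t])
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                order.append(nxt)
    return start
```

For three unit-demand tasks of length 2 on a capacity-1 resource with a single precedence 0 → 1, the sketch yields starts 0, 4 and 2: task 2 fills the gap while task 1 waits for both its predecessor and the resource.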
Self-decomposable Global Constraints
Scalability is becoming increasingly critical to decision support technologies. To address this issue in Constraint Programming, we introduce the family of self-decomposable constraints. These constraints can be satisfied by applying their own filtering algorithms to variable subsets only. We introduce a generic framework that dynamically decomposes propagation by filtering over variable subsets. Our experiments with the CUMULATIVE constraint illustrate the practical relevance of self-decomposition.
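A minimal sketch of the self-decomposition idea, with a toy ALLDIFFERENT-style propagator standing in for a real filtering algorithm (the function names and the fixpoint loop are illustrative, not the paper's framework):

```python
def filter_by_subsets(domains, subset_filter, subsets):
    """Self-decomposition sketch: instead of filtering over all
    variables at once, repeatedly run the constraint's own filtering
    algorithm on variable subsets until a fixpoint is reached."""
    changed = True
    while changed:
        changed = False
        for subset in subsets:
            sub = {v: domains[v] for v in subset}
            filtered = subset_filter(sub)
            for v, dom in filtered.items():
                if dom != domains[v]:
                    domains[v] = dom
                    changed = True
    return domains

def naive_alldiff(sub):
    """Toy ALLDIFFERENT-style filter: remove values already taken by a
    singleton (fixed) variable within the subset."""
    fixed = {next(iter(d)) for d in sub.values() if len(d) == 1}
    return {v: (d if len(d) == 1 else d - fixed)
            for v, d in sub.items()}
```

Running the toy filter over two overlapping subsets of three variables propagates the fixed value of the first variable through the chain, even though no single call ever sees all three variables at once.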
Improved Well Boundary Conditions: Automated Adaptation of Numerical Well Controls in Reservoir Simulation Models
Wells in reservoir simulation models are typically set with constant boundary conditions. This results in producers being shut in due to high water or gas production. In actual field operations, well flow rates and pressures are adjusted to control high water and gas production. This thesis introduces a novel yet simple method that automates the adaptation of well conditions to the dynamic wellbore or near-wellbore (reservoir) performance. The adaptive-conditions algorithm was incorporated into a newly developed 3D three-phase black-oil reservoir simulator. Several 1D wellbore model candidates were compared against 3D computational fluid dynamics models in predicting published two-phase pipe flow results.
Improvements in oil recovery, increases in well operational lifetime, and reductions in produced water and gas were all observed when using adaptive well controls. These results support a more positive outlook when assessing business plans against the costs associated with early abandonment of wells. One of the many advantages of this methodology is the reduction in the number of optimization variables due to the elimination of rate control steps.
Existing methods of maximizing reservoir net present value via production-rate optimization are limiting: the optimization problem requires setting the variables beforehand, which restricts solutions to a predefined number of rate changes at exact, specific times. Integrating adaptive well controls into an optimization study increased convergence rates and enhanced the optimized solution. This is because the well rates can automatically adapt to reservoir and well performance, giving effectively unbounded access to rate changes that would have been numerically expensive to include in rate-optimization setups.
Comparison of convergence times showed that optimization runs using adaptive well conditions converged earlier than the base case. Furthermore, the adaptive case required less than half the number of generations to produce an improved maximum NPV compared to the base case. Several studies were performed using a three-phase reservoir model with six production wells and seven water injectors. In all optimization cases, the maximum NPV of the respective base model was consistently lower than the NPV of the first generation of the corresponding adaptive-rate models.
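The core adaptive-condition idea, adjusting a producer's target rate rather than shutting it in, can be sketched as a simple rule of thumb (a hypothetical illustration, not the thesis's algorithm; the parameter names and the 10% choke-back step are invented):

```python
def adapt_well_rate(rate, water_cut, max_water_cut,
                    cut_factor=0.9, min_rate=0.0):
    """Hypothetical adaptive well control: when the water cut exceeds
    the allowed limit, choke the well back by a fixed factor instead
    of shutting it in; otherwise leave the target rate unchanged."""
    if water_cut > max_water_cut:
        return max(min_rate, rate * cut_factor)  # reduce target rate
    return rate
```

Applied every simulation timestep, such a rule lets the well rate track the dynamic near-wellbore performance without any predefined schedule of rate changes.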
Aeronautical Engineering: A special bibliography with indexes, supplement 55
This bibliography lists 260 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1975.
3D city scale reconstruction using wide area motion imagery
3D reconstruction is one of the most challenging yet most necessary parts of computer vision. It is applied almost everywhere, from remote sensing to medical imaging and multimedia. Wide Area Motion Imagery is a field that has gained traction in recent years. It consists of using an airborne, large-field-of-view sensor to cover an area typically over a square kilometer in each captured image. This data is particularly valuable for analysis, but the amount of information is overwhelming for any human analyst. Algorithms to efficiently and automatically extract information are therefore needed, and 3D reconstruction plays a critical part, along with detection and tracking. This dissertation presents novel reconstruction algorithms to compute a 3D probabilistic space, a set of experiments to efficiently extract photo-realistic 3D point clouds, and a range of transformations for possible applications of the generated 3D data to filtering, data compression, and mapping. The algorithms have been successfully tested on our own datasets provided by Transparent Sky, and this thesis also proposes methods to evaluate accuracy, completeness, and photo-consistency. The generated data has been successfully used to improve detection and tracking performance, and enables data compression, extrapolation by generating synthetic images from new points of view, and data augmentation with inferred occlusion areas. Includes bibliographical references.
Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors
Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds.
The research work presented in this thesis improves the information extraction pipeline with the development of novel algorithms for consistent density scanning and information extraction automation for building interiors. A restricted density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of the data sets.
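As a rough illustration of density-driven scan planning (a back-of-the-envelope model, not the thesis's restricted density-based methodology; the corridor geometry and parameter names are assumptions), one can bound the scan spacing by the range at which point density on a surface drops below the target:

```python
import math

def scans_for_corridor(length, angular_step, min_density):
    """Sketch: along a linear corridor, point spacing on a wall at
    range r is roughly r * angular_step, so areal point density falls
    off as 1 / (r * angular_step)**2. Space the scan positions so the
    worst-case range (halfway between neighbouring scans) still meets
    the density target, then count how many scans cover the length."""
    # largest range at which density is still sufficient
    r_max = 1.0 / (angular_step * math.sqrt(min_density))
    spacing = 2.0 * r_max        # scans placed every 2 * r_max
    return max(1, math.ceil(length / spacing))
```

For a 90 m corridor, a 1 mrad angular step and a target of 10,000 points/m², the model places scans every 20 m and calls for 5 scan positions.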
The research work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that handles cohesive point clouds through adaptive simplification and extracts accurate layouts without generating an intermediate model.
Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding.
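The color-based clustering step can be illustrated with a toy sketch (the thesis uses a more robust HSV scheme; the quantized-hue bucketing here is only illustrative):

```python
import colorsys

def hsv_cluster(points, hue_bins=12):
    """Toy color-based grouping of colored 3D points: convert each
    point's RGB color to HSV and bucket points by quantized hue, so
    same-colored objects tend to fall into one cluster."""
    clusters = {}
    for x, y, z, r, g, b in points:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        key = int(h * hue_bins) % hue_bins
        clusters.setdefault(key, []).append((x, y, z))
    return clusters
```

A red point and a green point land in different hue buckets, which is the minimal behavior a color-driven segmentation relies on before geometric refinement.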
A hierarchical clustering algorithm is developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects. Shape-adaptive parameters help to segment planar as well as complex interiors, while combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters in geometrically similar regions. Finally, a progressive scan-line-based, side-ratio constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
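The spatial-discontinuity idea behind occluded-boundary detection can be sketched on a single scan line (a simplified gap test, not the thesis's side-ratio constraint; the gap-ratio threshold is an invented parameter):

```python
import math
import statistics

def occluded_boundary_points(scan_line, gap_ratio=3.0):
    """Flag a point as a likely occluded boundary when the gap to its
    next neighbour along the scan line is much larger than the typical
    point spacing, i.e. a spatial discontinuity."""
    if len(scan_line) < 3:
        return []
    gaps = [math.dist(a, b) for a, b in zip(scan_line, scan_line[1:])]
    typical = statistics.median(gaps)
    return [i for i, g in enumerate(gaps) if g > gap_ratio * typical]
```

On a scan line with unit spacing interrupted by one large jump, only the point just before the jump is flagged, which is where an occluding object's shadow boundary would begin.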
Efficient Acoustic Simulation for Immersive Media and Digital Fabrication
Sound is a crucial part of our lives. Well-designed acoustic behaviors can lead to significant improvements in both physical and virtual interactions. In computer graphics, most existing methods have focused primarily on improving accuracy; how to develop efficient acoustic simulation algorithms for interactive practical applications has remained underexplored.
The challenges arise from the dilemma between expensive accurate simulations and the fast feedback demanded by intuitive user interaction: traditional physics-based acoustic simulations are computationally expensive, yet for end users to benefit from the simulations, it is crucial to give prompt feedback during interactions.
In this thesis, I investigate how to develop efficient acoustic simulations for real-world applications such as immersive media and digital fabrication. To address the above-mentioned challenges, I leverage precomputation and optimization to significantly improve speed while preserving the accuracy of complex acoustic phenomena. This work discusses three efforts along this research direction. First, to ease sound designers' workflow, we developed a fast keypoint-based precomputation algorithm that enables interactive acoustic transfer values in virtual sound simulations. Second, for realistic audio editing in 360° videos, we proposed an inverse material optimization based on fast sound simulation and a hybrid ambisonic audio synthesis that exploits the directional isotropy of spatial audio. Third, we devised a modular approach to efficiently simulate and optimize fabrication-ready acoustic filters, achieving orders-of-magnitude speedups while maintaining simulation accuracy. Through this series of projects, I demonstrate a wide range of applications made possible by efficient acoustic simulations.
Shuttle Ku-band and S-band communications implementation study
Various aspects of the shuttle orbiter S-band network communication system, the S-band payload communication system, and the Ku-band communication system are considered. A method is proposed for obtaining more accurate S-band antenna patterns of the actual shuttle orbiter vehicle during flight, because the preliminary antenna patterns obtained using mock-ups are unrealistic in that they do not include the effects of additional appendages such as the wings and tail structures. The Ku-band communication system is discussed, especially the TDRS antenna pointing accuracy with respect to the orbiter, along with the modifications required and the resulting performance characteristics of the convolutionally encoded high-data-rate return link needed to maintain bit synchronizer lock on the ground. The TDRS user constraints on data bit clock jitter and data asymmetry for unbalanced QPSK with noisy phase references are included. The S-band payload communication system study is outlined, including the advantages and experimental results of a peak regulator design built and evaluated by Axiomatrix for the bent-pipe link versus the existing RMS-type regulator. The nominal sweep rate of 250 Hz/s for the deep-space transponder and the effects of phase noise on the performance of a communication system are analyzed.