385 research outputs found
Importance of Fluid Compressibility and Multi-Phase Flow in Numerical Modeling of Hydraulic Fracture Propagation
We employ a semi-analytical approach to modeling coupled flow and geomechanics, in which flow is solved numerically and geomechanics analytically. We first model a PKN hydraulic fracture geometry numerically, incorporating a fluid compressibility term, to investigate the effect of fluid compressibility on the evolution of the fracture geometry. The results show that as the fluid becomes more compressible, fracture propagation is delayed, because it takes time for pressure to build up enough to extend the fracture. For a multi-phase flow system, we model a hydraulic fracturing process in a gas reservoir, again solving flow numerically and geomechanics analytically. The fracture propagates slowly when the water saturation of the reservoir is low; low water saturation implies high initial gas saturation, resulting in high total compressibility of the reservoir fluid. We observe gas concentrating near the fracture tip, caused by (1) movement of the initial gas within the fracture toward the tip and (2) possible leakage of gas from the formation into the hydraulic fracture. The presence of gas is a further factor that can make fluid flow within the hydraulic fracture compressible.
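The delaying effect of compressibility on pressure buildup can be illustrated with a toy lumped model (not the paper's numerical scheme): assume a PKN-like linear fracture compliance V = k·p, so the injection rate q feeds both volume growth and fluid compression. All symbols (k, q, c_f, p_target) are illustrative.

```python
# Mass balance with a linear-compliance fracture, V = k * p:
#     q = dV/dt + c_f * V * dp/dt  =>  dp/dt = q / (k * (1 + c_f * p))
# A nonzero fluid compressibility c_f slows the pressure rise, delaying
# the moment the fracture-extension pressure p_target is reached.

def time_to_pressure(c_f, q=1.0, k=1.0, p_target=5.0, dt=1e-4):
    """Forward-Euler integration of the lumped pressure balance."""
    p, t = 0.0, 0.0
    while p < p_target:
        dpdt = q / (k * (1.0 + c_f * p))
        p += dpdt * dt
        t += dt
    return t

t_incomp = time_to_pressure(c_f=0.0)  # incompressible injection fluid
t_comp = time_to_pressure(c_f=0.2)    # compressible injection fluid
print(t_incomp, t_comp)               # compressible case takes longer
```

The compressible run needs more injected fluid to reach the same pressure, mirroring the delayed propagation reported in the abstract.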
Evaluation of tire derived rubber particles as biofilter media and scale-up and design considerations for the Static Granular Bed Reactor (SGBR)
Three different bioreactor configurations were used to evaluate tire rubber as a biofilm attachment medium for wastewater treatment: an aerobic biofilter, an anoxic bioreactor, and a hybrid anaerobic static granular bed reactor (SGBR). Analyses of size distribution, chemical composition, scanning electron microscopy, and whole-effluent toxicity verified the potential of tire derived rubber particles (TDRP) as biofilm attachment media, showing non-toxicity to microorganisms and good surface area for biofilm attachment. The trickling filter system using chunk rubber (average diameter approximately 3 cm) achieved 79.6-90.1% COD removal efficiency at organic loading rates ranging from 0.12 to 0.34 kg COD/m3∙d. The hybrid SGBR and the anoxic TDRP filter filled with fine rubber particles (average particle diameter approximately 0.2 mm) achieved 90-97% COD removal and above 97% nitrogen removal, respectively, at hydraulic retention times of 48 to 20 h. The utility of TDRP media in multiple biofiltration applications was demonstrated by the performance of the three TDRP biofilm media systems and by the analysis of TDRP characteristics. A biofilter filled with TDRP media was also used to treat the odorous gas contaminant hydrogen sulfide. This system achieved over 94% removal efficiency at inflow H2S concentrations of 20-90 ppm while operating at empty-bed residence times (EBRTs) of 20-67 seconds, corresponding to effective operation at H2S mass loading rates of 19.6 to 28.5 g H2S/m3/h. The system's performance made it apparent that it was capable of treating hydrogen sulfide. Finally, the performance of the hybrid SGBR with TDRP addition was compared against the SGBR alone to validate TDRP media as a substitute for granules.
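As a rough consistency check on the reported H2S figures, the mass loading rate follows from the inlet concentration and the EBRT as loading = C / EBRT, after converting ppm to g/m3 via the molar volume of a gas at roughly 25 °C (24.45 L/mol). This is a standard conversion sketch, not a calculation from the study itself.

```python
# Back-compute the H2S mass loading rate from inlet ppm and EBRT.
# mg/m^3 = ppm * MW / 24.45 at ~25 C and 1 atm.

MW_H2S = 34.08     # g/mol, molecular weight of H2S
MOLAR_VOL = 24.45  # L/mol at 25 C, 1 atm

def h2s_loading(ppm, ebrt_s):
    """g H2S per m^3 of reactor per hour at the given inlet ppm and EBRT."""
    conc = ppm * MW_H2S / MOLAR_VOL / 1000.0  # inlet concentration, g/m^3
    return conc * 3600.0 / ebrt_s             # loading = C / EBRT

print(h2s_loading(90, 20))  # high-concentration, short-EBRT corner
```

The 90 ppm / 20 s corner evaluates to about 23 g H2S/m3/h, inside the 19.6-28.5 g H2S/m3/h range reported above.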
Both systems showed similarly high COD removal efficiencies (over 95%) at hydraulic retention times of 48 to 12 hours, corresponding to organic loading rates of 1 to 4 kg/m3/d. The applicability of TDRP media to the bioreactor was also shown by the performance differences between reactors with and without TDRP addition at the same granular sludge volume. An on-site pilot-scale SGBR system treating slaughterhouse wastewater from a food plant in Iowa was evaluated to establish treatability, compare performance against other high-rate anaerobic systems, and identify critical elements for commercialization. High removal efficiency (over 95% of TSS and VSS) was obtained owing to the consistent treatability of the SGBR system during operation at HRTs of 48, 36, 30, 24, and 20 hours. An effective backwash procedure was used to waste a portion of the solids accumulated in the system; this procedure limited the increase in hydraulic head loss and maintained system stability. COD removal efficiencies greater than 95% were achieved at organic loading rates ranging from 0.77 to 12.76 kg/m3/d. This performance was consistently better than other high-rate anaerobic systems treating slaughterhouse wastewater.
Evaluation of leachate treatment and recycle options using the Static Granular Bed Reactor
Three landfill leachate management strategies were evaluated by comparing simulated landfill columns while studying the application of the Static Granular Bed Reactor (SGBR) to leachate treatment. The three simulated landfill columns were operated under different strategies: in column 1 (C1), the leachate was treated in the SGBR and recycled to the top of C1; column 2 (C2) recirculated the leachate without any treatment; and column 3 (C3) was a simulated conventional landfill without recirculation. Over time, the COD concentration of the leachate in each column decreased. C1 showed the greatest reduction in leachate COD, owing to removal both in the SGBR and in the landfill column itself. Moreover, gas production was accelerated by leachate recirculation, which enhanced waste degradation in the landfill columns (C1 and C2). A pre-operating study of the SGBR showed fast acclimation (5 days) to a substrate change and a short start-up period (10 days), as evidenced by COD removal efficiencies between 84% and 95% for leachate and non-fat dry milk. Incorporated into a leachate management strategy, the SGBR system remained stable, as evidenced by stable pH and low VFA concentrations. Despite the low organic removal efficiency, the SGBR treating leachate prior to recirculation in the simulated landfill column was effective at reducing the organic matter in the leachate within the system. The feasibility of leachate treatment by the SGBR was demonstrated in this study.
Probabilistic design for emerging memory and nanometer-scale logic
As semiconductor technology has scaled down, the impact of stochastic behavior in very large scale integrated circuits (VLSI) has become an ever-more important concern. This dissertation investigates two distinct classes of problems that require the use of probabilistic methods and models: (1) Modeling and exploiting stochastic behavior in advanced memory technologies; (2) Probabilistic modeling of faults due to on-chip voltage variation.
This dissertation first investigates the unique physics-level stochasticity of spin-transfer torque magnetic RAM (STT-RAM). The write process of STT-RAM is stochastic: specifically, the write time of a bitcell varies significantly. The worst-case approach, which uses the longest write pulse duration, guarantees a successful write; however, it introduces significant energy overhead due to excessive margins, since the average write pulse duration is far shorter than the worst-case pulse duration. This dissertation develops novel circuit techniques to exploit the stochastic properties of the STT-RAM write operation for energy savings, moving from the worst-case approach to dynamic strategies while maintaining the required low error rate. The first contribution is a variable energy write (VEW) architecture that effectively exploits the wide distribution of write time to greatly reduce energy via a mechanism that checks the instantaneous state of the bitcell and deactivates the write current once the correct value has registered. The second contribution is a multiple attempt write (MAW) strategy that utilizes the asymptotic temporal stochastic independence of repeated switching events to achieve a dramatic reduction in energy. The proposed architectures are evaluated using a compact STT-RAM cell model. Analysis indicates that VEW reduced the write energy by 94.7% with approximately 1% relative area overhead under an efficient design methodology, compared with conventional designs relying on the worst-case approach. MAW reduced the overall write energy by 94.6% with approximately 0.05% relative area overhead.
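The intuition behind VEW can be illustrated with a toy Monte Carlo sketch: cut the write current as soon as the bitcell switches, instead of always applying a worst-case-length pulse. The lognormal switching-time distribution below is a stand-in for illustration, not the dissertation's compact STT-RAM model.

```python
import random

# Toy comparison of worst-case writes vs. variable energy writes (VEW).
# Energy is taken as proportional to pulse duration (fixed write current).

random.seed(0)
N = 100_000
# Hypothetical stochastic switching times (ns) with a heavy right tail.
t_switch = [random.lognormvariate(1.0, 0.6) for _ in range(N)]

t_worst = sorted(t_switch)[int(0.9999 * N)]     # pulse covering ~all writes
e_worst = t_worst * N                           # every write pays full pulse
e_vew = sum(min(t, t_worst) for t in t_switch)  # VEW: stop at switch time

savings = 1.0 - e_vew / e_worst
print(f"VEW-style energy savings: {savings:.1%}")
```

Because the mean switching time sits far below the worst-case tail, terminating the pulse at the actual switch recovers most of the margin energy, which is the effect the VEW architecture exploits in hardware.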
This dissertation then addresses the problem of probabilistic modeling of faults due to on-chip voltage variations. Power supply voltage variation can increase gate delay, resulting in timing faults on near-critical paths. These low-level faults ultimately propagate to the architecture and application levels, often leading to critical system failures. Developing an accurate fault model and injection tool that generates and propagates faults from circuit to gate level is important for accurately predicting the resulting system failures. This is challenging because the model needs to accurately capture the physical characteristics at the circuit level that define the likelihood of a fault and use that information to guide the injection with the proper probability. At the same time, the analysis and fault injections need to be computationally manageable to allow analysis of realistic systems under realistic workloads. Conventional fault models rely on either Monte Carlo sampling or time-consuming runtime simulation using the worst-case voltage drop. To overcome the overheads of runtime circuit-level simulation, a novel two-phase approach is proposed. The main idea is that circuit characterization can be done before simulation; the result of this pre-characterization is then used at runtime, via a form of look-up, to achieve gate-level efficiency. The two-phase methodology is time-efficient but may require substantial memory unless the look-up tables are carefully optimized. This dissertation also develops fault probability estimation based on a workload-specific voltage distribution, rather than a fixed worst-case voltage. The proposed methodology is implemented on an OpenSPARC design targeting a 32 nm technology node. Analysis indicates the proposed fault modeling and injection flow reduces runtime overhead by 24× compared to the previously best-known gate-level fault simulator while retaining circuit-level accuracy.
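The two-phase structure described above can be sketched as follows: an offline characterization pass builds a voltage-to-fault-probability table, and the runtime injector replaces circuit simulation with a cheap table lookup per cycle. The probability model, bin edges, and voltage trace are all illustrative, not taken from the dissertation.

```python
import random

NOMINAL_V = 1.0  # illustrative nominal supply voltage

def precharacterize(voltage_bins):
    """Phase 1 (offline): map each supply-voltage bin to a timing-fault
    probability. A real flow derives these from circuit-level simulation;
    here a toy quadratic-in-droop model stands in."""
    table = {}
    for v in voltage_bins:
        droop = max(0.0, NOMINAL_V - v)
        table[v] = min(1.0, 5.0 * droop ** 2)
    return table

def inject_faults(table, voltage_trace, rng):
    """Phase 2 (runtime): per-cycle table lookup and Bernoulli injection,
    avoiding any circuit-level simulation in the loop."""
    bins = sorted(table)
    faults = []
    for cycle, v in enumerate(voltage_trace):
        nearest = min(bins, key=lambda b: abs(b - v))  # snap to a bin
        if rng.random() < table[nearest]:
            faults.append(cycle)
    return faults

rng = random.Random(0)
table = precharacterize([0.80, 0.85, 0.90, 0.95, 1.00])
trace = [1.0, 0.98, 0.85, 0.82, 1.0]  # workload-specific voltage samples
print(inject_faults(table, trace, rng))
```

Using a workload-specific voltage trace, rather than a single worst-case droop, is what lets the fault probability vary cycle by cycle, in the spirit of the workload-specific estimation described above.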
Learning a high-dimensional classification rule using auxiliary outcomes
Correlated outcomes are common in many practical problems. Based on a decomposition of estimation bias into two types, within-subspace and against-subspace, we develop a robust approach to estimating the classification rule for the outcome of interest in the presence of auxiliary outcomes in high-dimensional settings. The proposed method includes a pooled estimation step using all outcomes to gain efficiency, and a subsequent calibration step using only the outcome of interest to correct both types of bias. We show that when the pooled estimator has a low estimation error and a sparse against-subspace bias, the calibrated estimator can achieve a lower estimation error than an estimator using only the single outcome of interest. An inference procedure for the calibrated estimator is also provided. Simulations and a real data analysis demonstrate the superiority of the proposed method.
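The pool-then-calibrate structure can be sketched in a toy linear setting: a pooled fit across all correlated outcomes for efficiency, followed by a calibration fit on the primary outcome to correct the bias introduced by pooling. The dimensions, data-generating model, and unpenalized least-squares fits are illustrative only, not the paper's high-dimensional estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 1.0                                 # primary-outcome signal
y_primary = X @ beta + 0.5 * rng.standard_normal(n)
y_aux = X @ (beta + 0.1) + 0.5 * rng.standard_normal(n)  # shifted auxiliary

# Step 1: pooled estimation using both outcomes (efficiency gain,
# but biased toward the average of the two coefficient vectors).
X_stacked = np.vstack([X, X])
y_pooled = np.concatenate([y_primary, y_aux])
beta_pool = np.linalg.lstsq(X_stacked, y_pooled, rcond=None)[0]

# Step 2: calibration using only the primary outcome -- regress the
# primary residuals on X to estimate a bias correction.
delta = np.linalg.lstsq(X, y_primary - X @ beta_pool, rcond=None)[0]
beta_cal = beta_pool + delta

print(np.linalg.norm(beta_cal - beta))  # error of calibrated estimator
```

In this unregularized toy, the calibration step exactly undoes the pooling bias; the paper's point is that with sparsity-aware estimation in high dimensions, the calibrated estimator can retain the pooled step's efficiency gain while removing the bias.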