
    An Atypical Survey of Typical-Case Heuristic Algorithms

    Full text link
    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice.
    Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News

    Calibration of Computational Models with Categorical Parameters and Correlated Outputs via Bayesian Smoothing Spline ANOVA

    Full text link
    It has become commonplace to use complex computer models to predict outcomes in regions where data do not exist. Typically these models need to be calibrated and validated using some experimental data, which often consist of multiple correlated outcomes. In addition, some of the model parameters may be categorical in nature, such as a pointer variable to alternate models (or submodels) for some of the physics of the system. Here we present a general approach for calibration in such situations, where an emulator of the computationally demanding models and a discrepancy term from the model to reality are represented within a Bayesian Smoothing Spline (BSS) ANOVA framework. The BSS-ANOVA framework has several advantages over the traditional Gaussian Process, including ease of handling categorical inputs and correlated outputs, and improved computational efficiency. Finally, this framework is applied to the problem that motivated its design: a calibration of a computational fluid dynamics model of a bubbling fluidized bed that is used as an absorber in a CO2 capture system.
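The calibration setting described above can be illustrated with a deliberately tiny sketch: all names, numbers, and models below are hypothetical, a flat grid posterior stands in for the paper's BSS-ANOVA emulator, and no discrepancy term is modelled. The point is only to show a categorical "submodel pointer" being calibrated jointly with a continuous parameter against noisy field data.

```python
# Minimal, illustrative Bayesian calibration with a categorical parameter.
# NOT the paper's method: a brute-force grid posterior replaces the
# BSS-ANOVA emulator, and the data are synthetic.

def submodel(kind, theta, x):
    # The categorical input selects between two alternate physics models.
    return theta * x if kind == "linear" else theta * x ** 2

def log_likelihood(kind, theta, xs, ys, sigma=0.1):
    # Gaussian observation error with known sigma (an assumption).
    return sum(-0.5 * ((y - submodel(kind, theta, x)) / sigma) ** 2
               for x, y in zip(xs, ys))

# Synthetic "field data" generated from the linear model with theta = 1.2.
xs = [0.1 * i for i in range(1, 11)]
ys = [1.2 * x for x in xs]

# Grid posterior over (kind, theta) with a flat prior: the posterior mode
# is simply the grid point with the highest log-likelihood.
grid = [(k, 0.8 + 0.01 * j) for k in ("linear", "quadratic") for j in range(81)]
logp = {g: log_likelihood(g[0], g[1], xs, ys) for g in grid}
best = max(logp, key=logp.get)
print(best)  # the linear submodel, with theta near 1.2, should win
```

A real analysis would of course place the emulator and a model-to-reality discrepancy term inside the posterior rather than evaluating the simulator directly on a grid.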

    Computational Based Investigation of Lattice Cell Optimization under Uniaxial Compression Load

    Get PDF
    Structural optimization is a methodology used to generate novel structures within a design space by finding a maximum or minimum point within a set of constraints. Topology optimization, as a subset of structural optimization, is often used as a means for light-weighting a structure while maintaining mechanical performance. This article presents the mathematical basis for topology optimization, focused primarily on the Bi-directional Evolutionary Structural Optimization (BESO) and Solid Isotropic Material with Penalization (SIMP) methodologies, and then applies the SIMP methodology to a case study of additively manufactured lattice cells. Three lattice designs were used: the Diamond, I-WP, and Primitive cells. These designs are all based on Triply Periodic Minimal Surfaces (TPMS). Individual lattice cells were subjected to a uniaxial compression load, then optimized for these load conditions. The optimized cells were then compared to the base cell designs, noting changes in the stress field response, and the maximum and minimum stress values. Overall, topology optimization proved its utility under this loading condition, with each cell seeing a net gain in performance when considering the volume reduction. The I-WP lattice saw a significant stress reduction in conjunction with the mass and volume reduction, marking a notable increase in cell performance.
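The core of the SIMP methodology named above is a power-law interpolation of element stiffness as a function of a continuous density variable. The sketch below shows that standard interpolation only; it is an illustration of the general technique, not the article's implementation, and the default constants are conventional choices rather than values from the case study.

```python
# Sketch of the SIMP (Solid Isotropic Material with Penalization)
# material interpolation. Each finite element carries a density variable
# rho in [0, 1], and its Young's modulus is interpolated as
#   E(rho) = E_min + rho**p * (E0 - E_min)
# The penalization exponent p (conventionally 3) makes intermediate
# densities structurally inefficient, pushing the optimizer toward
# near solid/void (0/1) designs.

def simp_modulus(rho, E0=1.0, E_min=1e-9, p=3.0):
    """Penalized Young's modulus for an element of density rho."""
    return E_min + (rho ** p) * (E0 - E_min)

# Intermediate densities are strongly penalized: rho = 0.5 yields
# roughly 0.125 * E0 rather than 0.5 * E0, so half-dense material
# "costs" half the volume but returns only an eighth of the stiffness.
for rho in (0.0, 0.5, 1.0):
    print(rho, simp_modulus(rho))
```

The small nonzero `E_min` keeps the stiffness matrix non-singular when elements become void, a standard numerical device in SIMP implementations.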

    The spin temperature of high-redshift damped Lyman-α systems

    Get PDF
    We report results from a programme aimed at investigating the temperature of neutral gas in high-redshift damped Lyman-α absorbers (DLAs). This involved (1) HI 21cm absorption studies of a large DLA sample, (2) VLBI studies to measure the low-frequency quasar core fractions, and (3) optical/ultraviolet spectroscopy to determine DLA metallicities and velocity widths. Including literature data, our sample consists of 37 DLAs with estimates of the spin temperature T_s and the covering factor. We find a strong (4σ) difference between the T_s distributions in high-z (z > 2.4) and low-z (z < 2.4) DLA samples. The high-z sample contains more systems with high T_s values, ≳1000 K. The T_s distributions in DLAs and the Galaxy are also clearly (~6σ) different, with more high-T_s sightlines in DLAs than in the Milky Way. The high T_s values in the high-z DLAs of our sample arise due to low fractions of the cold neutral medium. For 29 DLAs with metallicity [Z/H] estimates, we confirm the presence of an anti-correlation between T_s and [Z/H], at 3.5σ significance via a non-parametric Kendall-tau test. This result was obtained with the assumption that the DLA covering factor is equal to the core fraction. Monte Carlo simulations show that the significance of the result is only marginally decreased if the covering factor and the core fraction are uncorrelated, or if there is a random error in the inferred covering factor. We also find evidence for redshift evolution in DLA T_s values even for the z > 1 sub-sample. Since z > 1 DLAs have angular diameter distances comparable to or larger than those of the background quasars, they have similar efficiency in covering the quasars. Low covering factors in high-z DLAs thus cannot account for the observed redshift evolution in spin temperatures. (Abstract abridged.)
    Comment: 37 pages, 22 figures. Accepted for publication in Monthly Notices of the Royal Astronomical Society
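The spin temperature inferred in studies like this one comes from the standard optically thin HI 21cm relation between column density, integrated optical depth, and covering factor. The sketch below encodes that textbook relation; the example numbers are illustrative, not values from the paper.

```python
# Standard optically thin HI 21cm relation (textbook form, not the
# paper's pipeline):
#   N(HI) = 1.823e18 * (T_s / f) * integral(tau dv)
# with N(HI) in cm^-2, T_s in K, the velocity-integrated optical depth
# in km/s, and f the covering factor. Rearranged for T_s:

def spin_temperature(N_HI, tau_dv, f=1.0):
    """Harmonic-mean spin temperature T_s in K, from the HI column
    density (cm^-2), integrated 21cm optical depth (km/s), and
    covering factor f."""
    return f * N_HI / (1.823e18 * tau_dv)

# Illustrative numbers (not from the paper): a DLA at the threshold
# column density N(HI) = 2e20 cm^-2 with integrated optical depth
# 0.1 km/s and full covering gives T_s ~ 1100 K -- a "high-T_s"
# sightline in the paper's terminology.
print(spin_temperature(2e20, 0.1))
```

Note the dependence on `f`: halving the assumed covering factor halves the inferred T_s, which is why the paper's core-fraction measurements matter for the result.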

    Smoke-Free Policy in Vermont Public Housing Authorities

    Get PDF
    Introduction. Millions of adults and children living in public housing face exposure to secondhand smoke from adjacent apartments. These tenants are less able to escape smoke exposure by moving, and Housing Authorities are beginning to implement smoke-free policies. We assessed the status of smoke-free policy in Vermont public housing, and explored the experience of tenants and managers in Burlington who recently implemented such a policy.

    The HP0256 gene product is involved in motility and cell envelope architecture of Helicobacter pylori

    Get PDF
    Background: Helicobacter pylori is the causative agent for gastritis, and peptic and duodenal ulcers. The bacterium displays 5-6 polar sheathed flagella that are essential for colonisation and persistence in the gastric mucosa. The biochemistry and genetics of flagellar biogenesis in H. pylori have not been fully elucidated. Bioinformatics analysis suggested that the gene HP0256, annotated as hypothetical, was a FliJ homologue. In Salmonella, FliJ is a chaperone escort protein for FlgN and FliT, two proteins that themselves display chaperone activity for components of the hook, the rod and the filament. Results: Ablation of the HP0256 gene in H. pylori significantly reduced motility. However, flagellin and hook protein synthesis was not affected in the HP0256 mutant. Transmission electron microscopy revealed that the HP0256 mutant cells displayed a normal flagellum configuration, suggesting that HP0256 was not essential for assembly and polar localisation of the flagella in the cell. Interestingly, whole-genome microarrays of an HP0256 mutant revealed transcriptional changes in a number of genes associated with the flagellar regulon and the cell envelope, such as outer membrane proteins and adhesins. Consistent with the array data, lack of the HP0256 gene significantly reduced adhesion and the inflammatory response in host cells. Conclusions: We conclude that HP0256 is not a functional counterpart of FliJ in H. pylori. However, it is required for full motility and it is involved, possibly indirectly, in expression of outer membrane proteins and adhesins involved in pathogenesis and adhesion.

    Economic evaluation of access to musculoskeletal care: The case of waiting for total knee arthroplasty

    Get PDF
    BACKGROUND: The projected demand for total knee arthroplasty (TKA) is staggering. At its root, the solution involves increasing supply or decreasing demand. Other developed nations have used rationing and wait times to distribute this service. However, the economic impact and cost-effectiveness of waiting for TKA are unknown. METHODS: A Markov decision model was constructed for a cost-utility analysis of three treatment strategies for end-stage knee osteoarthritis in a cohort of 60-year-old patients: 1) TKA without delay, 2) a waiting period with no non-operative treatment, and 3) a non-operative treatment bridge during that waiting period. Outcome probabilities and effectiveness were derived from the literature. Costs were estimated from the societal perspective with national average Medicare reimbursement. Effectiveness was expressed in quality-adjusted life years (QALYs) gained. Principal outcome measures were average incremental costs, effectiveness, quality-adjusted life years, and net health benefits. RESULTS: In the base case, a 2-year wait time both with and without a non-operative treatment bridge resulted in fewer average QALYs gained (11.57 (no bridge) and 11.95 (bridge) vs. 12.14 (no delay)). The average cost was $1,660 higher for TKA without delay than for wait time with no bridge, but $1,810 less than for wait time with a non-operative bridge. The incremental cost-effectiveness ratio comparing wait time with no bridge to TKA without delay was $2,901/QALY. When comparing TKA without delay to waiting with a non-operative bridge, TKA without delay produced greater utility at a lower cost to society. CONCLUSIONS: TKA without delay is the preferred cost-effective treatment strategy when compared to waiting for TKA without a non-operative bridge. TKA without delay is cost saving when a non-operative bridge is used during the waiting period. As it is unlikely that patients waiting for TKA would not receive non-operative treatment, TKA without delay may be an overall cost-saving health care delivery strategy. Policies aimed at increasing the supply of TKA should be considered, as savings exist that could indirectly fund those strategies.
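The reported incremental cost-effectiveness ratio can be sanity-checked from the rounded figures in the abstract. The calculation below uses those rounded values, so it lands near, but not exactly on, the paper's $2,901/QALY, which was presumably computed from unrounded model outputs.

```python
# Illustrative ICER recomputation from the abstract's rounded figures
# (not the paper's exact model outputs).

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: incremental cost per
    QALY gained by the more effective strategy."""
    return delta_cost / delta_qaly

# TKA without delay vs. a 2-year wait with no non-operative bridge:
delta_cost = 1660.0          # $: TKA without delay costs more on average
delta_qaly = 12.14 - 11.57   # QALYs gained by avoiding the wait
print(round(icer(delta_cost, delta_qaly)))  # ~2912, near the reported $2,901/QALY
```

The third comparison in the abstract needs no ICER at all: when one strategy is both cheaper and more effective (TKA without delay vs. waiting with a bridge), it simply dominates.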

    Visible light carrier generation in co-doped epitaxial titanate films

    Full text link
    Perovskite titanates such as SrTiO3 (STO) exhibit a wide range of important functional properties, including high electron mobility, ferroelectricity, and excellent photocatalytic performance. However, the wide optical band gap of titanates limits their use in these applications, making them ill-suited for integration into solar energy harvesting technologies. Our recent work has shown that by doping STO with equal concentrations of La and Cr we can enhance visible light absorption in epitaxial thin films while avoiding any compensating defects. In this work, we explore the optical properties of photoexcited carriers in these films. Using spectroscopic ellipsometry, we show that the Cr3+ dopants, which produce electronic states immediately above the top of the O 2p valence band in STO, reduce the direct band gap of the material from 3.75 eV to between 2.4 and 2.7 eV, depending on doping levels. Transient reflectance spectroscopy measurements are in agreement with the observations from ellipsometry and confirm that optically generated carriers are present for longer than 2 ns. Finally, through photoelectrochemical methylene blue degradation measurements, we show that these co-doped films exhibit enhanced visible light photocatalysis when compared to pure STO.
    Comment: 19 pages including supplement, 8 figures (3 main, 5 supplement)
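A quick back-of-envelope conversion (not from the paper) shows why the reported band-gap reduction matters for solar applications: translating the gap energies into absorption-edge wavelengths via λ = hc/E moves the edge from the ultraviolet into the visible.

```python
# Band gap (eV) to absorption-edge wavelength (nm) via lambda = hc/E,
# using hc ~ 1239.84 eV*nm. A generic optics conversion, not a
# calculation from the paper.

def edge_wavelength_nm(gap_eV):
    """Absorption-edge wavelength in nm for a direct band gap in eV."""
    return 1239.84 / gap_eV

# Pure STO (3.75 eV) absorbs only in the UV (~331 nm), while the
# co-doped films (2.4-2.7 eV) absorb well into the visible
# (~459-517 nm), overlapping a much larger share of the solar spectrum.
for gap in (3.75, 2.7, 2.4):
    print(gap, round(edge_wavelength_nm(gap), 1))
```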

    Reconciling the local galaxy population with damped Ly-alpha cross sections and metal abundances

    Get PDF
    A comprehensive analysis of 355 high-quality WSRT HI 21-cm line maps of nearby galaxies shows that the properties and incidence rate of Damped Lyman-alpha (DLA) absorption systems observed in the spectra of high-redshift QSOs are in good agreement with DLAs originating in the gas disks of galaxies like those in the z~0 population. Comparison of low-z DLA statistics with the HI incidence rate and column density distribution f(N) for the local galaxy sample shows no evidence for evolution in the integral "cross-section density" below z~1.5, implying that there is no need for a hidden population of galaxies or HI clouds to contribute significantly to the DLA cross section. Compared with z~4, our data indicate evolution of a factor of two in the comoving density along a line of sight. We find that dN/dz(z=0) = 0.045 +/- 0.006. The idea that the local galaxy population can explain the DLAs is further strengthened by comparing the properties of DLAs and DLA galaxies with the expectations based on our analysis of local galaxies. The distributions of luminosities of DLA host galaxies, and of impact parameters between QSOs and the centres of DLA galaxies, are in good agreement with what is expected from local galaxies. Approximately 87% of low-z DLA galaxies are expected to be fainter than L* and 37% have impact parameters less than 1'' at z=0.5. The analysis shows that some host galaxies with very low impact parameters and low luminosities are expected to be missed in optical follow-up surveys. The well-known metallicity-luminosity relation in galaxies, in combination with metallicity gradients in galaxy disks, causes the expected median metallicity of low-redshift DLAs to be low (~1/7 solar), which is also in good agreement with observations of low-z DLAs. (Abridged)
    Comment: 22 pages, 22 figures. Accepted for publication in MNRAS. Fixed typo
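The quoted incidence rate dN/dz(z=0) = 0.045 has a simple interpretation that the small sketch below makes explicit (an illustration of the definition, not a calculation from the paper): it is the mean number of DLAs intercepted per unit redshift path at low redshift.

```python
# What dN/dz means in practice: for a (locally) constant incidence rate,
# the expected number of absorbers along a sightline is the rate times
# the redshift path length. Illustrative only; valid near z~0 where the
# paper reports dN/dz = 0.045 +/- 0.006.

def expected_dlas(dn_dz, delta_z):
    """Mean number of DLAs intercepted over a redshift interval delta_z,
    assuming a constant incidence rate."""
    return dn_dz * delta_z

# A low-redshift sightline spanning delta_z = 1 intercepts ~0.045 DLAs
# on average, i.e. roughly one DLA per ~22 such sightlines.
print(expected_dlas(0.045, 1.0))
```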