
    A Proper Motion Survey for White Dwarfs with the Wide Field Planetary Camera 2

    We have performed a search for halo white dwarfs as high proper motion objects in a second epoch WFPC2 image of the Groth-Westphal strip. We identify 24 high proper motion objects with mu > 0.014 ''/yr. Five of these high proper motion objects are identified as strong white dwarf candidates on the basis of their position in a reduced proper motion diagram. We create a model of the Milky Way thin disk, thick disk, and stellar halo and find that this sample of white dwarfs is clearly in excess of the < 2 detections expected from these known stellar populations. The origin of the excess signal is less clear; possibly, the excess cannot be explained without invoking a fourth Galactic component: a white dwarf dark halo. We present a statistical separation of our sample into the four components and estimate the corresponding local white dwarf densities using only the directly observable variables V, V-I, and mu. For all Galactic models explored, our sample separates into about 3 disk white dwarfs and 2 halo white dwarfs. However, the further subdivision into the thin and thick disk and the stellar and dark halo, and the subsequent calculation of the local densities, are sensitive to the input parameters of our model for each Galactic component. Using the lowest mean mass model for the dark halo, we find a 7% white dwarf halo and six times the canonical value for the thin disk white dwarf density (at marginal statistical significance), but possible systematic errors due to uncertainty in the model parameters likely dominate these statistical error bars. The white dwarf halo can be reduced to around 1.5% of the halo dark matter by changing the initial mass function slightly. The local thin disk white dwarf density in our solution can be made consistent with the canonical value by assuming a larger thin disk scale height of 500 pc. Comment: revised version, accepted by ApJ, results unchanged, discussion expanded.
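    The candidate selection described above works directly from the observables V, V-I, and mu via the reduced proper motion H_V = V + 5 log10(mu) + 5 (mu in ''/yr). The Python sketch below is a minimal illustration of such a diagram; the example values and the linear cut separating white dwarf candidates from the subdwarf locus are assumptions, not the paper's actual selection.

        import numpy as np

        def reduced_proper_motion(V, mu_arcsec_per_yr):
            # H_V = V + 5 * log10(mu) + 5, with mu in arcseconds per year
            return V + 5.0 * np.log10(mu_arcsec_per_yr) + 5.0

        # Hypothetical detections (V magnitude, V-I colour, proper motion in ''/yr);
        # the survey limit quoted above is mu > 0.014 ''/yr.
        V  = np.array([24.1, 25.3, 23.8])
        VI = np.array([0.4, 1.1, 0.2])
        mu = np.array([0.020, 0.033, 0.016])

        H_V = reduced_proper_motion(V, mu)

        # Illustrative cut only: white dwarf candidates sit well below (fainter than)
        # the subdwarf locus in the (V-I, H_V) plane. This boundary is an assumption,
        # not the paper's actual cut.
        is_wd_candidate = H_V > 15.0 + 5.0 * VI
        print(list(zip(np.round(H_V, 2), is_wd_candidate)))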

    Agent based mobile negotiation for personalized pricing of last minute theatre tickets

    Post-print of a paper published in Expert Systems with Applications; Copyright @ 2012 Elsevier B.V. This paper proposes an agent based mobile negotiation framework for personalized pricing of last minute theatre tickets, whose values depend on the time remaining to the performance and the locations of potential customers. In particular, case based reasoning and fuzzy cognitive map techniques are adopted in the negotiation framework to identify the best initial offer zone, and a multi-criteria decision method is used in the scoring function to evaluate offers. The proposed framework is tested via a computer simulation in which the personalized pricing policy shows higher market performance than other policies, supporting the validity of the proposed negotiation framework. Funded by the Ministry of Education, Science and Technology (Korea).
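    The multi-criteria scoring of offers mentioned above can be pictured as a weighted utility over criteria such as price, time remaining to the performance, and customer distance. The sketch below is only illustrative; the criteria, normalisations, and weights are assumptions rather than the framework's actual scoring function.

        def score_offer(price, minutes_to_show, distance_km,
                        max_price=100.0, max_minutes=240.0, max_distance=20.0,
                        weights=(0.5, 0.3, 0.2)):
            """Weighted multi-criteria score in [0, 1]; higher is better for the seller agent.

            All parameter names, normalisations, and weights are illustrative assumptions.
            """
            w_price, w_time, w_dist = weights
            price_term = price / max_price                    # higher price favours the seller
            time_term  = 1.0 - minutes_to_show / max_minutes  # closer to curtain, more urgency to sell
            dist_term  = 1.0 - distance_km / max_distance     # nearer customer, more likely to attend
            return w_price * price_term + w_time * time_term + w_dist * dist_term

        # Example: compare two counter-offers received during negotiation
        print(score_offer(price=40.0, minutes_to_show=90, distance_km=2.0))
        print(score_offer(price=55.0, minutes_to_show=90, distance_km=12.0))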

    Multi-agent knowledge integration mechanism using particle swarm optimization

    Post-print of a paper published in Technological Forecasting and Social Change; Copyright @ 2011 Elsevier B.V. Unstructured group decision-making is burdened with several central difficulties: unifying the knowledge of multiple experts in an unbiased manner and computational inefficiencies. In addition, a proper means of storing such unified knowledge for later use has not yet been established. Storage difficulties stem from the integration of the logic underlying multiple experts' decision-making processes and the structured quantification of the impact of each opinion on the final product. To address these difficulties, this paper proposes a novel approach called the multiple agent-based knowledge integration mechanism (MAKIM), in which a fuzzy cognitive map (FCM) is used as a knowledge representation and storage vehicle. In this approach, we use particle swarm optimization (PSO) to adjust causal relationships and causality coefficients from the perspective of global optimization. Once an optimized FCM is constructed, an agent-based model (ABM) is applied to the inference of the FCM to solve a real-world problem. The final aggregate knowledge is stored in FCM form and is used to produce proper inference results for other target problems. To test the validity of our approach, we applied MAKIM to a real-world group decision-making problem, an IT project risk assessment, and found MAKIM to be statistically robust. Funded by the Ministry of Education, Science and Technology (Korea).
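    A minimal sketch of the PSO-over-FCM idea described above, assuming a small map whose causal weight matrix is tuned so that its steady-state activations match a target consensus vector; the concept count, target vector, and PSO hyperparameters are all assumptions for illustration, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        N_CONCEPTS, N_PARTICLES, N_ITERS = 4, 20, 200
        target = np.array([0.8, 0.3, 0.6, 0.5])      # assumed consensus activations

        def fcm_infer(W, x0, steps=20):
            """Iterate x_{t+1} = sigmoid(W @ x_t); W holds causal weights in [-1, 1]."""
            x = x0.copy()
            for _ in range(steps):
                x = 1.0 / (1.0 + np.exp(-W @ x))
            return x

        def fitness(flat_w):
            W = flat_w.reshape(N_CONCEPTS, N_CONCEPTS)
            x = fcm_infer(W, np.full(N_CONCEPTS, 0.5))
            return np.sum((x - target) ** 2)          # lower is better

        # Standard global-best PSO over the flattened weight matrix
        dim = N_CONCEPTS * N_CONCEPTS
        pos = rng.uniform(-1, 1, (N_PARTICLES, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(N_ITERS):
            r1, r2 = rng.random((2, N_PARTICLES, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -1, 1)
            vals = np.array([fitness(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best fit:", pbest_val.min())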

    Pathologic Correlation of PET-CT Based Auto Contouring for Radiation Planning in Lung Cancer

    Purpose/Objective(s): Radiation therapy in lung cancer relies on CT and functional imaging (FDG-PET) to delineate tumor volumes. Semi-automatic contouring tools have been developed for PET to improve on the inter-observer bias of manual contouring and intrinsic differences in imaging equipment. A common method involves thresholding at a given percentage of the maximum activity, which may be less accurate for smaller tumors and tumors with a low source-to-background ratio. To overcome this deficiency, a gradient algorithm, which detects changes in image counts at the border of the tumor, has been developed. Few studies have correlated these methods with pathological specimens. Presented at the American Society for Therapeutic Radiation Oncology (ASTRO) 52nd Annual Meeting, October 31 - November 4, San Diego, CA.
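    The fixed-percentage threshold method referred to above is easy to illustrate; a gradient-based method would instead locate the steepest falloff in counts at the tumor border. The sketch below applies percentage-of-maximum thresholding to a synthetic uptake image; the 42% threshold and the synthetic data are assumptions for illustration, not clinical values.

        import numpy as np

        def threshold_contour(activity, percent_of_max=0.42):
            """Return a binary tumor mask at a fixed fraction of the maximum uptake.

            percent_of_max = 0.42 is an illustrative assumption; real thresholds
            depend on tumor size and source-to-background ratio, which is the
            limitation the gradient method is meant to address.
            """
            return activity >= percent_of_max * activity.max()

        # Synthetic 2D "PET slice": a bright Gaussian blob on a noisy background
        yy, xx = np.mgrid[0:64, 0:64]
        blob = 10.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 6.0 ** 2))
        activity = blob + np.random.default_rng(1).normal(1.0, 0.2, blob.shape)

        mask = threshold_contour(activity)
        print("contoured voxels:", int(mask.sum()))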

    Genetic Variation in an Individual Human Exome

    There is much interest in characterizing the variation in a human individual, because this may elucidate what contributes significantly to a person's phenotype, thereby enabling personalized genomics. We focus here on the variants in a person's ‘exome,’ which is the set of exons in a genome, because the exome is believed to harbor much of the functional variation. We provide an analysis of the ∼12,500 variants that affect the protein coding portion of an individual's genome. We identified ∼10,400 nonsynonymous single nucleotide polymorphisms (nsSNPs) in this individual, of which ∼15–20% are rare in the human population. We predict that ∼1,500 nsSNPs affect protein function, and these tend to be heterozygous, rare, or novel. Of the ∼700 coding indels, approximately half have lengths that are a multiple of three, which causes insertions/deletions of amino acids in the corresponding protein rather than introducing frameshifts. Coding indels also occur frequently at the termini of genes, so even if an indel causes a frameshift, an alternative start or stop site in the gene can still be used to make a functional protein. In summary, we reduced the set of ∼12,500 nonsilent coding variants by ∼8-fold to a set of variants that are most likely to have major effects on their proteins' functions. This is our first glimpse of an individual's exome and a snapshot of the current state of personalized genomics. The majority of coding variants in this individual are common and appear to be functionally neutral. Our results also indicate that some variants can be used to improve the current NCBI human reference genome. As more genomes are sequenced, many rare variants and non-SNP variants will be discovered. We present an approach to analyze the coding variation in humans by proposing multiple bioinformatic methods to home in on possible functional variation.
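    The indel bookkeeping described above (in-frame when the length is a multiple of three, frameshift otherwise) and the flagging of rare, predicted-damaging nsSNPs reduce to a few simple rules. The sketch below assumes a simplified per-variant record; the field names and the 1% rarity cutoff are assumptions, not the paper's pipeline.

        from dataclasses import dataclass

        @dataclass
        class CodingVariant:
            kind: str               # "nsSNP" or "indel"
            allele_freq: float      # population allele frequency (assumed available)
            indel_length: int = 0   # bases inserted or deleted, 0 for SNPs
            predicted_damaging: bool = False

        def classify(v: CodingVariant) -> str:
            if v.kind == "indel":
                # In-frame if the length is a multiple of three, else a frameshift
                return "in-frame indel" if v.indel_length % 3 == 0 else "frameshift indel"
            if v.predicted_damaging and v.allele_freq < 0.01:   # rare and predicted functional
                return "candidate functional nsSNP"
            return "likely neutral nsSNP"

        variants = [
            CodingVariant("nsSNP", allele_freq=0.35),
            CodingVariant("nsSNP", allele_freq=0.002, predicted_damaging=True),
            CodingVariant("indel", allele_freq=0.01, indel_length=6),
            CodingVariant("indel", allele_freq=0.004, indel_length=4),
        ]
        for v in variants:
            print(classify(v))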

    The MACHO Project HST Follow-Up: The Large Magellanic Cloud Microlensing Source Stars

    We present Hubble Space Telescope (HST) WFPC2 photometry of 13 microlensed source stars from the 5.7 year Large Magellanic Cloud (LMC) survey conducted by the MACHO Project. The microlensing source stars are identified by deriving accurate centroids in the ground-based MACHO images using difference image analysis (DIA) and then transforming the DIA coordinates to the HST frame. None of these sources is coincident with a background galaxy, which rules out the possibility that the MACHO LMC microlensing sample is contaminated with misidentified supernovae or AGN in galaxies behind the LMC. This supports the conclusion that the MACHO LMC microlensing sample has only a small amount of contamination due to non-microlensing forms of variability. We compare the WFPC2 source star magnitudes with the lensed flux predictions derived from microlensing fits to the light curve data. In most cases the source star brightness is accurately predicted. Finally, we develop a statistic which constrains the location of the LMC microlensing source stars with respect to the distributions of stars and dust in the LMC and compare this to the predictions of various models of LMC microlensing. This test excludes at > 90% confidence level models where more than 80% of the source stars lie behind the LMC. Exotic models that attempt to explain the excess LMC microlensing optical depth seen by MACHO with a population of background sources are disfavored or excluded by this test. Models in which most of the lenses reside in a halo or spheroid distribution associated with either the Milky Way or the LMC are consistent with these data, but LMC halo or spheroid models are favored by the combined MACHO and EROS microlensing results. Comment: 28 pages with 10 included PDF figures, submitted to ApJ.
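    Comparing a source star's measured WFPC2 magnitude with the lensed flux predicted by a microlensing fit uses the standard point-source point-lens magnification A(u) = (u^2 + 2) / (u sqrt(u^2 + 4)). The sketch below illustrates that comparison; the fit parameters and baseline magnitude are invented for illustration and are not values from the MACHO light curves.

        import numpy as np

        def pspl_magnification(t, t0, tE, u0):
            """Point-source point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
            u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
            return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

        # Illustrative fit parameters, not values from the survey
        t0, tE, u0 = 1200.0, 45.0, 0.3      # peak time (days), Einstein time (days), impact parameter
        baseline_V = 21.4                    # assumed unlensed source magnitude

        t = np.array([t0 - 2 * tE, t0, t0 + 2 * tE])
        A = pspl_magnification(t, t0, tE, u0)
        lensed_V = baseline_V - 2.5 * np.log10(A)
        print(np.round(lensed_V, 2))         # the peak epoch should be brighter (smaller magnitude)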