
    Gentrifying with family wealth: Parental gifts and neighbourhood sorting among young adult owner-occupants

    This paper assesses the role of parental gifts in neighbourhood sorting among young adult homebuyers. We make use of high-quality individual-level registry data for two large urban metropolitan areas in the Netherlands. While previous studies have shown that young adults receiving gifts purchase more expensive housing, little is known about the role of gifts in where young adults buy. Our study finds that parental gifts flow into the housing market in a spatially uneven way. Movers supported by substantial parental gifts are more likely to enter owner-occupied housing in high-status and gentrifying urban neighbourhoods than movers without gifts. This can only partially be explained by household and parental characteristics and by the uneven distribution of housing values. The remaining effect suggests that parental gifts also play a role in trade-offs in spatial residential decision-making. The conclusion discusses the ramifications of our findings for debates on the (re)production of class and intra-generational inequalities through housing, and provides avenues for further research.

    DRA/NASA/ONERA Collaboration on Icing Research

    This report presents results from a joint study by DRA, NASA, and ONERA for the purpose of comparing, improving, and validating the aircraft icing computer codes developed by each agency. These codes are of three kinds: (1) water droplet trajectory prediction, (2) ice accretion modeling, and (3) transient electrothermal deicer analysis. In this joint study, the agencies compared their code predictions with each other and with experimental results. These comparison exercises were published in three technical reports, each with joint authorship. DRA published and had first authorship of Part 1 - Droplet Trajectory Calculations, NASA of Part 2 - Ice Accretion Prediction, and ONERA of Part 3 - Electrothermal Deicer Analysis. The results cover work done during the period from August 1986 to late 1991. As a result, all of the information in this report is dated. Where necessary, current information is provided to show the direction of current research. In the present report on ice accretion, each agency predicted ice shapes on two-dimensional airfoils under icing conditions for which experimental ice shapes were available. In general, all three codes did a reasonable job of predicting the measured ice shapes. For any given experimental condition, one of the three codes predicted the general ice features (i.e., shape, impingement limits, mass of ice) somewhat better than the other two. However, no single code consistently did better than the other two over the full range of conditions examined, which included rime, mixed, and glaze ice conditions. In several of the cases, DRA showed that the user's knowledge of icing can significantly improve the accuracy of the code prediction. Rime ice predictions were reasonably accurate and consistent among the codes, because droplets freeze on impact and the freezing model is simple. Glaze ice predictions were less accurate and less consistent among the codes, because the freezing model is more complex and is critically dependent upon unsubstantiated heat transfer and surface roughness models. Thus, the heat transfer prediction methods used in the codes became the subject of a separate study in this report, comparing predicted heat transfer coefficients with a limited experimental database of heat transfer coefficients for cylinders with simulated glaze and rime ice shapes. The codes did a good job of predicting heat transfer coefficients near the stagnation region of the ice shapes, but in the region of the ice horns all three codes predicted heat transfer coefficients considerably higher than the measured values. An important conclusion of this study is that further research is needed to understand the finer detail of the glaze ice accretion process and to develop improved glaze ice accretion models.
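    As a rough illustration of the rime/glaze distinction discussed above, the following sketch computes a Messinger-style freezing fraction from a greatly simplified surface energy balance. It is not taken from any of the DRA, NASA, or ONERA codes; it ignores evaporation, kinetic heating, and runback, and all input values are hypothetical.

```python
# Hedged sketch, not any of the DRA/NASA/ONERA codes: a Messinger-style
# freezing fraction from a greatly simplified surface energy balance that
# ignores evaporation, kinetic heating, and runback.  All inputs are
# hypothetical illustrative values.
L_FUSION = 3.34e5  # latent heat of fusion of water, J kg^-1


def freezing_fraction(h_conv, t_surface_c, t_air_c, water_flux):
    """Fraction of the impinging water that freezes locally.

    h_conv      convective heat transfer coefficient, W m^-2 K^-1
    t_surface_c surface (freezing) temperature, deg C
    t_air_c     ambient air temperature, deg C
    water_flux  impinging liquid water mass flux, kg m^-2 s^-1
    """
    f = h_conv * (t_surface_c - t_air_c) / (water_flux * L_FUSION)
    return min(max(f, 0.0), 1.0)  # f = 1 -> rime-like growth, f < 1 -> glaze


# Cold, dry case: all impinging water freezes on impact (rime)
print(freezing_fraction(h_conv=500.0, t_surface_c=0.0, t_air_c=-25.0, water_flux=0.02))
# Warmer case: only part of the water freezes, the rest runs back (glaze)
print(freezing_fraction(h_conv=300.0, t_surface_c=0.0, t_air_c=-3.0, water_flux=0.02))
```

    In the glaze regime the result scales directly with the assumed heat transfer coefficient, which mirrors the report's point that glaze predictions hinge on the heat transfer and surface roughness models.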

    Random Costs in Combinatorial Optimization

    The random cost problem is the problem of finding the minimum in an exponentially long list of random numbers. By definition, this problem cannot be solved faster than by exhaustive search. It is shown that a classical NP-hard optimization problem, number partitioning, is essentially equivalent to the random cost problem. This explains the bad performance of heuristic approaches to the number partitioning problem and allows us to calculate the probability distributions of the optimum and sub-optimum costs. Comment: 4 pages, Revtex, 2 figures (eps), submitted to PR
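    A toy experiment makes the point concrete. The sketch below is not from the paper; the instance size and number range are arbitrary choices. It compares the exhaustive-search optimum of a random number-partitioning instance with the result of the common largest-first greedy heuristic.

```python
# Illustrative toy experiment, not taken from the paper: compare exhaustive
# search with the largest-first greedy heuristic on a random
# number-partitioning instance.  The cost is |sum(set A) - sum(set B)|.
import itertools
import random


def exhaustive_cost(numbers):
    """Minimum cost over all 2^n sign assignments (exhaustive search)."""
    best = float("inf")
    for signs in itertools.product((+1, -1), repeat=len(numbers)):
        best = min(best, abs(sum(s * a for s, a in zip(signs, numbers))))
    return best


def greedy_cost(numbers):
    """Largest-first heuristic: put each number into the lighter subset."""
    sums = [0, 0]
    for a in sorted(numbers, reverse=True):
        sums[sums.index(min(sums))] += a
    return abs(sums[0] - sums[1])


random.seed(0)
numbers = [random.randrange(1, 2**20) for _ in range(18)]  # arbitrary instance
print("greedy cost:    ", greedy_cost(numbers))
print("exhaustive cost:", exhaustive_cost(numbers))
# The heuristic typically lands orders of magnitude above the true optimum,
# as expected if the costs behave like an unstructured list of random numbers.
```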

    Phase transition for cutting-plane approach to vertex-cover problem

    We study the vertex-cover problem, which is an NP-hard optimization problem and a prototypical model exhibiting phase transitions on random graphs, e.g., Erdoes-Renyi (ER) random graphs. These phase transitions coincide with changes of the solution space structure, e.g., for the ER ensemble at connectivity c=e=2.7183 from replica symmetric to replica-symmetry broken. For the vertex-cover problem, the typical complexity of exact branch-and-bound algorithms, which proceed by exploring the landscape of feasible configurations, also changes close to this phase transition from "easy" to "hard". In this work, we consider an algorithm which has a completely different strategy: the problem is mapped onto a linear programming problem augmented by a cutting-plane approach, hence the algorithm operates in a space OUTSIDE the space of feasible configurations until the final step, where a solution is found. Here we show that this type of algorithm also exhibits an "easy-hard" transition around c=e, which strongly indicates that the typical hardness of a problem is fundamental to the problem and not due to a specific representation of the problem. Comment: 4 pages, 3 figures
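    For orientation, the sketch below sets up the plain LP relaxation of vertex cover on a sparse random graph and tightens it with odd-cycle inequalities taken from a cycle basis. It is only a hedged illustration of the general idea, not the authors' algorithm, which separates violated cutting planes iteratively; the graph size, connectivity, and scipy/networkx-based implementation are assumptions.

```python
# Hedged sketch of the general idea only, not the authors' algorithm (which
# separates violated inequalities iteratively): LP relaxation of minimum
# vertex cover on a sparse random graph, tightened with odd-cycle cuts taken
# from a cycle basis.
import numpy as np
import networkx as nx
from scipy.optimize import linprog


def lp_vertex_cover(graph, extra_cuts=()):
    n = graph.number_of_nodes()
    rows, rhs = [], []
    for u, v in graph.edges():            # edge constraints: x_u + x_v >= 1
        row = np.zeros(n)
        row[u] = row[v] = -1.0
        rows.append(row)
        rhs.append(-1.0)
    for cycle in extra_cuts:              # odd-cycle cuts: sum_C x_v >= (|C|+1)/2
        row = np.zeros(n)
        row[list(cycle)] = -1.0
        rows.append(row)
        rhs.append(-(len(cycle) + 1) / 2)
    res = linprog(c=np.ones(n), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.fun


G = nx.gnp_random_graph(40, 2.0 / 40, seed=1)   # mean degree c = 2 (transition near c = e)
odd_cycles = [c for c in nx.cycle_basis(G) if len(c) % 2 == 1]
print("plain LP bound:      ", round(lp_vertex_cover(G), 3))
print("with odd-cycle cuts: ", round(lp_vertex_cover(G, odd_cycles), 3))
```

    Both linear programs live in the relaxed space of fractional covers, i.e., outside the space of feasible integer configurations, which is the point the abstract stresses.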

    Ocean chlorofluorocarbon and heat uptake during the twentieth century in the CCSM3

    Author Posting. © American Meteorological Society 2006. This article is posted here by permission of American Meteorological Society for personal use, not for redistribution. The definitive version was published in Journal of Climate 19 (2006): 2366–2381, doi:10.1175/JCLI3758.1. An ensemble of nine simulations for the climate of the twentieth century has been run using the Community Climate System Model version 3 (CCSM3). Three of these runs also simulate the uptake of chlorofluorocarbon-11 (CFC-11) into the ocean using the protocol from the Ocean Carbon Model Intercomparison Project (OCMIP). Comparison with ocean observations taken between 1980 and 2000 shows that the global CFC-11 uptake is simulated very well. However, there are regional biases, and these are used to identify where too much deep-water formation is occurring in the CCSM3. The differences between the three runs simulating CFC-11 uptake are also briefly documented. The variability in ocean heat content in the 1870 control runs is shown to be only a little smaller than estimates using ocean observations. The ocean heat uptake between 1957 and 1996 in the ensemble is compared to the recent observational estimates of the secular trend. The trend in ocean heat uptake is considerably larger than the natural variability in the 1870 control runs. The heat uptake down to 300 m between 1957 and 1996 varies by a factor of 2 across the ensemble. Some possible reasons for this large spread are discussed. There is much less spread in the heat uptake down to 3 km. On average, the CCSM3 twentieth-century ensemble runs take up 25% more heat than the recent estimate from ocean observations. Possible explanations for this are that the model heat uptake is calculated over the whole ocean, and not just in the regions where there are many observations, and that there is no parameterization of the indirect effects of aerosols in CCSM3. Support provided by the National Science Foundation, the Department of Energy, the Ministry of Education, Culture, Sports, Science and Technology, and the Earth Simulator Center of the Japan Agency for Marine-Earth Science and Technology.
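    As a minimal illustration of the diagnostic involved, the sketch below integrates a made-up temperature profile to an upper-300 m heat content and differences two such profiles to get an uptake. The density, heat capacity, and profiles are hypothetical stand-ins, not CCSM3 output or the paper's analysis.

```python
# Illustrative sketch only; the profiles and constants are made-up stand-ins,
# not CCSM3 output.  It shows the basic diagnostic: upper-ocean heat content
# integrated to 300 m, differenced between two times to give a heat uptake.
import numpy as np

RHO = 1025.0   # seawater density, kg m^-3 (nominal)
CP = 3990.0    # specific heat of seawater, J kg^-1 K^-1 (nominal)


def heat_content(temp_c, depth_m, max_depth=300.0):
    """Trapezoidal integral of rho*cp*T over depth, in J m^-2."""
    mask = depth_m <= max_depth
    t, z = temp_c[mask], depth_m[mask]
    return float(np.sum(0.5 * RHO * CP * (t[1:] + t[:-1]) * np.diff(z)))


depth = np.linspace(0.0, 500.0, 51)                # hypothetical model levels
temp_1957 = 20.0 * np.exp(-depth / 150.0)          # hypothetical profiles, deg C
temp_1996 = temp_1957 + 0.3 * np.exp(-depth / 150.0)

uptake = heat_content(temp_1996, depth) - heat_content(temp_1957, depth)
print(f"heat uptake above 300 m: {uptake:.3e} J m^-2")
```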

    Optimization by Quantum Annealing: Lessons from hard 3-SAT cases

    The Path Integral Monte Carlo simulated Quantum Annealing algorithm is applied to the optimization of a large, hard instance of the Random 3-SAT Problem (N=10000). The dynamical behavior of the quantum and the classical annealing are compared, showing important qualitative differences in the way the two methods explore the complex energy landscape of the combinatorial optimization problem. At variance with the results obtained for the Ising spin glass and for the Traveling Salesman Problem, in the present case the performance of linear-schedule Quantum Annealing is definitely worse than that of Classical Annealing. Nevertheless, a quantum cooling protocol based on field-cycling and able to outperform standard classical simulated annealing over short time scales is introduced. Comment: 10 pages, 6 figures, submitted to PR
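    The classical side of the comparison is easy to sketch. The code below is a plain linear-schedule simulated annealing run on a small random 3-SAT instance; it is only the classical baseline, not the Path Integral Monte Carlo quantum annealing used in the paper, and the instance size, clause ratio, and cooling schedule are arbitrary illustrative choices.

```python
# Sketch of the classical baseline only: linear-schedule simulated annealing on
# a small random 3-SAT instance.  This is not the Path Integral Monte Carlo
# quantum annealing used in the paper; instance size, clause ratio and cooling
# schedule are arbitrary illustrative choices (the paper uses N=10000).
import math
import random


def random_3sat(n_vars, n_clauses, rng):
    """Clauses are 3-tuples of signed variable indices (+v plain, -v negated)."""
    return [tuple(rng.choice((1, -1)) * v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]


def unsat_count(clauses, assign):
    """Energy = number of unsatisfied clauses under the truth assignment."""
    return sum(not any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)


def simulated_annealing(clauses, n_vars, steps=20_000, t0=2.0, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    energy = unsat_count(clauses, assign)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9    # linear cooling schedule
        v = rng.randint(1, n_vars)
        assign[v] = not assign[v]                  # propose a single-variable flip
        new_energy = unsat_count(clauses, assign)
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy                    # accept the move
        else:
            assign[v] = not assign[v]              # reject and undo the flip
    return energy


rng = random.Random(1)
n = 50
clauses = random_3sat(n, int(4.25 * n), rng)       # clause ratio near the SAT threshold
print("unsatisfied clauses after annealing:", simulated_annealing(clauses, n))
```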

    Phase Transition in Multiprocessor Scheduling

    The problem of distributing the workload on a parallel computer to minimize the overall runtime is known as the Multiprocessor Scheduling Problem. It is NP-hard, but like many other NP-hard problems, the average hardness of random instances displays an "easy-hard" phase transition. The transition in Multiprocessor Scheduling can be analyzed using elementary notions from crystallography (Bravais lattices) and statistical mechanics (Potts vectors). The analysis reveals the control parameter of the transition and its critical value, including finite-size corrections. The transition is identified in the performance of practical scheduling algorithms. Comment: 6 pages, revtex
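    As a small, self-contained illustration of the problem itself (not of the paper's crystallographic analysis), the sketch below runs the standard longest-processing-time-first heuristic and reports how far its makespan sits above the perfect-balance lower bound. The job count, processor count, and number of bits per processing time are arbitrary choices.

```python
# Minimal sketch, not the paper's analysis: the longest-processing-time-first
# (LPT) heuristic for scheduling n jobs on m processors, reporting how far its
# makespan sits above the perfect-balance lower bound.
import random


def lpt_makespan(times, m):
    """Assign each job (longest first) to the currently least-loaded processor."""
    loads = [0] * m
    for t in sorted(times, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)


random.seed(0)
n_jobs, n_procs, bits = 24, 3, 20
times = [random.getrandbits(bits) + 1 for _ in range(n_jobs)]
lower_bound = max(max(times), -(-sum(times) // n_procs))   # ceil(total / m)
print("LPT makespan:", lpt_makespan(times, n_procs))
print("lower bound :", lower_bound)
```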

    Highlights of the Zeno Results from the USMP-2 Mission

    The Zeno instrument, a high-precision light-scattering spectrometer, was built to measure the decay rates of density fluctuations in xenon near its liquid-vapor critical point in the low-gravity environment of the U.S. Space Shuttle. By eliminating the severe density gradients created in a critical fluid by Earth's gravity, we were able to make measurements to within 100 microkelvin of the critical point. The instrument flew for fourteen days in March 1994 on the Space Shuttle Columbia, STS-62 flight, as part of the very successful USMP-2 payload. We describe the instrument and document its performance on orbit, showing that it comfortably reached the desired 3 microkelvin temperature control of the sample. Locating the critical temperature of the sample on orbit was a scientific challenge; we discuss the advantages and shortcomings of the two techniques we used. Finally, we discuss problems encountered with making measurements of the turbidity of the sample, and close with the results of the measurement of the decay rates of the critical-point fluctuations.
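    For readers unfamiliar with the measurement, the sketch below shows the generic final step of such an experiment: fitting an exponential decay rate to a correlation-function-like signal. The data are synthetic, and the model form, time scale, and decay rate are hypothetical; this is not the Zeno analysis pipeline.

```python
# Synthetic illustration only, not the Zeno analysis pipeline: extract a
# fluctuation decay rate by fitting an exponential to a correlation-function-
# like signal.  Model form, time scale, decay rate and noise are hypothetical.
import numpy as np
from scipy.optimize import curve_fit


def model(t, amplitude, gamma):
    return amplitude * np.exp(-2.0 * gamma * t)   # decay rate gamma in s^-1


t = np.linspace(0.0, 5e-3, 200)                   # lag times, seconds
rng = np.random.default_rng(0)
signal = model(t, 1.0, 800.0) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(model, t, signal, p0=(1.0, 500.0))
print(f"fitted decay rate: {popt[1]:.1f} s^-1")
```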

    Number partitioning as random energy model

    Number partitioning is a classical problem from combinatorial optimisation. In physical terms it corresponds to a long-range antiferromagnetic Ising spin glass. It has been rigorously proven that the low-lying energies of number partitioning behave like uncorrelated random variables. We claim that neighbouring energy levels are uncorrelated almost everywhere on the energy axis, and that energetically adjacent configurations are uncorrelated, too. Apparently there is no relation between geometry (configuration) and energy that could be exploited by an optimization algorithm. This "local random energy" picture of number partitioning is corroborated by numerical simulations and heuristic arguments. Comment: 8+2 pages, 9 figures, PDF only
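    The claim lends itself to a quick numerical check. The sketch below is an independent toy check, not the paper's simulations: it enumerates all energies E = |Σ_i s_i a_i| of a small random instance and inspects the spacings between the lowest levels, which in a random-energy picture should look like those of independent random variables. Instance size and number range are arbitrary.

```python
# Independent toy check, not the paper's simulations: enumerate the energies
# E = |sum_i s_i a_i| of a small random instance and inspect the spacings
# between the lowest levels.
import itertools
import random

random.seed(0)
n = 16
a = [random.randrange(1, 2**24) for _ in range(n)]

# Fix the first spin to +1 so each partition is counted once (a global spin
# flip leaves the energy unchanged).
energies = sorted(
    abs(a[0] + sum(s * ai for s, ai in zip(signs, a[1:])))
    for signs in itertools.product((+1, -1), repeat=n - 1)
)

lowest = energies[:50]
spacings = [hi - lo for lo, hi in zip(lowest, lowest[1:])]
print("ground-state energy:", lowest[0])
print("mean gap of the 50 lowest levels:", round(sum(spacings) / len(spacings), 1))
# A fuller test would histogram spacings / mean gap and compare with exp(-x).
```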