
    New Constructions of Zero-Correlation Zone Sequences

    In this paper, we propose three classes of systematic approaches for constructing zero-correlation zone (ZCZ) sequence families. In most cases, these approaches generate sequence families that achieve the upper bounds on the family size (K) and the ZCZ width (T) for a given sequence period (N). Our approaches can produce various binary and polyphase ZCZ families with the desired parameters (N, K, T) and alphabet size. They also provide additional trade-offs among these four system parameters and are less constrained by the alphabet size. Furthermore, the constructed families have a nested-like property: they can be decomposed or combined to constitute smaller or larger ZCZ sequence sets. We make detailed comparisons with related works and present some extended properties. For each approach, we provide examples to numerically illustrate the proposed construction procedure.
    Comment: 37 pages, submitted to IEEE Transactions on Information Theory
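
    To make the (N, K, T) parameters concrete, the sketch below builds a small ZCZ family by the classical method of cyclically shifting a perfect sequence (here a Frank sequence) by multiples of m, which is not one of the three constructions proposed in the paper, and verifies the resulting ZCZ width numerically:

```python
import numpy as np

def frank_sequence(q):
    # Perfect polyphase sequence of period N = q^2: s[i*q + j] = exp(2j*pi*i*j/q).
    i, j = np.meshgrid(np.arange(q), np.arange(q), indexing="ij")
    return np.exp(2j * np.pi * i * j / q).reshape(-1)

def periodic_corr(a, b, tau):
    # Periodic cross-correlation R_{a,b}(tau) = sum_n a[n] * conj(b[(n + tau) mod N]).
    return np.sum(a * np.conj(np.roll(b, -tau)))

def zcz_width(family, tol=1e-9):
    # Width T of the zero-correlation zone: cross-correlations vanish for
    # |tau| < T and out-of-phase autocorrelations vanish for 0 < |tau| < T.
    N = len(family[0])
    for T in range(1, N + 1):
        for ia, a in enumerate(family):
            for ib, b in enumerate(family):
                for tau in range(-(T - 1), T):
                    if ia == ib and tau == 0:
                        continue
                    if abs(periodic_corr(a, b, tau)) > tol:
                        return T - 1
    return N

# Family of K = 4 sequences of period N = 16: cyclic shifts of a perfect
# sequence by multiples of m = 4 give a ZCZ family of width T = m.
q = 4
p = frank_sequence(q)
family = [np.roll(p, k * q) for k in range(q)]
print(zcz_width(family))  # -> 4
```

    Just outside the zone the correlations are no longer zero (e.g. the cross-correlation of the first two family members at shift 4 has magnitude N), which is what caps the width at T = 4.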

    Design of sequences with good correlation properties

    This thesis is dedicated to exploring sequences with good correlation properties. Periodic sequences with desirable correlation properties have numerous applications in communications. Ideally, one would like to have a set of sequences whose out-of-phase auto-correlation magnitudes and cross-correlation magnitudes are very small, preferably zero. However, theoretical bounds show that the maximum magnitudes of auto-correlation and cross-correlation of a sequence set are mutually constrained, i.e., if a set of sequences possesses good auto-correlation properties, then the cross-correlation properties are not good, and vice versa. The design of sequence sets that achieve those theoretical bounds is therefore of great interest. In addition, instead of pursuing the least possible correlation values within an entire period, it is also interesting to investigate families of sequences with ideal correlation in a smaller zone around the origin. Such sequences are referred to as sequences with a zero correlation zone, or ZCZ sequences, and have been extensively studied due to their applications in 4G LTE and 5G NR systems, as well as quasi-synchronous code-division multiple-access communication systems. Paper I and a part of Paper II aim to construct sequence sets with low correlation within a whole period. Paper I presents a construction of sequence sets that meets the Sarwate bound. The construction builds a connection between generalised Frank sequences and combinatorial objects, circular Florentine arrays. The size of the sequence sets is determined by the existence of circular Florentine arrays of some order. Paper II further connects circular Florentine arrays to a unified construction of perfect polyphase sequences, which include generalised Frank sequences as a special case. The size of a sequence set that meets the Sarwate bound depends on a divisor of the period of the employed sequences, as well as the existence of circular Florentine arrays.
    Papers III-VI and a part of Paper II are devoted to ZCZ sequences. Papers II and III propose infinite families of ZCZ sequence sets that are optimal with respect to certain bounds and can be used to eliminate interference within a single cell of a cellular network. Papers V, VI and a part of Paper II focus on constructions of multiple optimal ZCZ sequence sets with favorable inter-set cross-correlation, which can be used in multi-user communication environments to minimize inter-cell interference. In particular, Paper II employs circular Florentine arrays and, in some cases, increases the number of optimal ZCZ sequence sets with the optimal inter-set cross-correlation property.
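
    The perfect polyphase sequences mentioned above are easy to check numerically. The sketch below uses the classical Frank construction (one special case of the generalised Frank sequences studied in Papers I and II; q = 5 is an illustrative choice) and verifies that all out-of-phase periodic autocorrelations vanish:

```python
import numpy as np

def frank_sequence(q):
    # Frank sequence of period N = q^2: s[i*q + j] = exp(2j*pi*i*j/q).
    i, j = np.meshgrid(np.arange(q), np.arange(q), indexing="ij")
    return np.exp(2j * np.pi * i * j / q).reshape(-1)

def periodic_autocorr(s, tau):
    # R(tau) = sum_n s[n] * conj(s[(n + tau) mod N])
    return np.sum(s * np.conj(np.roll(s, -tau)))

s = frank_sequence(5)                                              # period N = 25
peak = abs(periodic_autocorr(s, 0))                                # equals N
offpeak = max(abs(periodic_autocorr(s, t)) for t in range(1, 25))  # numerically zero
```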

    Improved calibration of the radii of cool stars based on 3D simulations of convection: implications for the solar model

    Main-sequence, solar-like stars (M < 1.5 Msun) have outer convective envelopes that are sufficiently thick to significantly affect their overall structure. The radii of these stars, in particular, are sensitive to the details of the inefficient, super-adiabatic convection occurring in their outermost layers. The standard treatment of convection in stellar evolution models, based on the Mixing-Length Theory (MLT), provides only a very approximate description of convection in the super-adiabatic regime. Moreover, it contains a free parameter, alpha_MLT, whose standard calibration is based on the Sun and is routinely applied to other stars, ignoring the differences in their global parameters (e.g., effective temperature, gravity, chemical composition) and previous evolutionary history. In this paper, we present a calibration of alpha_MLT based on three-dimensional radiation-hydrodynamics (3D RHD) simulations of convection. The value of alpha_MLT is adjusted to match the specific entropy in the deep, adiabatic layers of the convective envelope to the corresponding value obtained from the 3D RHD simulations, as a function of the position of the star in the (log g, log T_eff) plane and its chemical composition. We have constructed a model of the present-day Sun using this entropy-based calibration. We find that its past luminosity evolution is not affected by the entropy calibration. The predicted solar radius, however, exceeds that of the standard model during the past several billion years, resulting in a lower surface temperature. This illustrative calculation also demonstrates the viability of the entropy approach for calibrating the radii of other late-type stars.
    Comment: 16 pages, 14 figures, accepted for publication in the Astrophysical Journal
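
    Numerically, the entropy-matching step amounts to a one-dimensional root find: alpha_MLT is adjusted until the 1D model's adiabat entropy equals the value interpolated from the 3D RHD simulation grid. The sketch below is a toy illustration only; the monotone entropy(alpha) relation is made up, standing in for a full stellar-structure evaluation:

```python
def calibrate_alpha(entropy_model, s_target, lo=0.5, hi=3.0, tol=1e-10):
    # Bisection on the (assumed monotone) mapping alpha -> envelope entropy,
    # mimicking the entropy-based calibration described in the abstract.
    f = lambda a: entropy_model(a) - s_target
    assert f(lo) * f(hi) < 0, "target entropy not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for the 3D-RHD entropy tables: entropy of the deep
# adiabat decreases linearly with alpha_MLT (arbitrary units).
toy_entropy = lambda alpha: 2.0e9 - 3.0e8 * alpha
alpha = calibrate_alpha(toy_entropy, s_target=1.4e9)
print(round(alpha, 6))  # -> 2.0
```

    In the paper's setting the same search would be repeated at each point of the (log g, log T_eff, composition) grid, yielding alpha_MLT as a function of the star's position in that plane.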

    Efficient treatment and quantification of uncertainty in probabilistic seismic hazard and risk analysis

    The main goals of this thesis are the development of a computationally efficient framework for the stochastic treatment of various important uncertainties in probabilistic seismic hazard and risk assessment, its application to a newly created seismic risk model of Indonesia, and the analysis and quantification of the impact of these uncertainties on the distribution of estimated seismic losses for a large number of synthetic portfolios modeled after real-world counterparts. The treatment and quantification of uncertainty in probabilistic seismic hazard and risk analysis has already been identified as an area that could benefit from increased research attention. Furthermore, it has become evident that the lack of research considering the development and application of suitable sampling schemes to increase the computational efficiency of the stochastic simulation represents a bottleneck for applications where model runtime is an important factor. In this research study, the development and state of the art of probabilistic seismic hazard and risk analysis is first reviewed and opportunities for improved treatment of uncertainties are identified. A newly developed framework for the stochastic treatment of portfolio location uncertainty as well as ground motion and damage uncertainty is presented. The framework is then optimized with respect to computational efficiency. Amongst other techniques, a novel variance reduction scheme for portfolio location uncertainty is developed. Furthermore, in this thesis, some well-known variance reduction schemes such as Quasi-Monte Carlo, Latin Hypercube Sampling and MISER (locally adaptive recursive stratified sampling) are applied for the first time to seismic hazard and risk assessment. The effectiveness and applicability of all the schemes used are analyzed. Several chapters of this monograph describe the theory, implementation and some exemplary applications of the framework.
    To conduct these exemplary applications, a seismic hazard model for Indonesia was developed and used for the analysis and quantification of loss uncertainty for a large collection of synthetic portfolios. As part of this work, the new framework was integrated into a probabilistic seismic hazard and risk assessment software suite developed and used by Munich Reinsurance Group. Furthermore, those parts of the framework that deal with location and damage uncertainties are also used by the flood and storm natural catastrophe model development groups at Munich Reinsurance for their risk models.
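
    As a minimal illustration of why stratified schemes such as Latin Hypercube Sampling reduce variance, the sketch below compares plain Monte Carlo with a hand-rolled Latin Hypercube sampler on a toy integrand (the integrand and sample sizes are made up for illustration and have nothing to do with the seismic model itself):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # Each axis is cut into n equal strata; every stratum is hit exactly once,
    # with the strata paired across dimensions in random order.
    cells = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (cells + rng.random((n, d))) / n

def estimates(sampler, f, n, d, reps, rng):
    # Repeat the n-sample mean estimate `reps` times to measure its spread.
    return np.array([f(sampler(n, d, rng)).mean() for _ in range(reps)])

f = lambda x: (x ** 2).sum(axis=1)        # toy "loss" function, exact mean = d / 3
rng = np.random.default_rng(0)
mc  = estimates(lambda n, d, r: r.random((n, d)), f, 64, 2, 200, rng)
lhs = estimates(latin_hypercube, f, 64, 2, 200, rng)
# For this smooth integrand the stratified estimates scatter far less
# around the true value than the plain Monte Carlo ones.
```

    For nearly additive integrands the per-coordinate stratification makes the estimator variance fall much faster than the plain Monte Carlo 1/n rate, which is why such schemes matter when each model run is expensive.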

    Topological Sector Fluctuations and Curie Law Crossover in Spin Ice

    At low temperatures, a spin ice enters a Coulomb phase: a state with algebraic correlations and topologically constrained spin configurations. In Ho2Ti2O7, we have observed experimentally that this process is accompanied by a non-standard temperature evolution of the wave-vector-dependent magnetic susceptibility, as measured by neutron scattering. Analytical and numerical approaches reveal signatures of a crossover between two Curie laws, one characterizing the high-temperature paramagnetic regime, and the other the low-temperature topologically constrained regime, which we call the spin liquid Curie law. The theory is shown to be in excellent agreement with neutron scattering experiments. On a more general footing, (i) the existence of two Curie laws appears to be a general property of the emergent gauge field of a classical spin liquid; (ii) this crossover sheds light on the experimental difficulty of measuring a precise Curie-Weiss temperature in frustrated materials; and (iii) the mapping between gauge and spin degrees of freedom means that the susceptibility at finite wave vector can be used as a local probe of fluctuations among topological sectors.
    Comment: 10 pages, 5 figures

    Nonequilibrium Phase Diagram of a Driven-Dissipative Many-Body System

    We study the nonequilibrium dynamics of a many-body bosonic system on a lattice, subject to driving and dissipation. The time evolution is described by a master equation, which we treat within a generalized Gutzwiller mean-field approximation for density matrices. The dissipative processes are engineered such that, in the absence of interaction between the bosons, the system is driven into a homogeneous steady state with off-diagonal long-range order. We investigate how the coherent interaction qualitatively affects the properties of the steady state of the system and derive a nonequilibrium phase diagram featuring a phase transition into a steady state without long-range order. The phase diagram also exhibits an extended domain where an instability of the homogeneous steady state gives rise to a persistent density pattern with spontaneously broken translational symmetry. In the limit of small particle density, we provide a precise analytical description of the time evolution during the instability. Moreover, we investigate the transient following a quantum quench of the dissipative processes and elucidate the prominent role played by collective topological variables in this regime.
    Comment: 23 pages, 15 figures

    Methodological and empirical challenges in modelling residential location choices

    The modelling of residential locations is a key element in land use and transport planning. There are significant empirical and methodological challenges inherent in such modelling, however, despite recent advances both in the availability of spatial datasets and in computational and choice modelling techniques. One of the most important of these challenges concerns spatial aggregation. The housing market is characterised by the fact that it offers spatially and functionally heterogeneous products; as a result, if residential alternatives are represented as aggregated spatial units (as in conventional residential location models), the variability of dwelling attributes is lost, which may limit the predictive ability and policy sensitivity of the model. This thesis presents a modelling framework for residential location choice that addresses three key challenges: (i) the development of models at the dwelling-unit level, (ii) the treatment of spatial structure effects in such dwelling-unit level models, and (iii) problems associated with estimation in such modelling frameworks in the absence of disaggregated dwelling unit supply data. The proposed framework is applied to the residential location choice context in London. Another important challenge in the modelling of residential locations is the choice set formation problem. Most models of residential location choice have been developed on the assumption that households consider all available alternatives when making location choices. Due to the high search costs associated with the housing market, however, and the limited capacity of households to process information, the validity of this assumption has been an ongoing debate among researchers.
    There have been some attempts in the literature to incorporate the cognitive capacities of households within discrete choice models of residential location: for instance, by modelling households' choice sets exogenously based on simplifying assumptions regarding their spatial search behaviour (e.g., an anchor-based search strategy) and their characteristics. By undertaking an empirical comparison of alternative models within the context of residential location choice in the Greater London area, this thesis investigates the feasibility and practicality of applying deterministic choice set formation approaches to capture the underlying search process of households. The thesis also investigates the uncertainty of choice sets in residential location choice modelling and proposes a simplified probabilistic choice set formation approach to model choice sets and choices simultaneously. The dwelling-level modelling framework proposed in this research is practice-ready and can be used to estimate residential location choice models at the level of dwelling units without requiring independent and disaggregated dwelling supply data. The empirical comparison of alternative exogenous choice set formation approaches provides a guideline for modellers and land use planners to avoid inappropriate choice set formation approaches in practice. Finally, the proposed simplified choice set formation model can be applied to model the behaviour of households in online real estate environments.
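
    The choice set issue can be made concrete with a multinomial logit sketch, the workhorse model in this literature. The utilities below are hypothetical numbers, not estimates from the thesis; the point is how restricting a household's choice set reallocates probability among the remaining dwellings while preserving their odds ratios (the IIA property of the logit):

```python
import numpy as np

def mnl_probabilities(V, choice_set):
    # Multinomial logit restricted to the alternatives in `choice_set`:
    # P(i) = exp(V_i) / sum_{j in C} exp(V_j), and P(i) = 0 for i outside C.
    V = np.asarray(V, dtype=float)
    mask = np.zeros(len(V), dtype=bool)
    mask[list(choice_set)] = True
    e = np.where(mask, np.exp(V - V[mask].max()), 0.0)  # max-shift for stability
    return e / e.sum()

# Hypothetical systematic utilities for five dwellings (e.g. built from
# price, floor area and commute-time coefficients).
V = [1.2, 0.4, -0.3, 0.9, 0.1]
p_full = mnl_probabilities(V, range(5))          # household considers everything
p_restricted = mnl_probabilities(V, [0, 1, 3])   # household only searches three
```

    Exogenous choice set formation amounts to choosing the `choice_set` argument per household from assumptions about its search behaviour; the probabilistic approach proposed in the thesis instead treats that set itself as latent and models it jointly with the choice.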