    Coverage optimization and power reduction in SFN using simulated annealing

    This paper proposes an approach that predicts propagation, models terrestrial receivers, and optimizes the performance of single frequency networks (SFN) for digital video broadcasting in terms of the final coverage achieved over a geographical region, with emphasis on the most populated areas. The effective coverage improvement, and thus the reduction of self-interference in the SFN, is accomplished by optimizing the internal static delays, sector antenna gain, and azimuth and elevation orientation of every transmitter in the network using the heuristic simulated annealing (SA) algorithm. Decimation and elevation filtering techniques are applied to reduce the computational cost of the SA-based approach, and results demonstrating the improvements achieved are included. Representative results for two SFNs in different scenarios, considering the effect on the final coverage of optimizing each of the transmitter parameters outlined above or a combination of them, are reported and discussed to show both the performance of the method and how gradually increasing the complexity of the transmitter model leads to more realistic and accurate results. This work was supported by the Spanish Ministry of Science and Innovation under Projects TEC2008-02730 and TEC2012-33321. The work of M. Lanza and Á. L. Gutiérrez was supported by a Pre-Doctoral Grant from the University of Cantabria
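    The optimization loop described in the abstract can be sketched generically. The snippet below is a minimal, illustrative simulated-annealing skeleton, not the authors' implementation; the quadratic cost over per-transmitter static delays is a hypothetical stand-in for the real coverage objective.

    ```python
    import math
    import random

    def simulated_annealing(initial, neighbour, cost, t0=1.0, t_min=1e-3,
                            alpha=0.95, iters_per_t=50):
        """Generic SA loop: accepts worse solutions with probability
        exp(-delta/T), letting the search escape local optima."""
        current, current_cost = initial, cost(initial)
        best, best_cost = current, current_cost
        t = t0
        while t > t_min:
            for _ in range(iters_per_t):
                cand = neighbour(current)
                delta = cost(cand) - current_cost
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, current_cost = cand, current_cost + delta
                    if current_cost < best_cost:
                        best, best_cost = current, current_cost
            t *= alpha  # geometric cooling schedule
        return best, best_cost

    # Toy stand-in for the coverage objective: one static delay per
    # transmitter, with an arbitrary target vector playing the role of the
    # optimal configuration (the real objective is the predicted coverage).
    target = [12.0, -3.0, 7.5]
    cost = lambda delays: sum((d - t) ** 2 for d, t in zip(delays, target))
    neighbour = lambda delays: [d + random.gauss(0, 0.5) for d in delays]

    random.seed(0)
    best, best_cost = simulated_annealing([0.0, 0.0, 0.0], neighbour, cost)
    ```

    In the paper the decision variables also include antenna gains and azimuth/elevation orientations; only the `cost` and `neighbour` callables would change.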

    Design of a DVB-T2 simulation platform and network optimization with Simulated Annealing

    The implementation of Digital Terrestrial Television is becoming a reality in Spain. Together with satellite and cable systems, this technology is one of the possible media for television signal transmission, and its development is crucial for the digital transition in countries that depend mainly on terrestrial networks for the reception of multimedia content. However, given the maturity of the current standard and the growing requirements of customers (HDTV, new content, etc.), a revision of the standard has become necessary. The DVB organisation, in collaboration with other entities and organisms, has developed a new version of the standard capable of satisfying those requirements. The main objective of the project is the design and implementation of a physical-layer simulation platform for the DVB-T2 standard. This simulator allows the theoretical evaluation of the newly proposed enhancements, easing a later field-measurement stage and the future network deployment. The document describes the implementation of the simulation platform and its subsequent validation, including extensive graphical results that allow the improvements over the current standard (DVB-T) to be evaluated and quantified. As a future line of research, a solution for the future DVB-T2 network deployment is presented, enhancing the coverage capacity of the current network through iterative meta-heuristic techniques. Finally, it should be mentioned that this work was performed within the context of FURIA, a strategic research project funded by the Spanish Ministry of Industry, Tourism and Commerce

    Planning Large Single Frequency Networks for DVB-T2

    The final coverage and associated performance of an SFN is a joint result of the properties of all transmitters in the SFN. Due to the large number of parameters involved, finding the right configuration is quite complex. The purpose of this paper is to find optimal SFN configurations for DVB-T2. Offering more system-parameter options than its predecessor DVB-T, DVB-T2 allows large SFNs. However, self-interference in SFNs imposes restrictions on the maximum inter-transmitter distance and the network size. In order to make optimum use of the spectrum, the same frequency can be reused over different geographical areas, beyond the reuse distance needed to avoid co-channel interference. In this paper, a methodology based on theoretical network models is proposed. A number of network architectures and reference models are considered for different reception modes in order to study the effects of key planning factors on the maximum SFN size and minimum reuse distance. The results show that maximum bitrate, network size and reuse distance are closely related. In addition, it has been found that the guard interval is not the only limiting parameter and that its impact depends strongly on the rest of the DVB-T2 mode parameters as well as on the network characteristics (Equivalent Radiated Power, effective height, inter-transmitter distance). Assuming C/N requirements in the vicinity of 20 dB and bitrates over 30 Mbps, it has been found that the network can be as large as 360 x 360 km (delivering 39.2 Mbps) or even 720 x 720 km (delivering 37.5 Mbps). The reuse distance also has a complex dependency on the DVB-T2 mode and especially on the network parameters, ranging from below 100 km to 300 km. This work has been financially supported by Beihang University, IRT, the University of the Basque Country UPV/EHU (UFI 11/30 and the program for the specialization of postdoctoral researcher staff) and by the Spanish Ministry of Economy and Competitiveness under the project HEDYT-GBB (TEC2012-33302)
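    The guard-interval limit discussed above can be checked with a back-of-the-envelope computation. The figures below use standard DVB-T2 values (the 32K FFT mode in an 8 MHz channel has a useful symbol duration of 3584 µs); the script converts guard-interval fractions into the maximum path-length difference over which echoes from other SFN transmitters still add constructively.

    ```python
    # Guard-interval limit on SFN transmitter spacing: echoes arriving within
    # the guard interval Tg contribute constructively, so c * Tg bounds the
    # usable path-length difference between transmitters.
    C = 299_792_458.0        # speed of light, m/s
    TU_32K_8MHZ = 3584e-6    # useful symbol duration, 32K FFT / 8 MHz channel, s

    def guard_distance_km(gi_fraction, tu=TU_32K_8MHZ):
        """Maximum path-length difference (km) absorbed by a guard interval."""
        tg = gi_fraction * tu
        return C * tg / 1000.0

    for gi in (1 / 16, 1 / 8, 19 / 128):
        print(f"GI {gi:.5f}: Tg = {gi * TU_32K_8MHZ * 1e6:.0f} us, "
              f"max echo distance = {guard_distance_km(gi):.0f} km")
    ```

    The longest DVB-T2 guard interval in 32K mode (19/128, i.e. 532 µs) absorbs echoes over roughly 160 km, which is why networks of 360 km and beyond rely on many transmitters spaced well below that limit, consistent with the paper's point that the guard interval is only one of several limiting parameters.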

    COST EFFICIENT PROVISIONING OF MASS MOBILE MULTIMEDIA SERVICES IN HYBRID CELLULAR AND BROADCASTING SYSTEMS

    One of the challenges facing the mobile and wireless communications industry is to provide mass multimedia services at low cost, making them affordable for users and profitable for operators. The most representative service is mobile TV, which is expected to be a key application in future mobile networks. Current cellular networks cannot support large-scale consumption of this kind of service, and the new mobile broadcasting networks are very costly to deploy because of the large investment in network infrastructure required to provide acceptable coverage levels. This doctoral thesis addresses the problem of efficiently providing mass multimedia services to mobile and portable devices using the existing broadcasting and cellular infrastructure. The thesis considers the state-of-the-art commercial technologies for mobile broadcasting (DVB-H) and for cellular networks (3G+ networks with HSDPA and MBMS), although it focuses mainly on DVB-H. The main paradigm proposed for providing mass multimedia services at low cost is to avoid deploying a DVB-H network with high capacity and coverage from the outset. Instead, a progressive deployment of the DVB-H infrastructure following user demand is proposed. In this context, the cellular network is fundamental to avoid over-dimensioning the DVB-H network in capacity, and also in areas with a low density of users, until the deployment of a DVB-H transmitter or repeater becomes necessary. As the main technological solution, the thesis proposes multi-burst coding in DVB-H using Raptor codes. The objective is to exploit the time diversity of the mobile channel to increase the robustness of the signal and, therefore, the coverage level, at the cost of increasing the latency of the network. Gómez Barquero, D. (2009).
    COST EFFICIENT PROVISIONING OF MASS MOBILE MULTIMEDIA SERVICES IN HYBRID CELLULAR AND BROADCASTING SYSTEMS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/6881
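    The multi-burst coding idea at the end of the abstract trades latency for time diversity. Raptor codes are beyond a short sketch, so the toy below substitutes a single XOR parity burst over a group of time-sliced bursts; it is only a simplified stand-in, but it shows the same trade-off: one burst lost to a deep fade can be rebuilt, provided the receiver buffers the whole group before decoding.

    ```python
    # Simplified stand-in for multi-burst application-layer FEC (the thesis
    # uses Raptor codes; here one XOR parity burst protects a burst group).
    # Recovering an erased burst requires buffering the entire group, which is
    # the latency cost of the added robustness.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode_group(bursts):
        """Append one parity burst = XOR of all source bursts (equal length)."""
        parity = bursts[0]
        for b in bursts[1:]:
            parity = xor_bytes(parity, b)
        return bursts + [parity]

    def recover(received):
        """received: burst group with exactly one erased burst marked None."""
        missing = received.index(None)
        acc = None
        for i, b in enumerate(received):
            if i != missing:
                acc = b if acc is None else xor_bytes(acc, b)
        out = list(received)
        out[missing] = acc
        return out[:-1]  # drop the parity burst, return the source bursts

    bursts = [b"burst-0!", b"burst-1!", b"burst-2!"]
    tx = encode_group(bursts)
    tx[1] = None          # burst 1 lost in a deep fade
    assert recover(tx) == bursts
    ```

    A real Raptor code generalizes this: it produces many repair symbols so that any sufficiently large subset of received symbols suffices, rather than tolerating exactly one erasure per group.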

    An approach for the design of infrastructure mode indoor WLAN based on ray tracing and a binary optimizer

    This paper presents an approach that combines a ray tracing tool with a binary version of the particle swarm optimization method (BPSO) for the design of infrastructure-mode indoor wireless local area networks (WLAN). The approach uses the power levels of a set of candidate access point (AP) locations, obtained with the ray tracing tool at a mesh of potential receiver locations or test points, to allow the BPSO optimizer to carry out the design of the WLAN. For this purpose, several restrictions are imposed through a fitness function that drives the search towards the selection of a reduced number of AP locations and their channel assignments, while keeping transmission power levels low. During the design, different coverage-priority areas can be defined, and signal-to-interference ratio (SIR) levels are kept as high as possible in order to comply with the imposed Quality of Service (QoS) requirements. The performance of this approach in a real scenario at the authors' premises is reported, showing its usefulness. This work was supported by the Spanish Ministry of Science and Innovation (TEC2008-02730) and the Spanish Ministry of Economy and Competitiveness (TEC2012-33321)
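    The binary PSO driving the AP selection can be sketched as follows. This is a generic BPSO skeleton, not the authors' tool: velocities stay real-valued and are squashed through a sigmoid into per-bit probabilities, and the tiny coverage matrix and fitness weights are invented for illustration (the paper's fitness additionally handles channels, power levels and SIR).

    ```python
    import math
    import random

    # Minimal binary PSO (BPSO): each particle is a bit vector (AP on/off);
    # the sigmoid of the velocity gives the probability of each bit being 1.
    def bpso(fitness, n_bits, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
        random.seed(1)
        pos = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(n_particles)]
        vel = [[0.0] * n_bits for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_f = [fitness(p) for p in pos]
        g = max(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(n_bits):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    prob = 1.0 / (1.0 + math.exp(-vel[i][d]))  # sigmoid transfer
                    pos[i][d] = 1 if random.random() < prob else 0
                f = fitness(pos[i])
                if f > pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], f
                    if f > gbest_f:
                        gbest, gbest_f = pos[i][:], f
        return gbest, gbest_f

    # Hypothetical scenario: 5 candidate APs, 4 test points; covers[a][t] == 1
    # if AP a reaches test point t. Reward coverage, penalize each active AP.
    covers = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1],
              [1, 0, 0, 1], [1, 1, 1, 1]]
    def fitness(bits):
        covered = sum(any(bits[a] and covers[a][t] for a in range(5))
                      for t in range(4))
        return covered - 0.3 * sum(bits)

    best, best_f = bpso(fitness, n_bits=5)
    ```

    In the paper the ray-traced power levels at the test points would replace the binary `covers` matrix, and the penalty terms would encode the channel-assignment and SIR constraints.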

    Integrating complex and diverse spatial datasets: Applications to hydrogeophysics

    Around the world, groundwater is vital for humankind, yet threatened by anthropogenic deterioration. Sustainable groundwater management is thus crucial and relies on numerical models for adequately forecasting groundwater conditions. A key step in the construction of such models is to determine hydrogeological parameters that describe the behaviour of the water in the aquifer (i.e. the water-bearing rock). In recent years, the increasing application of geophysical techniques to characterize the aquifer has improved this endeavour. However, the integration of geophysical data with hydrogeological parameters remains a challenging task. This is due to fundamental differences in coverage and resolution as well as the complex and non-unique interrelation of the measured parameters. Geostatistics have proven to be a useful framework to integrate these spatial datasets. This thesis takes a broader perspective to address this topic, which can be summarized as: “developing computationally efficient and accurate methodologies for the integration of spatial datasets, which are variable in terms of coverage and resolution, and related through complex, site-dependent and/or non-unique relationships”. The contributions presented in this thesis can be partitioned into three stages. The initial stage is concerned with the first part of the problem statement, taking a more general and theoretical geostatistical approach. More specifically, it aims to improve the efficiency and accuracy of Sequential Gaussian Simulation (SGS), which is a widely used geostatistical method employed to generate Gaussian fields. It populates a grid by consecutively visiting each node and sampling a value in a local conditional distribution. In the first project, we look at the impact of the type of simulation path, that is, the strategy defining the order in which the nodes are simulated.
It is shown that declustering paths, which maximize the distance between consecutively simulated nodes, present the best reproduction of spatial structure. The second project assesses the computational gain and resulting biases of using a constant path for multiple realizations. Results show that these biases are minimal and easily surpassed by the high computational gains, which in turn allow for increasing the neighbourhood size and thus reducing the overall magnitude of biases. In a second stage, an improved version of Bayesian Sequential Simulation (BSS) is proposed. BSS integrates a known secondary variable in the stochastic simulation of a primary variable. The method is based on SGS with the addition that, for each simulated node, the conditional distribution is combined with a distribution coming from the known value of the collocated secondary variable. Our proposition is to generalize this combination by assigning a log-linear weight to each distribution. A key novelty of this work is to design a weighting scheme that adapts its values along the simulation to account for the variation of dependence between both sources of information. Tests are performed for a hydrogeophysical case study consisting of simulating hydraulic conductivity using surface-based electrical resistivity tomography as the secondary variable. This case study shows that the proposed weighting scheme considerably improves the realizations in terms of reproducing the spatial structure while maintaining a good agreement between primary and secondary variables. In the third and final stage, we develop a methodology capable of downscaling tomographic images resulting from smoothness-constrained inversions of geophysical data. The key idea is to use the resolution matrix, computed during the inversion, to quantify the smoothing of the tomogram through a linear mapping. 
    Using area-to-point kriging, it is then possible to simulate fine-scale realizations of electrical conductivity constrained to the tomogram through the previously computed linear mapping. The method developed is able to provide multiple realizations at a relatively low computational cost. These realizations accurately reproduce the spatial structure and the correspondence to the tomogram
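    The sequential simulation procedure the thesis builds on (visit each node along a path, sample from the local conditional distribution given the nodes already simulated) can be sketched in one dimension. The snippet below is a minimal SGS with an exponential covariance model; for simplicity it conditions on all previously simulated nodes rather than a limited neighbourhood, which is exactly the cost that path-design and constant-path strategies aim to manage.

    ```python
    import math
    import random

    def cov(h, rng=10.0):
        """Exponential covariance model with unit sill."""
        return math.exp(-abs(h) / rng)

    def solve(a, b):
        """Gauss-Jordan elimination (with pivoting) for the small kriging
        system a @ w = b."""
        n = len(b)
        m = [row[:] + [b[i]] for i, row in enumerate(a)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(n):
                if r != col:
                    f = m[r][col] / m[col][col]
                    m[r] = [x - f * y for x, y in zip(m[r], m[col])]
        return [m[i][n] / m[i][i] for i in range(n)]

    def sgs_1d(n_nodes, rng=10.0, seed=0):
        """1-D sequential Gaussian simulation on a regular grid."""
        random.seed(seed)
        path = list(range(n_nodes))
        random.shuffle(path)                  # random simulation path
        values = {}
        for node in path:
            known = sorted(values)            # previously simulated nodes
            if not known:
                mean, var = 0.0, 1.0          # unconditional N(0, 1)
            else:
                # Simple kriging: weights from C w = c0, then conditional
                # mean and variance at the current node.
                a = [[cov(i - j, rng) for j in known] for i in known]
                b = [cov(node - j, rng) for j in known]
                w = solve(a, b)
                mean = sum(wi * values[j] for wi, j in zip(w, known))
                var = max(1e-9, 1.0 - sum(wi * bi for wi, bi in zip(w, b)))
            values[node] = random.gauss(mean, math.sqrt(var))
        return [values[i] for i in range(n_nodes)]

    field = sgs_1d(25)
    ```

    Conditioning on every previous node makes each step a growing linear solve; production codes restrict the conditioning set to a neighbourhood, which is what makes the path type and the reuse of a constant path across realizations matter.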