
    Selective AP-sequence Based Indoor Localization without Site Survey

    Full text link
    In this paper, we propose an indoor localization system that employs an ordered sequence of access points (APs), ranked by received signal strength (RSS). Unlike existing indoor localization systems, our approach does not require a time-consuming and laborious site-survey phase to characterize the radio signals in the environment. Instead, we construct the fingerprint map by partitioning the layout of the area of interest into regions using only the known positions of the APs. This can be done offline within a second and has potential for practical use. Localization is then achieved by matching the observed AP sequence to the sequences stored in the fingerprint map. Unlike traditional fingerprinting, which uses information from all APs, we use only a selected subset of APs to perform localization because, without a site survey, the probability of obtaining the correct AP sequence decreases as more APs are involved. Experimental results show that the proposed system localizes to within 5 m in 50% to 60% of cases (in terms of the cumulative distribution function, CDF), depending on the density of APs. Furthermore, we observe that using all APs does not necessarily give the best accuracy; in our case, using 4 of the 7 available APs performs best. In practice, the number of APs used for localization should be treated as a design parameter based on the placement of the APs.
    Comment: VTC2016-Spring, 15-18 May 2016, Nanjing, China
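
    The core matching idea can be sketched roughly as follows; this is a minimal illustration rather than the paper's implementation, and the region fingerprints, AP names and prefix-based similarity rule are assumptions made up for the example:

    # Illustrative sketch: rank APs by RSS, keep the k strongest, and match the
    # resulting ordered AP sequence against per-region sequences in a fingerprint map.
    # All APs, regions and RSS values below are hypothetical.
    def ap_sequence(rss_readings, k):
        """Return the k strongest APs, ordered by descending RSS."""
        ranked = sorted(rss_readings.items(), key=lambda kv: kv[1], reverse=True)
        return tuple(ap for ap, _ in ranked[:k])

    def longest_common_prefix(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def localize(rss_readings, fingerprint_map, k=4):
        """Pick the region whose stored AP sequence best matches the observed sequence."""
        observed = ap_sequence(rss_readings, k)
        return max(fingerprint_map,
                   key=lambda region: longest_common_prefix(observed, fingerprint_map[region]))

    # Hypothetical fingerprint map, built offline from AP positions only.
    fingerprint_map = {
        "region_1": ("AP3", "AP1", "AP2", "AP5"),
        "region_2": ("AP2", "AP4", "AP3", "AP1"),
    }
    print(localize({"AP1": -55, "AP2": -70, "AP3": -48, "AP4": -90, "AP5": -75}, fingerprint_map))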

    Creation of regions for dialect features using a cellular automaton

    Get PDF
    An issue in dialect research has been how to make generalizations from survey data about where some dialect feature might be found. Pre-computational methods included drawing isoglosses or using shadings to indicate areas where an analyst expected a feature to be found. The use of computers allowed for faster plotting of locations where any given feature had been elicited, and also allowed for the use of statistical techniques from technical geography to estimate regions where particular features might be found. However, using the computer did not make the analysis less subjective than isoglosses, and statistical methods from technical geography have turned out to be limited in use. We have prepared a cellular automaton (CA) for use with data collected for the Linguistic Atlas Project that can address the problems involved in this type of data visualization. The CA plots the locations where survey data was elicited, and then through the application of rules creates an estimate of the spatial distributions of selected features. The application of simple rules allows the CA to create objective and reproducible estimates based on the data it was given, without the use of statistical methods.
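
    As a rough, self-contained illustration of the approach (not the rules actually used with the Linguistic Atlas Project data), the sketch below seeds a toy grid with hypothetical elicitation sites and grows an estimated feature region with a simple neighbour-count rule; the grid size, seed points, rule threshold and number of iterations are all assumptions:

    # Illustrative cellular automaton: cells where a dialect feature was elicited are
    # seeded as 1; a simple neighbour rule then grows an estimated feature region.
    import numpy as np

    def step(grid):
        """One CA update: an empty cell adopts the feature if >= 3 of its 8 neighbours have it."""
        padded = np.pad(grid, 1)
        neighbours = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))[1:-1, 1:-1]
        return np.where((grid == 0) & (neighbours >= 3), 1, grid)

    grid = np.zeros((20, 20), dtype=int)
    for y, x in [(5, 5), (5, 6), (6, 5), (12, 14), (13, 14), (13, 15)]:  # hypothetical elicitation sites
        grid[y, x] = 1

    for _ in range(5):  # apply the rule a fixed number of times
        grid = step(grid)
    print(grid.sum(), "cells estimated to have the feature")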

    Automatic Processing of High-Rate, High-Density Multibeam Echosounder Data

    Get PDF
    Multibeam echosounders (MBES) are currently the best way to determine the bathymetry of large regions of the seabed with high accuracy. They are becoming the standard instrument for hydrographic surveying and are also used in geological studies, mineral exploration and scientific investigation of the earth's crustal deformations and life cycle. The greatly increased data density provided by an MBES has significant advantages in accurately delineating the morphology of the seabed, but comes with the attendant disadvantage of having to handle and process a much greater volume of data. Current data processing approaches typically involve (computer-aided) human inspection of all data, with time-consuming and subjective assessment of all data points. As data rates increase with each new generation of instrument and required turn-around times decrease, manual approaches become unwieldy and automatic methods of processing essential. We propose a new method for automatically processing MBES data that attempts to address concerns of efficiency, objectivity, robustness and accuracy. The method attributes each sounding with an estimate of vertical and horizontal error, and then uses a model of information propagation to transfer information about the depth from each sounding to its local neighborhood. Embedded in the survey area are estimation nodes that aim to determine the true depth at an absolutely defined location, along with its associated uncertainty. As soon as soundings are made available, the nodes independently assimilate propagated information to form depth hypotheses, which are then tracked and updated on-line as more data are gathered. Consequently, we can extract at any time a “current-best” estimate for all nodes, plus co-located uncertainties and other metrics. The method can assimilate data from multiple surveys, multiple instruments or repeated passes of the same instrument in real time as data are being gathered. The data assimilation scheme is sufficiently robust to deal with typical survey echosounder errors. Robustness is improved by pre-conditioning the data and allowing the depth model to be incrementally defined. A model monitoring scheme ensures that inconsistent data are maintained as separate but internally consistent depth hypotheses. A disambiguation of these competing hypotheses is only carried out when required by the user. The algorithm has a low memory footprint, runs faster than data can currently be gathered, and is suitable for real-time use. We call this algorithm CUBE (Combined Uncertainty and Bathymetry Estimator). We illustrate CUBE on two data sets gathered in shallow water with different instruments and for different purposes. We show that the algorithm is robust to even gross failure modes, and reliably processes the vast majority of the data. In both cases, we confirm that the estimates made by CUBE are statistically similar to those generated by hand.
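
    The node-level behaviour can be sketched roughly as follows; this is a minimal illustration of hypothesis tracking with a variance-weighted update and a fixed 3-sigma gate, both of which are simplifying assumptions rather than CUBE's actual update and monitoring rules:

    # Illustrative sketch of one estimation node: each incoming sounding either updates
    # an existing depth hypothesis (inverse-variance weighting) or spawns a new one.
    class EstimationNode:
        def __init__(self, gate=3.0):
            self.hypotheses = []  # list of (depth_estimate, variance, n_soundings)
            self.gate = gate      # gating threshold in standard deviations (assumed)

        def assimilate(self, depth, variance):
            for i, (d, v, n) in enumerate(self.hypotheses):
                if abs(depth - d) <= self.gate * (v + variance) ** 0.5:
                    # Consistent with this hypothesis: combine by inverse-variance weighting.
                    w = 1.0 / v + 1.0 / variance
                    self.hypotheses[i] = ((d / v + depth / variance) / w, 1.0 / w, n + 1)
                    return
            # Inconsistent with every existing hypothesis: keep it as a separate one.
            self.hypotheses.append((depth, variance, 1))

        def current_best(self):
            # Simple disambiguation: prefer the hypothesis supported by the most soundings.
            return max(self.hypotheses, key=lambda h: h[2])

    node = EstimationNode()
    for depth, var in [(25.1, 0.04), (25.0, 0.04), (31.7, 0.04), (25.2, 0.04)]:  # made-up soundings
        node.assimilate(depth, var)
    print(node.current_best())  # best depth estimate, its variance, and supporting sounding count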

    Methodological and empirical challenges in modelling residential location choices

    No full text
    The modelling of residential locations is a key element in land use and transport planning. There are significant empirical and methodological challenges inherent in such modelling, however, despite recent advances both in the availability of spatial datasets and in computational and choice modelling techniques. One of the most important of these challenges concerns spatial aggregation. The housing market is characterised by the fact that it offers spatially and functionally heterogeneous products; as a result, if residential alternatives are represented as aggregated spatial units (as in conventional residential location models), the variability of dwelling attributes is lost, which may limit the predictive ability and policy sensitivity of the model. This thesis presents a modelling framework for residential location choice that addresses three key challenges: (i) the development of models at the dwelling-unit level, (ii) the treatment of spatial structure effects in such dwelling-unit level models, and (iii) problems associated with estimation in such modelling frameworks in the absence of disaggregated dwelling unit supply data. The proposed framework is applied to the residential location choice context in London. Another important challenge in the modelling of residential locations is the choice set formation problem. Most models of residential location choices have been developed based on the assumption that households consider all available alternatives when they are making location choices. Due to the high search costs associated with the housing market, however, and the limited capacity of households to process information, the validity of this assumption has been an on-going debate among researchers. There have been some attempts in the literature to incorporate the cognitive capacities of households within discrete choice models of residential location: for instance, by modelling households’ choice sets exogenously based on simplifying assumptions regarding their spatial search behaviour (e.g., an anchor-based search strategy) and their characteristics. By undertaking an empirical comparison of alternative models within the context of residential location choice in the Greater London area, this thesis investigates the feasibility and practicality of applying deterministic choice set formation approaches to capture the underlying search process of households. The thesis also investigates the uncertainty of choice sets in residential location choice modelling and proposes a simplified probabilistic choice set formation approach to model choice sets and choices simultaneously. The dwelling-level modelling framework proposed in this research is practice-ready and can be used to estimate residential location choice models at the level of dwelling units without requiring independent and disaggregated dwelling supply data. The empirical comparison of alternative exogenous choice set formation approaches provides a guideline for modellers and land use planners to avoid inappropriate choice set formation approaches in practice. Finally, the proposed simplified choice set formation model can be applied to model the behaviour of households in online real estate environments.
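
    As a rough illustration of the kind of model discussed (not the thesis's actual specification), the sketch below computes multinomial logit probabilities over individual dwellings, with each dwelling's probability of entering the household's choice set applied as an availability weight; the utility form, coefficients and availability rule are all made-up assumptions:

    # Illustrative sketch: multinomial logit over dwellings with a simplified
    # probabilistic choice-set term (an availability weight per dwelling).
    import math

    def choice_probabilities(dwellings, beta, availability):
        """P(i) proportional to availability(i) * exp(V_i)."""
        scores = {}
        for d_id, attrs in dwellings.items():
            v = sum(beta[k] * attrs[k] for k in beta)         # systematic utility V_i
            scores[d_id] = availability(attrs) * math.exp(v)  # weight by P(i is in the choice set)
        total = sum(scores.values())
        return {d_id: s / total for d_id, s in scores.items()}

    # Hypothetical dwellings: price in 100k GBP, commute time in hours.
    dwellings = {
        "dwelling_A": {"price": 3.5, "commute": 0.5},
        "dwelling_B": {"price": 2.8, "commute": 1.2},
    }
    beta = {"price": -0.6, "commute": -1.1}
    # Toy availability rule: dwellings with longer commutes are less likely to be searched at all.
    availability = lambda attrs: 1.0 / (1.0 + attrs["commute"])
    print(choice_probabilities(dwellings, beta, availability))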

    Exit polling and racial bloc voting: Combining individual-level and R×C ecological data

    Full text link
    Despite its shortcomings, cross-level or ecological inference remains a necessary part of some areas of quantitative inference, including in United States voting rights litigation. Ecological inference suffers from a lack of identification that, most agree, is best addressed by incorporating individual-level data into the model. In this paper we test the limits of such an incorporation by attempting it in the context of drawing inferences about racial voting patterns using a combination of an exit poll and precinct-level ecological data; accurate information about racial voting patterns is needed to assess triggers in voting rights laws that can determine the composition of United States legislative bodies. Specifically, we extend and study a hybrid model that addresses two-way tables of arbitrary dimension. We apply the hybrid model to an exit poll we administered in the City of Boston in 2008. Using the resulting data as well as simulation, we compare the performance of a pure ecological estimator, pure survey estimators using various sampling schemes, and our hybrid. We conclude that the hybrid estimator offers substantial benefits by enabling substantive inferences about voting patterns not practicably available without its use.
    Comment: Published at http://dx.doi.org/10.1214/10-AOAS353 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
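
    As a schematic of the general hybrid idea (an illustration of how such likelihoods are typically combined, not the paper's exact model), let $\beta^{p}_{rc}$ denote the unknown fraction of row group $r$ in precinct $p$ voting for column $c$, $X^{p}_{r}$ the known group share, $T^{p}_{c}$ the observed precinct vote margin, and $n^{p}_{rc}$ the exit-poll counts in polled precincts; the two data sources then enter a single likelihood:

    \[
    L(\beta) \;=\; \prod_{p \,\in\, \text{polled}} \prod_{r=1}^{R} \mathrm{Mult}\bigl(n^{p}_{r1},\dots,n^{p}_{rC} \,\big|\, \beta^{p}_{r1},\dots,\beta^{p}_{rC}\bigr)
    \;\times\;
    \prod_{p=1}^{P} \Pr\bigl(T^{p}_{1},\dots,T^{p}_{C} \,\big|\, X^{p}_{1},\dots,X^{p}_{R},\, \beta^{p}\bigr)
    \]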

    Error Estimation of Bathymetric Grid Models Derived from Historic and Contemporary Data Sets

    Get PDF
    The past century has seen remarkable advances in technologies associated with positioning and the measurement of depth. Lead lines have given way to single beam echo sounders, which in turn are being replaced by multibeam sonars and other means of remotely and rapidly collecting dense bathymetric datasets. Sextants were replaced by radio navigation, then transit satellite, GPS and now differential GPS. With each new advance comes tremendous improvement in the accuracy and resolution of the data we collect. Given these changes and given the vastness of the ocean areas we must map, the charts we produce are mainly compilations of multiple data sets collected over many years and representing a range of technologies. Yet despite our knowledge that the accuracy of the various technologies differs, our compilations have traditionally treated each sounding with equal weight. We address these issues in the context of generating regularly spaced grids containing bathymetric values. Gridded products are required for a number of earth science studies, and for generating the grid we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in bathymetric compilations. Traditionally, the hydrographic community has considered each sounding equally accurate and there has been no error evaluation of the bathymetric end product. This has important implications for use of the gridded bathymetry, especially when it is used for generating further scientific interpretations. In this paper we approach the problem of assessing the confidence of the final gridded bathymetric product via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata, to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata and, when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are next re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard deviation estimates. Finally, we repeat the entire random estimation process and analyze each run’s standard deviation grids in order to examine sampling bias and standard error in the predictions. The final products of the estimation are a collection of standard deviation grids, which we combine with the source data density in order to create a grid that contains information about the bathymetric model’s reliability.
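
    A minimal sketch of the direct-simulation Monte Carlo loop described above, assuming hypothetical soundings, a crude placeholder gridding routine and an arbitrary number of realizations (none of these values or choices come from the paper, which uses a more complex interpolation scheme):

    # Illustrative Monte Carlo error propagation for a bathymetric grid: perturb each
    # sounding by its a priori depth uncertainty, re-grid each realization, and summarize
    # the per-cell spread of the gridded depths. Cells with no data remain NaN.
    import numpy as np

    rng = np.random.default_rng(0)

    # (x, y, depth, a priori depth std. dev.) -- hypothetical source soundings
    soundings = np.array([
        [0.2, 0.1, 102.0, 0.5],   # single-beam sounding with metadata: small variance
        [0.8, 0.3,  98.5, 5.0],   # spot sounding without metadata: worst-case variance
        [0.5, 0.9, 110.0, 2.0],   # point digitized from a contour
    ])

    def grid_depths(points, n_cells=4):
        """Placeholder gridding: average the soundings falling in each cell of a coarse grid."""
        grid = np.full((n_cells, n_cells), np.nan)
        counts = np.zeros((n_cells, n_cells))
        ix = np.minimum((points[:, 0] * n_cells).astype(int), n_cells - 1)
        iy = np.minimum((points[:, 1] * n_cells).astype(int), n_cells - 1)
        for i, j, z in zip(iy, ix, points[:, 2]):
            grid[i, j] = z if counts[i, j] == 0 else grid[i, j] + z
            counts[i, j] += 1
        return np.where(counts > 0, grid / np.maximum(counts, 1), np.nan)

    N_RUNS = 200
    realizations = []
    for _ in range(N_RUNS):
        perturbed = soundings.copy()
        perturbed[:, 2] += rng.normal(0.0, soundings[:, 3])  # noise scaled by each sounding's error model
        realizations.append(grid_depths(perturbed))

    std_grid = np.nanstd(np.stack(realizations), axis=0)  # per-cell standard deviation grid
    print(np.round(std_grid, 2))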