
    AoA-aware Probabilistic Indoor Location Fingerprinting using Channel State Information

    With the expeditious development of wireless communications, location fingerprinting (LF) has enabled a wide range of indoor location-based services (ILBSs) in the Internet of Things (IoT). Most pattern-matching LF solutions either rely on the simple received signal strength (RSS), which suffers dramatic performance degradation under sophisticated environmental dynamics, or on fine-grained physical-layer channel state information (CSI), whose intricate structure increases computational complexity. Meanwhile, the harsh indoor environment can breed similar radio signatures among certain predefined reference points (RPs), which may be randomly distributed in the area of interest, severely degrading location-mapping accuracy. To address these dilemmas, during the offline site survey we first adopt the autoregressive (AR) modeling entropy of the CSI amplitude as the location fingerprint, which shares the structural simplicity of RSS while preserving the most location-specific statistical channel information. Moreover, an additional angle-of-arrival (AoA) fingerprint is accurately retrieved from the CSI phase through an enhanced subspace-based algorithm, which serves to eliminate error-prone RP candidates. In the online phase, by exploiting both CSI amplitude and phase information, a novel bivariate kernel regression scheme is proposed to precisely infer the target's location. Results from extensive indoor experiments validate the superior localization performance of the proposed system over previous approaches.
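
    The AR-entropy fingerprint admits a compact implementation. The following is a minimal sketch of one plausible reading, assuming a Gaussian AR(p) model fit by the Yule-Walker equations, with the fingerprint taken as the differential entropy rate implied by the innovation variance; the model order, preprocessing, and exact entropy definition are assumptions here, not the paper's specification.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def ar_entropy(csi_amplitude, order=4):
            # Hypothetical fingerprint feature: differential entropy rate of a
            # Gaussian AR(p) model fit to one CSI amplitude time series.
            x = np.asarray(csi_amplitude, dtype=float)
            x = x - x.mean()
            n = len(x)
            # Biased autocorrelation estimates r[0..p]
            r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
            # Yule-Walker: solve the Toeplitz system R a = r[1..p]
            a = solve_toeplitz(r[:order], r[1:order + 1])
            # Innovation (one-step prediction error) variance
            sigma2 = r[0] - np.dot(a, r[1:order + 1])
            # Entropy rate of a Gaussian process with this innovation variance
            return 0.5 * np.log(2.0 * np.pi * np.e * sigma2)

    In the offline phase, one such scalar (or one per subcarrier) would be stored for each reference point; this low dimensionality is what gives the fingerprint its RSS-like structural simplicity.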

    Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization

    Principal component analysis (PCA) is widely used for dimensionality reduction, with well-documented merits in applications involving high-dimensional data, including computer vision, preference measurement, and bioinformatics. In this context, the fresh look advocated here draws on benefits from variable selection and compressive sampling to robustify PCA against outliers. A least-trimmed squares estimator of a low-rank bilinear factor analysis model is shown to be closely related to the estimator obtained from an ℓ₀-(pseudo)norm-regularized criterion that encourages sparsity in a matrix explicitly modeling the outliers. This connection suggests robust PCA schemes based on convex relaxation, which lead naturally to a family of robust estimators encompassing Huber's optimal M-class as a special case. Outliers are identified by tuning a regularization parameter, which amounts to controlling the sparsity of the outlier matrix along the whole robustification path of (group) least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its neat ties to robust statistics, the developed outlier-aware PCA framework is versatile enough to accommodate novel and scalable algorithms that: i) track the low-rank signal subspace robustly as new data are acquired in real time; and ii) determine principal components robustly in (possibly) infinite-dimensional feature spaces. Synthetic and real-data tests corroborate the effectiveness of the proposed robust PCA schemes when used to identify aberrant responses in personality assessment surveys, unveil communities in social networks, and flag intruders in video surveillance data.
    Comment: 30 pages, submitted to IEEE Transactions on Signal Processing
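
    The convex relaxation has a simple algorithmic reading: with the ℓ₀ penalty relaxed to ℓ₁, the outlier matrix is updated by entrywise soft-thresholding. The sketch below is a generic alternating scheme rather than the paper's algorithm; it fits Y ≈ L + O with L of fixed low rank and O sparse, and the function and parameter names are illustrative.

        import numpy as np

        def outlier_aware_pca(Y, rank, lam, n_iter=100):
            # Alternating minimization of 0.5*||Y - L - O||_F^2 + lam*||O||_1
            # subject to rank(L) <= rank.
            O = np.zeros_like(Y)
            for _ in range(n_iter):
                # Low-rank step: best rank-r approximation of Y - O (truncated SVD)
                U, s, Vt = np.linalg.svd(Y - O, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                # Outlier step: soft-threshold the residual (Lasso proximal update)
                R = Y - L
                O = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
            return L, O

    Sweeping lam traces the robustification path: large values drive O to zero (plain PCA), while smaller values flag more entries as outliers.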

    System Identification of Constructed Facilities: Challenges and Opportunities Across Hazards

    The motivation, success, and prevalence of full-scale monitoring of constructed buildings vary considerably across the hazard of concern (earthquakes, strong winds, etc.), due in part to various fiscal and life-safety motivators. Yet while the challenges of successful deployment and operation of large-scale monitoring initiatives are significant, they are perhaps dwarfed by the challenges of data management, interrogation, and, ultimately, system identification. Practical constraints on everything from sensor density to the availability of measured input have driven the development of a wide array of system identification and damage detection techniques, which in many cases become hazard-specific. In this study, the authors share their experiences in full-scale monitoring of buildings across hazards and the associated challenges of system identification. The study concludes with a brief agenda for next-generation research in the area of system identification of constructed facilities.

    BigEAR: Inferring the Ambient and Emotional Correlates from Smartphone-based Acoustic Big Data

    This paper presents BigEAR, a novel big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user holds social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify the wearer's mood from vocal activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant to psychologists who intend to monitor or infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to understand how their conversations affect emotional and social well-being. With state-of-the-art methods, psychologists and their teams must listen to the audio recordings and make these inferences by subjective evaluation, which is not only time-consuming and costly but also demands manual data coding for thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to ground truth obtained from a human rater; our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
    Comment: 6 pages, 10 equations, 1 table, 5 figures, IEEE International Workshop on Big Data Analytics for Smart and Connected Health 2016, June 27, 2016, Washington DC, USA
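
    As a rough illustration of how such an automated pipeline replaces manual coding, the sketch below classifies short audio clips into vocal-activity labels from summary MFCC features. The feature choice, classifier, and function names are assumptions for illustration; the paper's PAPC is not reproduced here.

        import numpy as np
        import librosa
        from sklearn.ensemble import RandomForestClassifier

        def clip_features(path, sr=16000, n_mfcc=13):
            # Per-clip feature vector: mean and standard deviation of MFCCs,
            # a common front end for vocal-activity classification.
            y, sr = librosa.load(path, sr=sr)
            m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([m.mean(axis=1), m.std(axis=1)])

        def train_activity_classifier(paths, labels):
            # labels: e.g. "laughing", "singing", "crying", "arguing", "sighing"
            X = np.stack([clip_features(p) for p in paths])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            return clf.fit(X, labels)

    Accuracy against a human rater is then simply the fraction of clips whose predicted label matches the rater's annotation.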

    Master of Science

    An approach for subspace detection and magnitude estimation of small seismic events is proposed. The process is used to identify mining-related seismicity from a surface coal mine and an underground coal-mining district, both located in the western U.S. Using a blasting log and a locally derived seismic catalog as ground truth, detector performance is assessed in terms of verified detections, false positives, and failed detections. Over 95% of the surface coal mine blasts and about 33% of the events from the underground mining district are correctly identified. The number of potential false positives is kept relatively low by requiring detections to occur simultaneously on two stations. Many of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogs. A trade-off in detection performance is observed between stations at smaller source-receiver distances, which have higher signal-to-noise ratios, and stations at larger distances, which have greater waveform similarity. The increased detection capability of a single higher-dimensional subspace detector, compared to multiple lower-dimensional detectors, is explored in identifying events that can be described as linear combinations of training events. In this data set, such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
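
    A subspace detector generalizes a correlation detector: instead of matching one template, it measures how much of a sliding window's energy falls in the span of several training waveforms. A minimal sketch, with illustrative names and assuming pre-aligned, equal-length training events:

        import numpy as np

        def subspace_basis(training_waveforms, dim):
            # Orthonormal basis (rows) spanning the dominant waveform
            # subspace of the aligned training events, via SVD.
            X = np.asarray(training_waveforms, dtype=float)
            X = X - X.mean(axis=1, keepdims=True)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            return Vt[:dim]                       # shape (dim, n_samples)

        def detection_statistic(stream, U):
            # Sliding-window fraction of energy captured by the subspace:
            # c(t) = ||U x_t||^2 / ||x_t||^2, which lies in [0, 1].
            n = U.shape[1]
            stats = np.empty(len(stream) - n + 1)
            for t in range(len(stats)):
                x = stream[t:t + n]
                energy = np.dot(x, x)
                proj = U @ x
                stats[t] = np.dot(proj, proj) / energy if energy > 0 else 0.0
            return stats

    A dimension of one recovers ordinary waveform correlation; higher dimensions let the detector match linear combinations of training events, which is the advantage explored in the thesis.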

    Subspace averaging for source enumeration in large arrays

    Subspace averaging is proposed and examined as a method of enumerating sources in large linear arrays under conditions of low sample support. The key idea is to exploit shift invariance as a way of extracting many subspaces, which may then be approximated by a single extrinsic average. An automatic order-determination rule for this extrinsic average then gives the number of sources. Experimental results are presented for cases where the number of array snapshots is roughly half the number of array elements and the sources are well separated with respect to the Rayleigh limit.

    The work of I. Santamaría was partially supported by the Ministerio de Economía y Competitividad (MINECO) of Spain and AEI/FEDER funds of the E.U. under grant TEC2016-75067-C4-4-R (CARMEN). The work of D. Ramírez was partly supported by the Ministerio de Economía of Spain under projects OTOSIS (TEC2013-41718-R) and the COMONSENS Network (TEC2015-69648-REDC); by the Ministerio de Economía of Spain jointly with the European Commission (ERDF) under projects ADVENTURE (TEC2015-69868-C2-1-R) and CAIMAN (TEC2017-86921-C2-2-R); and by the Comunidad de Madrid under project CASI-CAM-CM (S2013/ICE-2845). The work of L. L. Scharf was supported by the National Science Foundation (NSF) under grant CCF-1018472.
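
    A minimal sketch of the idea, with assumed parameter names and a simple stand-in order rule (count eigenvalues of the averaged projection matrix above 1/2); the paper's automatic rule and subspace-extraction details may differ.

        import numpy as np

        def enumerate_sources(X, subarray_len, sub_dim):
            # X: (M, N) data from M sensors and N snapshots. Shift invariance
            # of the linear array yields M - L + 1 overlapping subarrays of
            # length L; each contributes one estimated signal subspace.
            M, N = X.shape
            L = subarray_len
            P_avg = np.zeros((L, L), dtype=complex)
            n_sub = M - L + 1
            for i in range(n_sub):
                Xi = X[i:i + L]                   # subarray snapshots
                R = Xi @ Xi.conj().T / N          # sample covariance
                _, V = np.linalg.eigh(R)          # eigenvalues ascending
                Us = V[:, -sub_dim:]              # dominant eigenvectors
                P_avg += Us @ Us.conj().T         # projection matrix
            P_avg /= n_sub                        # extrinsic average
            eig = np.linalg.eigvalsh(P_avg)
            return int(np.sum(eig > 0.5))         # estimated source count

    Averaging many subarray projections is what compensates for low sample support: each individual covariance estimate is poor, but the averaged projector concentrates around the common signal subspace.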

    Automatic Detection of the Number of Raypaths in a Shallow-Water Waveguide

    Correct identification and tracking of stable raypaths are critical for shallow-water acoustic tomography. High-resolution methods for separating raypaths have been proposed to improve resolution, but they rely on prior knowledge of the number of raypaths, and that knowledge largely determines separation performance. Therefore, a noise-whitening exponential fitting test (NWEFT) using short-length samples is proposed in this paper to automatically detect the number of raypaths in a shallow-water waveguide. Two information-theoretic criteria are considered as comparison methods in terms of the capability of correct detection. Their performances are tested with simulated data and with real data from a small-scale experiment. The experimental results show that the NWEFT provides satisfactory detection compared to the two classic information-theoretic criteria, the Akaike information criterion (AIC) and the minimum description length (MDL): MDL is asymptotically consistent, while AIC overestimates even in the asymptotic regime. Compared to these criteria, the proposed method is more suitable for short-length data.
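
    For reference, the two baselines are the classic Wax-Kailath eigenvalue tests. A self-contained sketch, assuming the sorted eigenvalues of a sample covariance matrix are available (the NWEFT statistic itself is not reproduced here):

        import numpy as np

        def aic_mdl_order(eigvals, n_snapshots):
            # Wax-Kailath AIC/MDL estimates of model order from the
            # eigenvalues of a p x p sample covariance matrix.
            lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
            p, N = len(lam), n_snapshots
            aic, mdl = [], []
            for k in range(p):
                tail = lam[k:]                 # candidate noise eigenvalues
                m = p - k
                # log(geometric mean / arithmetic mean) of the noise eigenvalues
                log_ratio = np.log(tail).mean() - np.log(tail.mean())
                ll = -N * m * log_ratio        # negative log-likelihood term
                pen = k * (2 * p - k)          # number of free parameters
                aic.append(2.0 * ll + 2.0 * pen)
                mdl.append(ll + 0.5 * pen * np.log(N))
            return int(np.argmin(aic)), int(np.argmin(mdl))

    The consistency gap mentioned above shows up directly here: the MDL penalty grows with log N and suppresses overestimation asymptotically, while the fixed AIC penalty does not.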