22 research outputs found

    Human activity recognition with commercial WiFi signals


    A reduced complexity channel estimation algorithm with equalization

    The error rate performance of a previously developed reduced-complexity channel estimator, known as the generalized least mean squares (GLMS) algorithm, is investigated in conjunction with a minimum mean square error (MMSE) decision feedback equalizer (DFE). The channel estimator is based on the theory of polynomial prediction and a Taylor series expansion of the underlying channel model in the time domain. It is a simplification of a previously developed generalized recursive least squares (GRLS) estimator, obtained by replacing the online recursive computation of the 'intermediate' matrix with an offline pre-computed matrix. Like the GRLS estimator, it can operate in Rayleigh or Rician fading environments without reconfiguring the state transition matrix to accommodate the non-random mean components. Simulation results show that it offers a trade-off between reduced-complexity channel estimation and good system performance.
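The polynomial-prediction idea behind such estimators can be illustrated with a minimal sketch: a channel tap is modelled by a first-order Taylor (polynomial) state, predicted forward each symbol, and corrected by a fixed, offline-chosen gain instead of an online recursive matrix. This is not the GLMS algorithm itself; the gain values and function names below are illustrative assumptions.

```python
import numpy as np

def track_channel(rx, tx, T=1.0, g=(0.3, 0.05)):
    """Track one flat-fading channel tap with a first-order
    polynomial (Taylor) state model: state = [tap, tap_rate].
    The fixed gain vector g stands in for an offline pre-computed
    correction matrix (illustrative values, not from the paper)."""
    F = np.array([[1.0, T], [0.0, 1.0]])         # polynomial state transition
    state = np.zeros(2, dtype=complex)           # [h_est, dh/dt]
    est = []
    for r, s in zip(rx, tx):
        state = F @ state                        # predict tap via Taylor model
        err = r - state[0] * s                   # innovation on received symbol
        state += np.array(g) * err * np.conj(s)  # fixed-gain correction
        est.append(state[0])
    return np.array(est)

# Simulate a slowly drifting tap and BPSK symbols
rng = np.random.default_rng(0)
n = 200
h = 1.0 + 0.002j * np.arange(n)                  # linearly drifting channel tap
tx = rng.choice([-1.0, 1.0], size=n)
rx = h * tx + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h_est = track_channel(rx, tx)
```

Because the state carries a rate term, the tracker follows a linearly drifting tap with no steady-state lag, which is the appeal of polynomial prediction over a zeroth-order (constant-channel) model.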

    MIMO Receiver Using Reduced Complexity Sequence Estimation With Channel Estimation and Tracking


    Vertical line-based descriptor for omnidirectional view image

    A catadioptric omnidirectional view sensor offers a convenient 360-degree field of view that favours various robotic applications. The characteristic distortion of omnidirectional view images permits a robust image feature in the form of vertical/central propagating lines. In this paper, we propose an improvement to an existing vertical line detection algorithm using the Haar wavelet transform computed over an integral image. Subsequently, a new lightweight descriptor scheme is developed using the same Haar wavelet responses, which suits the nature of line features.
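The core mechanism named in the abstract, Haar wavelet responses evaluated in constant time over an integral image, can be sketched as follows. This is a generic vertical-edge Haar response, not the paper's full detector; function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) via four table look-ups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_vertical(ii, y, x, h, w):
    """Vertical-edge Haar response: right half-window minus left.
    A large |response| marks a vertical intensity transition."""
    return box_sum(ii, y, x + w // 2, h, w // 2) - box_sum(ii, y, x, h, w // 2)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                        # vertical step edge at column 4
ii = integral_image(img)
resp = haar_vertical(ii, 0, 2, 8, 4)    # window straddling the edge
```

Once the table is built, every response costs a fixed eight look-ups regardless of window size, which is what makes the scheme lightweight enough for a descriptor.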

    A parallel root-finding method for omnidirectional image unwrapping

    Panoramic unwrapping for catadioptric omnidirectional view (COV) sensors has mostly relied on a precomputed mapping look-up table because of the heavy computational load, whose bottleneck generally lies in solving a sextic polynomial. However, this approach limits viewpoint dynamics, since runtime modifications to the mapping values are not possible. In this paper, a parallel root-finding technique based on the Compute Unified Device Architecture (CUDA) platform is proposed. The proposed method enables on-the-fly computation of the mapping look-up table, facilitating real-time, viewpoint-adjustable panoramic unwrapping. Experimental results showed that the proposed implementation incurred minimal computational load and ran at 10.3 times and 2.3 times the speed of a current-generation central processing unit (CPU) in single-core and multi-core environments, respectively.
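The workload described is embarrassingly parallel: each look-up-table entry requires solving its own polynomial, so one GPU thread can own one root. As a sketch of that structure (without reproducing a CUDA kernel), a batched Newton iteration can be vectorized so every row is an independent problem; the polynomials below are simple quadratics standing in for the paper's sextic.

```python
import numpy as np

def batch_newton(coeffs, x0, iters=50):
    """Solve many independent polynomial equations in parallel.
    coeffs: (n, d+1) array, one polynomial per row, highest degree
    first. Each row mirrors one GPU thread's mapping entry."""
    d = coeffs.shape[1] - 1
    dcoeffs = coeffs[:, :-1] * np.arange(d, 0, -1)  # derivative coefficients
    x = x0.astype(float).copy()
    for _ in range(iters):
        p = np.zeros_like(x)
        dp = np.zeros_like(x)
        for c in coeffs.T:                # Horner evaluation of p(x), all rows
            p = p * x + c
        for c in dcoeffs.T:               # Horner evaluation of p'(x), all rows
            dp = dp * x + c
        x -= p / dp                       # one Newton step for every problem
    return x

# Each row encodes x**2 - k = 0 for k = 1..4, so the roots are sqrt(k)
k = np.arange(1.0, 5.0)
coeffs = np.stack([np.ones(4), np.zeros(4), -k], axis=1)
roots = batch_newton(coeffs, x0=np.full(4, 2.0))
```

Because no row depends on another, the same loop body maps directly onto a CUDA grid with one thread per polynomial.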

    Visual detection in omnidirectional view sensors


    A closed form unwrapping method for a spherical omnidirectional view sensor

    This article proposes a novel method of image unwrapping for spherical omnidirectional images acquired through a non-single viewpoint (NSVP) omnidirectional sensor. It has three key steps: (1) calibrate the camera to obtain parameters describing the spherical omnidirectional sensor, (2) map world points onto mirror points and, subsequently, onto image points, and (3) set up the projection plane for the final image unwrapping. Depending on the projection plane selected, the algorithm can produce three common forms of unwrapping, namely cylindrical panoramic, cuboid panoramic, and ground plane views, using closed-form mapping equations. The motivation for developing this technique is to address the complexity of using an NSVP omnidirectional sensor and ultimately to encourage its application in the robotics field. One of the main advantages of an NSVP omnidirectional sensor is that the mirror can often be obtained at a lower price than its single viewpoint counterpart.
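The cylindrical case of such an unwrapping reduces to a closed-form polar remap once the sensor is calibrated: each panorama column picks an azimuth and each row a radius in the source image. The sketch below assumes a radially symmetric projection with a hand-picked radial range; the NSVP sensor in the article additionally requires the calibrated mirror profile to relate radius to elevation.

```python
import numpy as np

def unwrap_cylindrical(img, center, r_min, r_max, out_w, out_h):
    """Remap a doughnut-shaped omnidirectional image to a cylindrical
    panorama via the closed-form polar transform (nearest-neighbour
    sampling). Radial symmetry is assumed; r_min/r_max bound the
    usable mirror annulus and are illustrative inputs."""
    cx, cy = center
    theta = 2 * np.pi * np.arange(out_w) / out_w                   # column -> azimuth
    r = r_max - (r_max - r_min) * np.arange(out_h) / (out_h - 1)   # row -> radius
    xs = (cx + np.outer(r, np.cos(theta))).round().astype(int)
    ys = (cy + np.outer(r, np.sin(theta))).round().astype(int)
    return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]

omni = np.random.default_rng(1).random((201, 201))
pano = unwrap_cylindrical(omni, center=(100, 100), r_min=20, r_max=90,
                          out_w=360, out_h=70)
```

Since every output pixel is computed independently from the same closed-form equations, no precomputed look-up table is strictly necessary, which is the practical point of a closed-form mapping.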