
    Zoom: A multi-resolution tasking framework for crowdsourced geo-spatial sensing

    Abstract—As sensor networking technologies continue to develop, the notion of adding large-scale mobility into sensor networks is becoming feasible by crowd-sourcing data collection to personal mobile devices. However, tasking such networks at fine granularity becomes problematic because the sensors are heterogeneous and owned by the crowd, not the network operators. In this paper, we present Zoom, a multi-resolution tasking framework for crowdsourced geo-spatial sensor networks. Zoom allows users to define arbitrary sensor groupings over heterogeneous, unstructured and mobile networks and assign different sensing tasks to each group. The key idea is the separation of the task information (what task a particular sensor should perform) from the task implementation (code). Zoom consists of (i) a map, an overlay on top of a geographic region, to represent both the sensor groups and the task information, and (ii) adaptive encoding of the map at multiple resolutions and region-of-interest cropping for resource-constrained devices, allowing sensors to zoom in quickly to a specific region to determine their task. Simulation of a realistic traffic application over an area of 1 sq. km with a task map of size 1.5 KB shows that more than 90% of nodes are tasked correctly. Zoom also outperforms Logical Neighborhoods, the state-of-the-art tasking protocol, in task information size for similar tasks: its encoded map size is always less than 50% of Logical Neighborhoods' predicate size.
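    The multi-resolution map encoding and region-of-interest cropping described above can be illustrated with a minimal sketch (Python is used here; the abstract does not specify an implementation language). The quadtree-style 2x2 majority downsampling and the one-byte-per-cell budget are assumptions made purely for exposition, not Zoom's actual encoding.

    ```python
    # Illustrative sketch of a multi-resolution task map in the spirit of Zoom.
    # The downsampling rule and byte-budget heuristic are assumptions, not the paper's encoding.
    import numpy as np

    def build_pyramid(task_map: np.ndarray, levels: int):
        """Build coarser versions of a 2-D grid of task IDs by 2x2 majority vote."""
        pyramid = [task_map]
        for _ in range(levels - 1):
            m = pyramid[-1]
            h, w = (m.shape[0] // 2) * 2, (m.shape[1] // 2) * 2
            blocks = m[:h, :w].reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2).reshape(h // 2, w // 2, 4)
            # majority task ID per 2x2 block
            coarse = np.array([[np.bincount(b).argmax() for b in row] for row in blocks])
            pyramid.append(coarse)
        return pyramid

    def crop_for_device(pyramid, roi, byte_budget):
        """Return the finest region-of-interest crop whose size fits the device budget."""
        r0, r1, c0, c1 = roi                          # fractions of the map extent
        for level, m in enumerate(pyramid):           # finest level first
            h, w = m.shape
            crop = m[int(r0 * h):int(r1 * h), int(c0 * w):int(c1 * w)]
            if crop.size <= byte_budget:              # assume 1 byte per cell
                return level, crop
        return len(pyramid) - 1, pyramid[-1]

    tasks = np.random.randint(0, 4, size=(64, 64))    # 4 task types over a 64x64 grid
    level, crop = crop_for_device(build_pyramid(tasks, 4), (0.25, 0.5, 0.25, 0.5), 300)
    print(level, crop.shape)
    ```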

    Developing a Volume Model Using South NTS-372R Total Station without Tree Felling in a Populus canadensis Moench Plantation in Beijing, China

    Volume table preparation using the traditional method and a collection model requires the harvest of approximately 200–300 trees of each species. Although high precision can be achieved with that method, it causes severe damage to the forest. To minimize these losses, in this study a South NTS-372R total station with a precise angle and distance measurement mode was used to measure 507 Populus canadensis Moench trees without felling a single tree. Moreover, the C# programming language was used in this study, and the collected volume data were entered into the total station. Using this method, real-time, precise measurement of volume could be achieved. After data collection, the optimal binary volume model for Populus canadensis Moench was obtained through a comparative analysis. The Yamamoto model turned out to be the optimal binary volume model (i.e., two-predictor model), with a coefficient of determination (R2) of 0.9641 and a standard error of the estimate (SEE) of 0.19 m3, indicating a good fit. Moreover, it showed relative stability, with a total relative error (TRE) of –0.12% and a mean systematic error (MSE) of –1.24%. A mean prediction error (MPE) of 1.18% and a mean prediction standard error (MPSE) of 9.25% showed the high estimation precision of the average and individual tree volumes. The model has only three parameters, so it is suitable for volume table preparation. Finally, this study presents some new technical methods and means for volume modeling for further application in forestry.
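    The fitting and evaluation workflow for a three-parameter, two-predictor volume model can be sketched as below. The abstract does not give the Yamamoto model's functional form, so the classical power form V = a·D^b·H^c is used here only as a stand-in, and the diameter/height/volume values are invented for illustration.

    ```python
    # Minimal sketch of fitting a three-parameter, two-predictor (DBH, height) volume model.
    # Model form and data are illustrative assumptions, not the paper's measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def volume_model(X, a, b, c):
        d, h = X
        return a * d**b * h**c

    # hypothetical measurements: diameter at breast height (cm), height (m), volume (m^3)
    d = np.array([12.0, 18.5, 24.1, 30.0, 35.6])
    h = np.array([10.2, 14.8, 17.5, 20.1, 22.3])
    v = np.array([0.055, 0.180, 0.370, 0.650, 0.980])

    params, _ = curve_fit(volume_model, (d, h), v, p0=[1e-4, 2.0, 1.0])
    pred = volume_model((d, h), *params)

    r2 = 1 - np.sum((v - pred)**2) / np.sum((v - np.mean(v))**2)   # coefficient of determination
    see = np.sqrt(np.sum((v - pred)**2) / (len(v) - len(params)))  # standard error of the estimate
    print(params, r2, see)
    ```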

    Effects of Topography on Tree Community Structure in a Deciduous Broad-Leaved Forest in North-Central China

    Topography strongly influences the compositional structure of tree communities and plays a fundamental role in classifying habitats. Here, data on topography and the abundances of 16 dominant tree species were collected in a fully mapped 25-ha forest plot in the Qinling Mountains of north-central China. Multivariate regression trees (MRT) were used to categorize the habitats, and habitat associations were examined using the torus-translation test. The relative contributions of topographic and spatial variables to the total community structure were also examined by variation partitioning. The results showed that, with a few exceptions, species–habitat associations were inconsistent across life stages. Topographic variables [a + b] explained 11% and 19% of the total variance at the adult and juvenile stages, respectively. In contrast, spatial factors alone [c] explained more variation than topographic factors, revealing strong seed dispersal limitation in species composition in the 25-ha forest plot. Thus, the inconsistent species–habitat associations, coupled with the high proportion of the variation in species composition explained by topographic and spatial factors, suggest that niche processes and dispersal limitation might both influence community assembly in this deciduous broad-leaved forest in north-central China.
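    The arithmetic behind the variation-partitioning fractions [a], [b], and [c] can be sketched as follows: fit the species data on topography alone, space alone, and both together, then subtract the explained-variance values. Published analyses of this kind typically use (partial) RDA with adjusted R², so the plain multivariate linear regression and the toy data below are simplifications used only to show the bookkeeping.

    ```python
    # Sketch of variation partitioning between topography and space (toy data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 200
    topo = rng.normal(size=(n, 3))         # e.g. elevation, slope, convexity
    space = rng.normal(size=(n, 5))        # e.g. spatial eigenvectors (MEMs/PCNMs)
    species = topo @ rng.normal(size=(3, 16)) + (space @ rng.normal(size=(5, 16))) * 0.5 \
              + rng.normal(size=(n, 16))   # abundances of 16 species (simulated)

    def r2(X, Y):
        return LinearRegression().fit(X, Y).score(X, Y)

    ab  = r2(topo, species)                        # [a + b]: topography, incl. shared part
    bc  = r2(space, species)                       # [b + c]: space, incl. shared part
    abc = r2(np.hstack([topo, space]), species)    # [a + b + c]: both together
    a = abc - bc        # pure topographic fraction
    c = abc - ab        # pure spatial fraction
    b = ab + bc - abc   # shared fraction
    print(f"[a]={a:.2f}  [b]={b:.2f}  [c]={c:.2f}  residual={1 - abc:.2f}")
    ```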

    A New Biometric Template Protection using Random Orthonormal Projection and Fuzzy Commitment

    Biometric template protection is one of the most essential parts of putting a biometric-based authentication system into practice. Many studies have proposed different solutions to secure users' biometric templates, and they can be categorized into two approaches: feature transformation and biometric cryptosystems. However, no single template protection approach can satisfy all the requirements of a secure biometric-based authentication system. In this work, we propose a novel hybrid biometric template protection scheme that takes the benefits of both approaches while avoiding their limitations. The experiments demonstrate that the performance of the system can be maintained with the support of a new random orthonormal projection technique, which reduces the computational complexity while preserving the accuracy. Meanwhile, the security of the biometric templates is guaranteed by employing the fuzzy commitment protocol.
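    The two building blocks named above can be combined as in the following sketch: a random orthonormal projection (obtained here from the QR decomposition of a Gaussian matrix) transforms the feature vector, and a fuzzy-commitment-style binding ties a secret key to the binarized template. The sign-based binarization and the omission of the error-correcting code are simplifications for illustration; a real fuzzy commitment encodes the key with an ECC (e.g., BCH) so it can be recovered from a noisy query, and this is not the paper's exact construction.

    ```python
    # Sketch: random orthonormal projection + fuzzy-commitment-style key binding.
    import hashlib
    import numpy as np

    rng = np.random.default_rng(42)

    def random_orthonormal(dim: int) -> np.ndarray:
        """Orthonormal matrix from the QR decomposition of a Gaussian matrix."""
        q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
        return q

    def enroll(feature: np.ndarray, key_bits: np.ndarray, proj: np.ndarray):
        transformed = proj @ feature                          # revocable transformed template
        template_bits = (transformed > 0).astype(np.uint8)    # naive sign binarization
        helper = template_bits ^ key_bits                     # XOR binding (ECC codeword in practice)
        commitment = hashlib.sha256(key_bits.tobytes()).hexdigest()
        return helper, commitment

    def verify(query: np.ndarray, helper, commitment, proj) -> bool:
        query_bits = ((proj @ query) > 0).astype(np.uint8)
        recovered = query_bits ^ helper                       # equals key_bits iff bits match exactly
        return hashlib.sha256(recovered.tobytes()).hexdigest() == commitment

    dim = 64
    proj = random_orthonormal(dim)
    feature = rng.normal(size=dim)
    key = rng.integers(0, 2, size=dim, dtype=np.uint8)

    helper, commitment = enroll(feature, key, proj)
    print(verify(feature, helper, commitment, proj))              # True: same biometric
    print(verify(rng.normal(size=dim), helper, commitment, proj)) # almost surely False
    ```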

    DeepFake detection based on high-frequency enhancement network for highly compressed content

    DeepFakes, which generate synthetic content, have sparked a revolution in the fight against deception and forgery. However, most existing DeepFake detection methods mainly focus on improving detection performance on high-quality data while ignoring low-quality synthetic content that suffers from heavy compression. To address this issue, we propose a novel High-Frequency Enhancement framework, which leverages a learnable adaptive high-frequency enhancement network to enrich the weak high-frequency information in compressed content without supervision from uncompressed data. The framework consists of three branches, i.e., the Basic branch in the RGB domain, the Local High-Frequency Enhancement branch based on the block-wise Discrete Cosine Transform, and the Global High-Frequency Enhancement branch based on the multi-level Discrete Wavelet Transform. Among them, the local branch utilizes the Discrete Cosine Transform coefficients and a channel attention mechanism to indirectly achieve adaptive, frequency-aware multi-spatial attention, while the global branch supplements the high-frequency information by extracting coarse-to-fine multi-scale high-frequency cues and performing cascade-residual-based multi-level fusion of the Discrete Wavelet Transform coefficients. In addition, we design a Two-Stage Cross-Fusion module to effectively integrate all of this information, thereby greatly enhancing the weak high-frequency information in low-quality data. Experimental results on the FaceForensics++, Celeb-DF, and OpenForensics datasets show that the proposed method outperforms existing state-of-the-art methods and can effectively improve the detection of DeepFakes, especially on low-quality data. The code is available here.
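    The block-wise DCT idea behind such a local high-frequency branch can be sketched as below: split a grayscale face crop into 8x8 blocks, take the 2-D DCT of each block, zero out the low-frequency corner, and invert to obtain a high-frequency residual. The block size, mask radius, and grayscale input are assumptions made for illustration; the paper's branch additionally applies learned channel attention rather than a fixed mask.

    ```python
    # Sketch of block-wise DCT high-frequency cue extraction (fixed mask, not learned attention).
    import numpy as np
    from scipy.fft import dctn, idctn

    def high_frequency_residual(img: np.ndarray, block: int = 8, keep_from: int = 2) -> np.ndarray:
        h, w = img.shape
        out = np.zeros_like(img, dtype=np.float64)
        mask = np.ones((block, block))
        mask[:keep_from, :keep_from] = 0.0            # suppress the low-frequency corner
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
                out[i:i + block, j:j + block] = idctn(coeffs * mask, norm="ortho")
        return out

    face = np.random.rand(64, 64)                     # stand-in for a grayscale face crop
    hf = high_frequency_residual(face)
    print(hf.shape, float(np.abs(hf).mean()))
    ```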