300 research outputs found
A performance model of the IEEE 802.11 distributed coordination function under finite load condition
Modeling of the IEEE 802.11 wireless LAN MAC protocol has received considerable research attention recently. Most previous works have been concerned with deriving the system throughput of the Distributed Coordination Function (DCF) of the MAC protocol; very few have investigated the delay performance of the DCF scheme. This thesis presents an analytical model of the DCF under unsaturated homogeneous and heterogeneous conditions. Our model allows stations to have either the same or different packet arrival and transmission rates. Packet arrivals are assumed to follow a Poisson process. We model each station's MAC buffer as an M/G/1 queue and the service time of each station as a three-dimensional Markov chain. The dependency between stations is taken into account through backoff counter freezing and packet collisions during transmission. We derive the probability generating function (PGF) of the packet service time distribution and obtain a closed-form expression for the mean packet delay by applying the M/G/1 queuing result. We also investigate the impact of different contention window sizes on the delay performance. The accuracy of the model has been verified by extensive simulations.
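The closed-form mean-delay step above relies on the standard M/G/1 result; a minimal sketch of that final step, assuming the first two moments of the service time have already been obtained from the PGF's derivatives:

```python
def mg1_mean_delay(lam, es, es2):
    """Mean packet delay (queueing + service) in an M/G/1 queue via the
    Pollaczek-Khinchine formula.

    lam : Poisson packet arrival rate (packets/s)
    es  : mean service time E[S], e.g. from the PGF's first derivative at 1
    es2 : second moment E[S^2], from the PGF's second derivative at 1
    """
    rho = lam * es
    assert rho < 1, "queue must be stable (rho < 1)"
    wait = lam * es2 / (2.0 * (1.0 - rho))  # mean waiting time in queue
    return wait + es                         # total mean packet delay
```

As a sanity check, for exponential service with rate 1 and arrival rate 0.5 (so E[S] = 1, E[S^2] = 2) this reproduces the M/M/1 delay 1/(mu - lambda) = 2.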
Quantifying Location Privacy In Location-based Services
Mobile devices (e.g., smartphones) are widely used in people's daily lives. When users rely on location-based services in mobile applications, many location records are exposed to the service providers, which poses a severe location privacy threat. The location privacy problem for location-based services on mobile devices has drawn much attention. In 2011, Shokri et al. proposed a location privacy framework that consists of users' background knowledge, location privacy preserving mechanisms (LPPMs), inference attacks, and metrics. Since then, many works have designed their own location privacy frameworks based on this structure. One problem with the existing works is that most of them use cell-based location privacy frameworks to simplify the computation, which may yield performance results that differ from those of more realistic frameworks. Besides, while many existing works focus on designing new LPPMs, we do not know how much the location information an adversary can obtain differs when users use different LPPMs. Although some works propose new complementary privacy metrics (e.g., geo-indistinguishability, conditional entropy) to show that their LPPMs are better, we do not know how these metrics relate to the location information an adversary can obtain. In addition, previous works usually assume either a strong adversary, who has complete background knowledge to construct a user's mobility profile, or a weak adversary, who has no background knowledge about a user. What the attack results would look like when an adversary has varying amounts of background knowledge, and can also take semantic background knowledge into account, is unknown.
To address these problems, we propose a more realistic location privacy framework, which takes location points instead of cells as inputs. Our framework contains both geographical and semantic background knowledge, different LPPMs with or without the geo-indistinguishability property, different inference attacks, and both the average distance error and success rate metrics. We design several experiments using a Twitter check-in dataset to quantify our location privacy framework from an adversary's point of view. Our experiments show that an adversary needs only 6% of the background knowledge to infer around 50% of the users' actual locations that he could infer with full background knowledge; the prior probability distribution of an LPPM has much less impact than the background knowledge; an LPPM with the geo-indistinguishability property may not perform better against different attacks than LPPMs without this property; and the semantic information is not as useful as previous work suggests. We believe our findings will help users and researchers better understand our location privacy framework and choose the appropriate techniques in different cases.
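The two metrics named above can be sketched directly; the haversine distance and the 0.5 km success radius below are illustrative assumptions, not the thesis's exact definitions:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def average_distance_error(actual, inferred):
    """Mean distance between users' true locations and the adversary's guesses."""
    return sum(haversine_km(a, g) for a, g in zip(actual, inferred)) / len(actual)

def success_rate(actual, inferred, radius_km=0.5):
    """Fraction of guesses within radius_km of the true location
    (the threshold is a hypothetical choice for illustration)."""
    hits = sum(haversine_km(a, g) <= radius_km for a, g in zip(actual, inferred))
    return hits / len(actual)
```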
Revisiting Grammatical Error Correction Evaluation and Beyond
Pretraining-based (PT-based) automatic evaluation metrics (e.g., BERTScore
and BARTScore) have been widely used in several sentence generation tasks
(e.g., machine translation and text summarization) due to their better
correlation with human judgments over traditional overlap-based methods.
Although PT-based methods have become the de facto standard for training
grammatical error correction (GEC) systems, GEC evaluation still does not
benefit from pretrained knowledge. This paper takes the first step towards
understanding and improving GEC evaluation with pretraining. We first find that
arbitrarily applying PT-based metrics to GEC evaluation brings unsatisfactory
correlation results because of the excessive attention paid to inessential
system outputs (e.g., unchanged parts). To alleviate this limitation, we propose a
novel GEC evaluation metric to achieve the best of both worlds, namely PT-M2
which only uses PT-based metrics to score those corrected parts. Experimental
results on the CoNLL14 evaluation task show that PT-M2 significantly
outperforms existing methods, achieving a new state-of-the-art result of 0.949
Pearson correlation. Further analysis reveals that PT-M2 is robust to evaluate
competitive GEC systems. Source code and scripts are freely available at
https://github.com/pygongnlp/PT-M2.
Comment: Accepted to EMNLP 2022
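The core idea, scoring only the corrected spans with a pretrained metric rather than whole sentences, can be sketched as follows; `pt_score` is a hypothetical stand-in for a PT-based scorer such as BERTScore, and `difflib` stands in for proper M2-style edit extraction:

```python
import difflib

def extract_edits(source, hypothesis):
    """Changed spans between a source sentence and a correction
    (a rough stand-in for the M2 edit extraction PT-M2 builds on)."""
    src, hyp = source.split(), hypothesis.split()
    sm = difflib.SequenceMatcher(a=src, b=hyp)
    return [(" ".join(src[i1:i2]), " ".join(hyp[j1:j2]))
            for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]

def pt_m2_sketch(source, hypothesis, reference, pt_score):
    """Score only the corrected parts, ignoring unchanged text.
    `pt_score(a, b)` is a hypothetical callable wrapping a PT-based metric."""
    hyp_edits = extract_edits(source, hypothesis)
    ref_edits = extract_edits(source, reference)
    if not ref_edits and not hyp_edits:
        return 1.0  # nothing needed correcting, nothing was changed
    # Score each hypothesis edit against its best-matching reference edit
    scores = [max((pt_score(h, r) for _, r in ref_edits), default=0.0)
              for _, h in hyp_edits]
    return sum(scores) / max(len(scores), 1)
```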
Maximum Likelihood Estimation of Model Uncertainty in Predicting Soil Nail Loads Using Default and Modified FHWA Simplified Methods
Accuracy evaluation of the default Federal Highway Administration (FHWA) simplified equation for prediction of maximum soil nail loads under working conditions is presented in this study using the maximum likelihood method and a large amount of measured lower and upper bound nail load data reported in the literature. Accuracy was quantitatively expressed as model bias, where model bias is defined as the ratio of measured to predicted nail load. The maximum likelihood estimation was carried out assuming normal and lognormal distributions of bias. Analysis outcomes showed that, based on the collected data, the default FHWA simplified nail load equation is satisfactorily accurate on average, and the spread in prediction accuracy, expressed as the coefficient of variation of bias, is about 30%, regardless of the distribution type. Empirical calibrations of the default FHWA simplified nail load equation were proposed for accuracy improvement. The Bayesian Information Criterion was adopted to compare the suitability of the competing normal and lognormal statistical models intended to describe the model bias. An example of reliability-based design of soil nail walls against the internal pullout limit state of nails is provided at the end to demonstrate the benefit of performing model calibration and using the calibrated model for the design of soil nails.
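A minimal sketch of the lognormal maximum likelihood fit of model bias, with BIC for the normal-versus-lognormal comparison; this is an illustrative stdlib implementation, not the study's actual calibration code:

```python
import math

def lognormal_mle(bias):
    """MLE for a lognormal fit to model bias values
    (bias = measured / predicted nail load). Returns the mean and
    coefficient of variation (COV) of bias implied by the fit."""
    logs = [math.log(b) for b in bias]
    n = len(logs)
    mu = sum(logs) / n
    sigma2 = sum((x - mu) ** 2 for x in logs) / n  # MLE uses 1/n, not 1/(n-1)
    mean = math.exp(mu + sigma2 / 2)               # lognormal mean
    cov = math.sqrt(math.exp(sigma2) - 1)          # lognormal COV
    return mean, cov

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion for a model with k parameters
    fitted to n bias observations; lower BIC indicates a better fit."""
    return k * math.log(n) - 2 * log_likelihood
```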
DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes
Cross-view multi-object tracking aims to link objects between frames and
camera views with substantial overlaps. Although cross-view multi-object
tracking has received increased attention in recent years, existing datasets
still have several issues, including 1) missing real-world scenarios, 2)
lacking diverse scenes, 3) having a limited number of tracks, 4) comprising
only static cameras, and 5) lacking standard benchmarks, which hinder the
investigation and comparison of cross-view tracking methods. To solve the
aforementioned issues, we introduce DIVOTrack: a new cross-view multi-object
tracking dataset for DIVerse Open scenes with densely tracked pedestrians in
realistic and non-experimental environments. Our DIVOTrack has ten distinct
scenarios and 550 cross-view tracks, surpassing all cross-view multi-object
tracking datasets currently available. Furthermore, we provide a novel baseline
cross-view tracking method with a unified joint detection and cross-view
tracking framework named CrossMOT, which learns object detection, single-view
association, and cross-view matching with an all-in-one embedding model.
Finally, we present a summary of current methodologies and a set of standard
benchmarks with our DIVOTrack to provide a fair comparison and conduct a
comprehensive analysis of current approaches and our proposed CrossMOT. The
dataset and code are available at https://github.com/shengyuhao/DIVOTrack.
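With a shared embedding space of the kind CrossMOT's all-in-one model learns, cross-view matching at inference time reduces to associating detections across cameras by embedding similarity; a greedy sketch under that assumption (the threshold and the greedy strategy are illustrative, not CrossMOT's actual matching):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cross_view_match(emb_a, emb_b, threshold=0.7):
    """Greedily match detection embeddings from two camera views.
    emb_a, emb_b: {track_id: embedding}. Highest-similarity pairs are
    matched first; pairs below the (hypothetical) threshold are dropped."""
    pairs = sorted(((cosine(ua, ub), i, j)
                    for i, ua in emb_a.items()
                    for j, ub in emb_b.items()), reverse=True)
    matched_a, matched_b, matches = set(), set(), []
    for s, i, j in pairs:
        if s < threshold:
            break
        if i not in matched_a and j not in matched_b:
            matches.append((i, j))
            matched_a.add(i)
            matched_b.add(j)
    return matches
```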
A New Approach of Waveform Interpretation Applied in Nondestructive Testing of Defects in Rock Bolts Based on Mode Identification
Due to the dispersive characteristics of guided waves, the waveforms recorded in ultrasonic nondestructive testing (NDT) of rock bolts are complicated to interpret. With the goal of increasing the inspection sensitivity and accuracy in NDT of rock bolts, an approach to waveform interpretation based on wave mode identification is developed. Numerical simulation of a full rock bolt and of rock bolts with grout defects by the Finite Element Method (FEM) is applied to illustrate the approach; it is found that sensitive and low-attenuation wave modes exist. Laboratory NDT tests on a full rock bolt and a rock bolt with grout loss are conducted to evaluate the efficacy of the waveform interpretation approach. In addition, a parametric study was conducted on rock bolt models with different sectional defect sizes. Based on the waveform interpretation, a mode-based reflection coefficient R is proposed to evaluate the sensitivity of wave modes to the sectional area of the defect. It is found that the sensitivity of the wave mode does not change with the defect sectional area, while the amplitude depends on the size of the defect.
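A mode-based reflection coefficient of this kind can be illustrated as a ratio of windowed wave-packet amplitudes; the windowing scheme below is an assumption for illustration, not the paper's exact formulation:

```python
def peak_amplitude(signal, start, end):
    """Peak absolute amplitude of a wave packet within samples [start, end)."""
    return max(abs(x) for x in signal[start:end])

def reflection_coefficient(signal, incident_win, reflected_win):
    """Mode-based reflection coefficient R = A_reflected / A_incident,
    with each amplitude read from a time window containing that wave
    packet. Window bounds would come from the identified mode's group
    velocity and the travel distance to the defect."""
    a_inc = peak_amplitude(signal, *incident_win)
    a_ref = peak_amplitude(signal, *reflected_win)
    return a_ref / a_inc
```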
Bandwidth-Hard Functions: Reductions and Lower Bounds
Memory Hard Functions (MHFs) have been proposed as an answer to the growing inequality between the computational speed of general purpose CPUs and Application Specific Integrated Circuits (ASICs).
MHFs have seen widespread applications including password hashing, key stretching and proofs of work.
Several metrics have been proposed to quantify the "memory hardness" of a function. Cumulative memory complexity (CMC) (Alwen and Serbinenko, STOC 2015) (or amortized Area-Time complexity (Alwen et al., CCS 2017)) attempts to quantify the cost to acquire/build the hardware to evaluate the function --- after normalizing the time it takes to evaluate the function repeatedly at a given rate.
By contrast, bandwidth hardness (Ren and Devadas, TCC 2017) attempts to quantify the energy costs of evaluating this function --- which in turn is largely dominated by the number of cache misses.
Ideally, a good MHF would be both bandwidth hard and have high cumulative memory complexity.
While the cumulative memory complexity of leading MHF candidates is well understood, little is known about the bandwidth hardness of many prominent MHF candidates.
Our contributions are as follows:
First, we provide the first reduction proving that, in the parallel random oracle model, the bandwidth hardness of a Data-Independent Memory Hard Function (iMHF) is described by the red-blue pebbling cost of the directed acyclic graph (DAG) associated with that iMHF.
Second, we show that the goals of designing an MHF with high CMC/bandwidth hardness are well aligned. In particular, we prove that any function (data-independent or not) with high CMC also has relatively high bandwidth costs.
Third, we analyze the bandwidth hardness of several prominent iMHF candidates, such as Argon2i (Biryukov et al., 2015), winner of the Password Hashing Competition, and aATSample and DRSample (Alwen et al., CCS 2017) --- the first practical iMHFs with essentially asymptotically optimal CMC. We prove that, in the parallel random oracle model, each of these iMHFs is maximally bandwidth hard.
Fourth, we analyze the bandwidth hardness of a prominent dMHF called Scrypt. We prove the first unconditional tight lower bound on the energy cost of Scrypt in the parallel random oracle model.
Finally, we show that the problem of finding the minimum cost red-blue pebbling of a directed acyclic graph is NP-hard.
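The energy-cost intuition, that cache misses dominate, can be illustrated by counting misses for a fixed sequential evaluation of a DAG under an LRU cache; this is a simple upper-bound strategy for one evaluation order, not an optimal red-blue pebbling:

```python
from collections import OrderedDict

def cache_misses(order, parents, cache_size):
    """Count cache misses (blue moves) when evaluating a DAG's nodes in
    topological `order` with an LRU cache holding `cache_size` labels;
    each miss models an expensive transfer from slow memory."""
    cache = OrderedDict()
    misses = 0
    for v in order:
        for p in parents.get(v, ()):   # each parent's label must be cached
            if p in cache:
                cache.move_to_end(p)   # refresh LRU position on a hit
            else:
                misses += 1            # fetch p from slow memory
                cache[p] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict least recently used
        cache[v] = True                # store v's freshly computed label
        cache.move_to_end(v)
        if len(cache) > cache_size:
            cache.popitem(last=False)
    return misses
```

A chain fits entirely in cache and incurs no misses, while a long-range edge back to an evicted label forces a fetch; graphs that force many such fetches for every evaluation strategy are the bandwidth-hard ones.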
SemProtector: A Unified Framework for Semantic Protection in Deep Learning-based Semantic Communication Systems
Recently proliferated semantic communications (SC) aim at effectively
transmitting the semantics conveyed by the source and accurately interpreting
the meaning at the destination. While such a paradigm holds the promise of
making wireless communications more intelligent, it also suffers from severe
semantic security issues, such as eavesdropping, privacy leakage, and spoofing,
due to the open nature of wireless channels and the fragility of neural
modules. Previous works focus more on the robustness of SC via offline
adversarial training of the whole system, while online semantic protection, a
more practical setting in the real world, is still largely under-explored. To
this end, we present SemProtector, a unified framework that aims to secure an
online SC system with three hot-pluggable semantic protection modules.
Specifically, these protection modules are able to encrypt semantics to be
transmitted by an encryption method, mitigate privacy risks from wireless
channels by a perturbation mechanism, and calibrate distorted semantics at the
destination by a semantic signature generation method. Our framework enables an
existing online SC system to dynamically assemble the above three pluggable
modules to meet customized semantic protection requirements, facilitating the
practical deployment in real-world SC systems. Experiments on two public
datasets show the effectiveness of our proposed SemProtector, offering some insights into how we reach the goals of secrecy, privacy, and integrity for an SC system. Finally, we discuss some future directions for semantic protection.
Comment: Accepted by Communications Magazine
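The hot-pluggable design can be sketched as a pipeline that optionally composes the three protection modules around an SC channel; the module internals here are placeholders, not the paper's actual encryption, perturbation, or signature mechanisms:

```python
class OnlineSCPipeline:
    """Hot-pluggable semantic protection around an online SC system
    (a sketch of the SemProtector idea; each module is any callable)."""

    def __init__(self, encrypt=None, perturb=None, calibrate=None):
        self.encrypt = encrypt      # protects semantics before transmission
        self.perturb = perturb      # masks privacy-sensitive features on the channel
        self.calibrate = calibrate  # repairs distorted semantics at the receiver

    def transmit(self, semantics, channel):
        """Run semantics through whichever protection modules are plugged in."""
        x = semantics
        if self.encrypt:
            x = self.encrypt(x)
        if self.perturb:
            x = self.perturb(x)
        y = channel(x)              # the (possibly noisy) wireless channel
        if self.calibrate:
            y = self.calibrate(y)
        return y
```

Because each slot is optional, an existing SC system can assemble any subset of the three modules at run time to meet its protection requirements.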