Low Complexity Scheduling and Coding for Wireless Networks
The advent of wireless communication technologies has created a paradigm shift in the accessibility of communication. With it has come an increased demand for throughput, a trend that is likely to continue in the future. A key aspect of these challenges is to develop low complexity algorithms and architectures that can take advantage of features of the wireless medium such as broadcasting and physical layer cooperation. In this thesis, we consider several problems in the domain of low complexity coding, relaying and scheduling for wireless networks. We formulate the Pliable Index Coding problem, which models a server trying to send one or more new messages over a noiseless broadcast channel to a set of clients that already have a subset of the messages as side information. We show through theoretical bounds and algorithms that it is possible to design short codes, poly-logarithmic in the number of clients, to solve this problem. These code lengths are exponentially better than those possible in a traditional index coding setup. Next, we consider several aspects of low complexity relaying in half-duplex diamond networks. In such networks, the source transmits information to the destination through half-duplex intermediate relays arranged in a single layer. The half-duplex nature of the relays implies that each relay can be in either a listening or a transmitting state at any point in time. To achieve high rates, there is the additional complexity of optimizing the schedule (i.e., the relative time fractions) of the relaying states, which can be 2^N in number for N relays. Using approximate capacity expressions derived from the quantize-map-forward scheme for physical layer cooperation, we show that for networks with N relays, the optimal schedule has at most N+1 active states. This is an exponential improvement over the 2^N possible active states in a schedule.
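The pliable index coding requirement can be illustrated with a toy brute-force search (illustrative only: the thesis obtains poly-logarithmic code lengths via bounds and algorithms, not exhaustive search, and the function names here are hypothetical). A client with side information S can recover a new message from an XOR combination whose support contains exactly one index outside S:

```python
from itertools import combinations

def satisfies_all(code, clients):
    """Check whether every client can decode at least one NEW message.

    code: list of codewords, each a frozenset of message indices XOR-ed
          together into one broadcast symbol.
    clients: list of side-information sets (indices a client already has).
    A client with side information S decodes a new message from codeword c
    when exactly one index of c lies outside S (sufficient condition).
    """
    for side_info in clients:
        if not any(len(c - side_info) == 1 for c in code):
            return False
    return True

def shortest_code(m, clients, max_len=3):
    """Brute-force the shortest XOR code (exponential; for illustration)."""
    all_combos = [frozenset(c) for r in range(1, m + 1)
                  for c in combinations(range(m), r)]
    for length in range(1, max_len + 1):
        for code in combinations(all_combos, length):
            if satisfies_all(code, clients):
                return [set(c) for c in code]
    return None

# Three messages; each client already holds a different single message.
clients = [frozenset({0}), frozenset({1}), frozenset({2})]
print(shortest_code(3, clients))  # a 2-transmission code suffices here
```

Note that unlike classical index coding, the server may satisfy each client with *any* message it is missing, which is what makes exponentially shorter codes possible.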
We also show that it is possible to achieve at least half the approximate capacity of such networks by employing simple routing strategies that use only two relays and two scheduling states. These results imply that the complexity of relaying in half-duplex diamond networks can be significantly reduced, by using fewer scheduling states or fewer relays, without adversely affecting throughput. Both results assume centralized processing of the channel state information of all the relays. We take the first steps in analyzing the performance of relaying schemes where each relay switches between the listening and transmitting states randomly and optimizes the relative time fractions using only local channel state information. We show that even with such simple scheduling, we can achieve a significant fraction of the capacity of the network. Next, we look at the dual problem of selecting the subset of relays of a given size that has the highest capacity in a general layered full-duplex relay network. We formulate this as an optimization problem and derive efficient approximation algorithms to solve it. We end the thesis with the design and implementation of a practical relaying scheme called QUILT, in which the relay opportunistically decodes or quantizes its received signal and transmits the resulting sequence in cooperation with the source. To keep the complexity of the system low, we use LDPC codes at the source, interleaving at the relays and belief propagation decoding at the destination. We evaluate our system through testbed experiments over WiFi.
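The two-relay, two-state routing idea above can be sketched numerically (a minimal sketch with made-up point-to-point link rates; the thesis works with approximate capacity expressions from quantize-map-forward, whereas this uses simple routed flow). The two complementary states let one relay listen while the other transmits, so the relays' air time is fully used:

```python
def two_relay_routing_rate(a1, b1, a2, b2, steps=10_000):
    """Best routed rate with two half-duplex relays and two schedule states.

    State A (fraction t): relay 1 listens, relay 2 transmits.
    State B (fraction 1-t): relay 1 transmits, relay 2 listens.
    a_i = source->relay_i link rate, b_i = relay_i->destination link rate.
    A relay can only forward what it has received, so its throughput is
    min(listen-time * a_i, transmit-time * b_i); grid-search the fraction t.
    """
    best = 0.0
    for k in range(steps + 1):
        t = k / steps
        rate = min(t * a1, (1 - t) * b1) + min((1 - t) * a2, t * b2)
        best = max(best, rate)
    return best

# Symmetric example: a single half-duplex relay with a = b = 2 routes at most
# a*b/(a+b) = 1.0; alternating two such relays in complementary states
# doubles that, since the channel is never idle.
print(two_relay_routing_rate(2.0, 2.0, 2.0, 2.0))  # -> 2.0
```

The grid search stands in for the one-dimensional optimization over the single free schedule fraction t, which is all that remains once the schedule is restricted to two states.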
Breaking BERT: Evaluating and Optimizing Sparsified Attention
Transformers allow attention between all pairs of tokens, but there is reason
to believe that most of these connections, and their quadratic time and memory
cost, may not be necessary. But which ones? We evaluate the impact of
sparsification patterns with a series of ablation experiments. First, we
compare masks based on syntax, lexical similarity, and token position to random
connections, and measure which patterns reduce performance the least. We find
that on three common finetuning tasks even using attention that is at least 78%
sparse can have little effect on performance if applied at later transformer
layers, but that applying sparsity throughout the network reduces performance
significantly. Second, we vary the degree of sparsity for three patterns
supported by previous work, and find that connections to neighbouring tokens
are the most significant. Finally, we treat sparsity as an optimizable
parameter, and present an algorithm to learn degrees of neighboring connections
that gives a fine-grained control over the accuracy-sparsity trade-off while
approaching the performance of existing methods.

Comment: Shorter version accepted to the SNN2021 workshop.
Small but Mighty: New Benchmarks for Split and Rephrase
Split and Rephrase is a text simplification task of rewriting a complex
sentence into simpler ones. As a relatively new task, it is paramount to ensure
the soundness of its evaluation benchmark and metric. We find that the widely
used benchmark dataset universally contains easily exploitable syntactic cues
caused by its automatic generation process. Taking advantage of such cues, we
show that even a simple rule-based model can perform on par with the
state-of-the-art model. To remedy such limitations, we collect and release two
crowdsourced benchmark datasets. We not only make sure that they contain
significantly more diverse syntax, but also carefully control for their quality
according to a well-defined set of criteria. While no satisfactory automatic
metric exists, we apply fine-grained manual evaluation based on these criteria
using crowdsourcing, showing that our datasets better represent the task and
are significantly more challenging for the models.

Comment: In EMNLP 2020.
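The kind of exploitable surface cue described above can be illustrated with a toy rule-based splitter (purely illustrative: the actual cues in the benchmark are artefacts of its automatic generation, not necessarily conjunctions, and this function is hypothetical):

```python
def naive_split(sentence):
    """Toy Split-and-Rephrase rule: break a compound sentence at ' and '.

    A rule this crude can exploit systematic surface patterns in an
    automatically generated benchmark, which is why such cues make the
    benchmark too easy for rule-based baselines.
    """
    sentence = sentence.rstrip('.')
    if ' and ' not in sentence:
        return [sentence + '.']
    first, rest = sentence.split(' and ', 1)
    subject = first.split()[0]           # naively reuse first word as subject
    if rest.split()[0][0].islower():     # verb phrase: prepend the subject
        rest = subject + ' ' + rest
    return [first + '.', rest + '.']

print(naive_split('Alice wrote the paper and presented it at the workshop.'))
# -> ['Alice wrote the paper.', 'Alice presented it at the workshop.']
```

A benchmark with genuinely diverse syntax defeats such heuristics, which is the motivation for the crowdsourced datasets the abstract describes.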
Assessment of Construct Validity of Mishra and Mishra’s Trust Scale in the Context of Merger and Acquisition in India
The role of trust in mergers and acquisitions has increasingly been recognized by scholars and practitioners. However, empirical research in this area has been hampered by differing conceptualizations of the trust construct, inadequate dimensions, and the lack of a validated trust scale. This paper addresses these limitations by theoretically and empirically validating Mishra and Mishra's (1994) trust scale in the context of merger and acquisition in India. The scale uses four trust dimensions, namely openness, competency, caring and reliability. The items of the scale proved content valid among 25 subject matter experts. Additionally, in a sample of 100 key employees of acquired firms, the scale exhibited adequate levels of reliability, convergent validity, discriminant validity and nomological validity.
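Scale reliability of the kind reported above is commonly assessed with Cronbach's alpha. A minimal sketch follows, using made-up Likert responses rather than the study's data (the abstract does not state which reliability coefficient was used, so this is an assumption):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of k lists, each holding the n respondents' scores on one
    item. alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 5-point Likert responses from six respondents on three
# 'openness' items (invented numbers, not the study's data).
openness = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(openness), 3))  # -> 0.871
```

Values above roughly 0.7 are conventionally read as adequate internal-consistency reliability for a scale dimension.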