
    A Chain-Based Wireless Sensor Network Model Using the Douglas-Peucker Algorithm in the IoT Environment

    WSNs, which are a major component of the IoT, are mainly composed of interconnected intelligent wireless sensors. These sensors sense, monitor, and gather data from their surroundings and then deliver the data to users or to remotely connected IoT devices. One of the main issues in WSNs is that sensor nodes are generally battery-powered, and because of the rugged environments in which they are deployed, it is difficult to replenish their energy. Another issue is unbalanced energy consumption among sensor nodes caused by the uneven distribution of sensors. For these reasons, nodes may die from energy exhaustion and network performance may decrease rapidly. Hence, designing efficient algorithms that prolong the network lifetime of WSNs is an important challenge. In this paper, a chain-based wireless sensor network model is proposed to improve network performance with balanced energy consumption by solving the long-distance communication problem. The proposed algorithm consists of three phases: segmentation, chain formation, and data collection. In the segmentation phase, an optimal distance tolerance is determined, and the network field is then divided into small sub-regions according to its value. Chain formation starts from the sub-region farthest from the sink and is then extended; sensed data are collected along a chain and transmitted to the sink. Simulations were performed with the OMNeT++ simulator to compare the proposed algorithm with PEGASIS and Enhanced PEGASIS. The results show that the proposed algorithm prolongs the network lifetime by achieving more balanced energy consumption than PEGASIS and Enhanced PEGASIS. The proposed algorithm can be used in a variety of applications to improve the network performance of WSNs.
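
    The segmentation phase hinges on a distance tolerance in the spirit of the Douglas-Peucker line-simplification algorithm named in the title. As background only, the following is a minimal Python sketch of the classic Douglas-Peucker procedure; the node coordinates and tolerance value are hypothetical, and this is not the authors' segmentation method.

```python
# Classic Douglas-Peucker line simplification (generic background sketch).
import math

def perpendicular_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Parallelogram area divided by the base length.
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Recursively drop points closer than `tolerance` to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] > tolerance:
        left = douglas_peucker(points[:i_max + 1], tolerance)
        right = douglas_peucker(points[i_max:], tolerance)
        return left[:-1] + right          # avoid duplicating the split point
    return [points[0], points[-1]]        # everything in between is dropped

# Hypothetical sensor-node positions and distance tolerance.
nodes = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1)]
print(douglas_peucker(nodes, tolerance=1.0))
```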

    Whom Do You Want to Be Friends With: An Extroverted or an Introverted Avatar? Impacts of the Uncanny Valley Effect and Conversational Cues

    With the rapid growth of social virtual reality platforms, an increasing number of people will be interacting with others as avatars in virtual environments. It is therefore essential to develop a better understanding of the factors that could affect initial personality assessments and how those assessments affect people's willingness to befriend one another. Thin-slice judgment constitutes a quick judgment of a personality based on an avatar, and it can be affected by the avatar’s appearance, particularly if the avatar elicits an uncanny valley effect that brings about negative emotions such as eeriness. However, personality judgments and friendship decisions could also be influenced by social cues, such as conversational style. This experimental study investigated how these factors affect the willingness to make friends with others in a virtual world. Drawing upon the uncanny valley effect and thin-slice judgment, this study examined how different levels of realism and different conversational cues influence trustworthiness, likeability, and the willingness to be a friend. Furthermore, the current study sheds light on the interaction effects of realism and conversational cues on the dependent variables; in other words, it investigated how the willingness to be a friend is influenced under thin-slice judgment when personality judgments result from the negative feeling (i.e., eeriness) of the uncanny valley effect and social cues conflict. To this end, a 2 (realism: cartoonish vs. hyper-realistic) x 2 (conversational cues: extroverted vs. introverted) between-subjects online experiment was conducted. The results showed that trustworthiness and likeability significantly affected the willingness to be a friend. Furthermore, realism and conversational cues marginally affected the willingness to be a friend. Keywords: uncanny valley effect, thin-slice judgment, avatar, personality judgment, willingness to be a friend.
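
    The 2 x 2 between-subjects design described above lends itself to a two-way ANOVA on each dependent variable. The sketch below shows, with placeholder random data and assumed column names (not the study's materials or results), how the main and interaction effects of realism and conversational cues on willingness to be a friend could be tested in Python.

```python
# Two-way ANOVA sketch for a 2 (realism) x 2 (conversational cues)
# between-subjects design. Data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "realism": rng.choice(["cartoonish", "hyper-realistic"], n),
    "cues": rng.choice(["extroverted", "introverted"], n),
    "willingness": rng.normal(4.0, 1.0, n),   # e.g., a 1-7 composite score
})

# Main effects of realism and cues plus their interaction.
model = smf.ols("willingness ~ C(realism) * C(cues)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```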

    How do People Process and Share Fake News on Social Media?: In the context of Dual-Process of Credibility with Partisanship, Cognitive Appraisal to Threat, and Corrective Action

    The objective of this study is to examine how news users process information on social media in the context of the spread of fake news. The study sheds light on how fake news spreads on social media through the effects of two moderators (i.e., partisanship and source credibility) on the path from political attitude consistency to message credibility, and through a mediator (i.e., cognitive appraisal of threat) on the path from message credibility to the intent to share fake news on social media and to corrective action. As a theoretical lens, this paper adopts dual-process theories. To this end, a 2 (news topic: immigration vs. gun control) x 2 (news topic stance: positive vs. negative) x 2 (source: major (i.e., Associated Press) vs. minor (i.e., blog news)) between-subjects online experiment with 507 participants was conducted for both the immigration and gun control topics. Regarding the moderation effects, partisanship was significant for both the immigration and gun control news, whereas source credibility was significant only for the immigration news. In addition, the mediation effect of cognitive appraisal of threat was significant between message credibility and the intent to share fake news on social media for both news topics. Lastly, although message credibility and corrective action were expected to be negatively associated, they were positively correlated.
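
    The mediation path described above (message credibility -> cognitive appraisal of threat -> intent to share) can be illustrated with a simple percentile-bootstrap test of the indirect effect. The sketch below uses placeholder data and hypothetical variable names; it is a generic illustration, not the authors' analysis.

```python
# Bootstrap test of an indirect (mediation) effect. Data are simulated
# placeholders; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 507
credibility = rng.normal(4.0, 1.0, n)
threat = 0.5 * credibility + rng.normal(0.0, 1.0, n)
share = 0.4 * threat + rng.normal(0.0, 1.0, n)
df = pd.DataFrame({"credibility": credibility,
                   "threat_appraisal": threat,
                   "share_intent": share})

def indirect_effect(data):
    # a-path: credibility -> threat appraisal; b-path: threat appraisal -> sharing.
    a = smf.ols("threat_appraisal ~ credibility",
                data=data).fit().params["credibility"]
    b = smf.ols("share_intent ~ threat_appraisal + credibility",
                data=data).fit().params["threat_appraisal"]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = [indirect_effect(df.sample(frac=1, replace=True, random_state=i))
        for i in range(1000)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, "
      f"95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```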

    Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles

    We study contextual linear bandit problems under feature uncertainty, where features are noisy and may have missing entries. To address the challenges posed by the noise, we analyze Bayesian oracles given the observed noisy features. Our Bayesian analysis finds that the optimal hypothesis can be far from the underlying realizability function, depending on the noise characteristics, which is highly non-intuitive and does not occur in classical noiseless setups. This implies that classical approaches cannot guarantee a non-trivial regret bound. We therefore propose an algorithm that aims at the Bayesian oracle from the observed information under this model, achieving an $\tilde{O}(d\sqrt{T})$ regret bound with respect to the feature dimension $d$ and time horizon $T$. We demonstrate the proposed algorithm on synthetic and real-world datasets.
    Comment: 30 pages.
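
    The abstract does not spell out the oracle-aware algorithm, so the following is only a background sketch of a standard ridge-regression-based LinUCB contextual linear bandit run on noisy features; it does not implement the paper's Bayesian-oracle method, and all parameter values are illustrative.

```python
# Generic LinUCB-style contextual linear bandit (background sketch only).
import numpy as np

class LinUCB:
    def __init__(self, d, alpha=1.0, lam=1.0):
        self.alpha = alpha                # exploration strength
        self.A = lam * np.eye(d)          # regularized Gram matrix
        self.b = np.zeros(d)

    def choose(self, contexts):
        """contexts: (n_arms, d) array of (possibly noisy) features."""
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        ucb = contexts @ theta + self.alpha * np.sqrt(
            np.einsum("ij,jk,ik->i", contexts, A_inv, contexts))
        return int(np.argmax(ucb))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# Tiny synthetic run with d = 5 features and Gaussian feature noise.
rng = np.random.default_rng(0)
d, theta_star = 5, rng.normal(size=5)
agent = LinUCB(d)
for t in range(1000):
    clean = rng.normal(size=(10, d))
    noisy = clean + 0.5 * rng.normal(size=(10, d))   # observed features
    arm = agent.choose(noisy)
    reward = clean[arm] @ theta_star + 0.1 * rng.normal()
    agent.update(noisy[arm], reward)
```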

    FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization

    Post-training quantization (PTQ) has been gaining popularity for the deployment of deep neural networks on resource-limited devices because, unlike quantization-aware training, it requires neither a full training dataset nor end-to-end training. As PTQ schemes based on reconstructing each layer or block output have proven effective at enhancing quantized model performance, recent works have developed algorithms to devise and learn a new weight-rounding scheme so as to better reconstruct each layer or block output. In this work, we propose a simple yet effective new weight-rounding mechanism for PTQ, coined FlexRound, based on element-wise division instead of the typical element-wise addition, such that FlexRound enables jointly learning a common quantization grid size as well as a different scale for each pre-trained weight. Thanks to the reciprocal rule of derivatives induced by element-wise division, FlexRound is inherently able to exploit pre-trained weights when updating their corresponding scales, and can thus flexibly quantize pre-trained weights depending on their magnitudes. We empirically validate the efficacy of FlexRound on a wide range of models and tasks. To the best of our knowledge, our work is the first to carry out comprehensive experiments not only on image classification and natural language understanding but also on natural language generation, assuming a per-tensor uniform PTQ setting. Moreover, we demonstrate, for the first time, that large language models can be efficiently quantized, with only a negligible impact on performance compared to half-precision baselines, achieved by reconstructing the output in a block-by-block manner.
    Comment: Accepted to ICML 2023.
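
    The following is a conceptual PyTorch sketch, under assumptions, of the rounding idea the abstract describes: weights are divided element-wise by the product of a learnable common grid size and a learnable per-weight scale before rounding, with a straight-through estimator so both can be learned from a reconstruction loss. It is not the authors' implementation, and the shapes, initializations, and loss are illustrative only.

```python
# Division-based learnable rounding (conceptual sketch, not the authors' code).
import torch

class DivisionRound(torch.nn.Module):
    def __init__(self, weight, grid_size=0.01):
        super().__init__()
        self.weight = weight                                    # frozen pre-trained weight
        self.log_s = torch.nn.Parameter(torch.tensor(float(grid_size)).log())
        self.S = torch.nn.Parameter(torch.ones_like(weight))    # per-weight scale

    def forward(self):
        s = self.log_s.exp()                      # common quantization grid size
        z = self.weight / (s * self.S)            # element-wise division
        z_rounded = z + (z.round() - z).detach()  # straight-through estimator
        return s * z_rounded                      # reconstructed (quantized) weight

w = torch.randn(64, 64)
q = DivisionRound(w)
loss = ((q() - w) ** 2).mean()   # stand-in for a block-output reconstruction loss
loss.backward()                  # gradients flow to log_s and S
```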