    Application-aware Cognitive Multi-hop Wireless Networking Testbed and Experiments

    In this thesis, we present a new architecture for application-aware cognitive multihop wireless networks (AC-MWN), along with testbed implementations and experiments. Cognitive radio is a technique for using spectrum adaptively, so that the resource can be used more efficiently at low cost. Multihop wireless networks can be deployed quickly and flexibly without fixed infrastructure. Within the presented architecture, we study backbone routing schemes with network cognition, a routing scheme with network coding, and spectrum adaptation. A testbed is implemented to evaluate these schemes for AC-MWN. In addition to basic measurements, we implement a video streaming application based on the AC-MWN architecture using cognitive radios. The testbed consists of three cognitive radios and three Linux laptops equipped with GNU Radio and GStreamer, an open-source software-defined radio toolkit and a multimedia framework, respectively. The resulting experiments range from basic half-duplex data transfer to full-duplex voice communication and audio/video streaming with spectrum sensing. This testbed is a foundation for a scalable, multipurpose testbed that can be used to test networks such as AC-MWN, ad hoc networks, MANETs, VANETs, and wireless sensor networks. Experimental results demonstrate that AC-MWN is applicable and valuable for future low-cost, flexible communication networks. Adviser: Yi Qia

    Heuristics of human enhancement risk: a little chemical help?


    Coronal Heating as Determined by the Solar Flare Frequency Distribution Obtained by Aggregating Case Studies

    Flare frequency distributions represent a key approach to addressing one of the largest problems in solar and stellar physics: determining the mechanism that counter-intuitively heats coronae to temperatures that are orders of magnitude hotter than the corresponding photospheres. It is widely accepted that the magnetic field is responsible for the heating, but there are two competing mechanisms that could explain it: nanoflares or Alfvén waves. To date, neither can be directly observed. Nanoflares are, by definition, extremely small, but their aggregate energy release could represent a substantial heating mechanism, presuming they are sufficiently abundant. One way to test this presumption is via the flare frequency distribution, which describes how often flares of various energies occur. If the slope of the power law fitting the flare frequency distribution is above a critical threshold, α = 2 as established in prior literature, then there should be a sufficient abundance of nanoflares to explain coronal heating. We performed >600 case studies of solar flares, made possible by an unprecedented number of data analysts via three semesters of an undergraduate physics laboratory course. This allowed us to include two crucial, but nontrivial, analysis methods: pre-flare baseline subtraction and computation of the flare energy, which requires determining flare start and stop times. We aggregated the results of these analyses into a statistical study to determine that α = 1.63 ± 0.03. This is below the critical threshold, suggesting that Alfvén waves are an important driver of coronal heating. Comment: 1,002 authors, 14 pages, 4 figures, 3 tables, published by The Astrophysical Journal on 2023-05-09, volume 948, page 7
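    The α = 2 criterion can be illustrated with a short sketch: fitting a power-law slope to flare energies with the standard maximum-likelihood estimator α̂ = 1 + n / Σ ln(E_i/E_min). The synthetic data, seed, and function names below are illustrative assumptions, not the paper's analysis pipeline.

    ```python
    import math
    import random

    def estimate_alpha(energies, e_min):
        # Maximum-likelihood power-law slope for p(E) ∝ E^(-alpha), E >= e_min.
        logs = [math.log(e / e_min) for e in energies if e >= e_min]
        return 1.0 + len(logs) / sum(logs)

    # Draw synthetic flare energies from a power law with alpha = 1.63
    # via inverse-transform sampling: E = E_min * (1 - u)^(-1/(alpha - 1)).
    random.seed(0)
    alpha_true, e_min = 1.63, 1.0
    sample = [e_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
              for _ in range(50_000)]

    alpha_hat = estimate_alpha(sample, e_min)
    # alpha_hat recovers a value near 1.63, below the critical slope of 2
    ```

    With ~50,000 synthetic events the estimator pins down the slope to within a few hundredths, which mirrors how aggregating many independent case studies tightens the uncertainty on α.
    
    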

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-benc
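    The finding that calibration improves with scale can be made concrete with a minimal sketch of expected calibration error (ECE), a common calibration metric: predictions are binned by stated confidence, and the metric is the weighted gap between each bin's accuracy and its mean confidence. This implementation and its toy inputs are illustrative assumptions, not BIG-bench's evaluation code.

    ```python
    def expected_calibration_error(confidences, correct, n_bins=10):
        # Bin predictions by confidence, then take the weighted mean of
        # |accuracy - mean confidence| over the non-empty bins.
        bins = [[] for _ in range(n_bins)]
        for conf, ok in zip(confidences, correct):
            bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
        n = len(confidences)
        ece = 0.0
        for b in bins:
            if b:
                mean_conf = sum(c for c, _ in b) / len(b)
                acc = sum(ok for _, ok in b) / len(b)
                ece += (len(b) / n) * abs(acc - mean_conf)
        return ece

    # A model that claims 90% confidence and is right 9 times out of 10 is
    # well calibrated (ECE near 0); one that claims 100% confidence but is
    # right half the time has ECE = 0.5.
    well = expected_calibration_error([0.9] * 10, [1] * 9 + [0])
    over = expected_calibration_error([1.0] * 10, [1] * 5 + [0] * 5)
    ```

    "Calibration improves with scale" then means that larger models' stated confidences track their empirical accuracy more closely, driving a metric like this toward zero.
    
    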