
    An Overview on IEEE 802.11bf: WLAN Sensing

    With recent advancements, wireless local area network (WLAN), or wireless fidelity (Wi-Fi), technology has been successfully utilized to realize sensing functionalities such as detection, localization, and recognition. However, WLAN standards were developed mainly for communication, and thus may not meet the stringent requirements of emerging sensing applications. To resolve this issue, a new Task Group (TG), IEEE 802.11bf, has been established by the IEEE 802.11 working group, with the objective of creating an amendment to the WLAN standard that meets advanced sensing requirements while minimizing the effect on communications. This paper provides a comprehensive overview of the up-to-date efforts in the IEEE 802.11bf TG. First, we introduce the definition of the 802.11bf amendment and its formation and standardization timeline. Next, we discuss WLAN sensing use cases with the corresponding key performance indicator (KPI) requirements. After reviewing previous WLAN sensing research based on communication-oriented WLAN standards, we identify their limitations and underscore the practical need for the new sensing-oriented amendment in 802.11bf. Furthermore, we discuss the WLAN sensing framework and procedure used for measurement acquisition, considering both sensing at sub-7 GHz and directional multi-gigabit (DMG) sensing at 60 GHz, and address their shared features, similarities, and differences. In addition, we present various candidate technical features for IEEE 802.11bf, including waveform/sequence design, feedback types, and quantization and compression techniques. We also describe the methodologies and channel modeling used by the IEEE 802.11bf TG for evaluation. Finally, we discuss challenges and future research directions in detail to motivate further research in this field.
    Comment: 31 pages, 25 figures, this is a significantly updated version of arXiv:2207.0485
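The quantization and compression of sensing feedback mentioned in the abstract can be illustrated with a simple uniform scalar quantizer. This is a generic sketch, not the 802.11bf-specified scheme; the function name, bit width, and value range are illustrative assumptions:

```python
def quantize_csi(csi, bits=4, lo=-1.0, hi=1.0):
    """Uniform scalar quantization of channel state information (CSI)
    samples: only the small integer index would be fed back, trading
    sensing accuracy for reduced feedback overhead."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    out = []
    for x in csi:
        x = min(max(x, lo), hi)         # clamp to the representable range
        idx = round((x - lo) / step)    # index actually sent as feedback
        out.append(lo + idx * step)     # value reconstructed by the receiver
    return out

csi = [0.03, -0.52, 0.88, 1.20]
print(quantize_csi(csi))
```

With 4 bits, each sample is mapped to one of 16 levels, so feedback shrinks from a float per sample to half a byte per sample.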

    S-shaped transition trajectory and dynamic development frontier of the financial systemic risk research: a multiple networks analysis

    Financial systemic risk affects the real economy and may trigger a chain reaction throughout the economic system, leading to financial crisis. Many scholars study financial systemic risk, but few do so through bibliometric analysis. Therefore, this paper explores the status quo, emerging trends, and transition trajectory of the research field from 1990 to 2020 using bibliometric analysis. Based on this analysis, we draw the following conclusions: (1) We present the most productive countries, institutions, journals, and authors, the status quo, and the change of hotspots in this research field. (2) The emerging trends in this research field over the last three years are ‘credit risk’, ‘capital shortfall’, ‘spill-over’, ‘spread’, ‘financial market’, ‘interconnectedness’, and ‘transmission’. (3) The research field of financial systemic risk presents an S-shaped transition trajectory through the local forward, local backward, global standard, and global key-route main path analyses. (4) The most cited authors are not always at the core of the trajectory of financial systemic risk research. The emerging trend ‘credit risk’ is also currently a core research direction in this field’s transition trajectory.
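The main path analyses named above all rest on assigning traversal weights to citation links; the standard weight is the search path count (SPC). A minimal sketch of SPC on a small hypothetical citation DAG (node names and edges invented for illustration):

```python
from collections import defaultdict
from functools import lru_cache

def spc_weights(edges):
    """Search path count (SPC): for each citation edge u -> v, the number
    of source-to-sink paths passing through it. These counts are the
    traversal weights used by local and global main path searches."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)

    @lru_cache(maxsize=None)
    def n_in(n):   # number of paths reaching n from any source
        return sum(n_in(p) for p in pred[n]) if pred[n] else 1

    @lru_cache(maxsize=None)
    def n_out(n):  # number of paths from n to any sink
        return sum(n_out(s) for s in succ[n]) if succ[n] else 1

    return {(u, v): n_in(u) * n_out(v) for u, v in edges}

# hypothetical citation DAG: papers A..F, edge = "knowledge flows to"
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("C", "E"), ("E", "F")]
print(spc_weights(edges))
```

A key-route main path then greedily follows the highest-SPC edges, which is how a transition trajectory like the S-shape described above is traced.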

    An Initial Framework Assessing the Safety of Complex Systems

    Work presented at the Conference on Complex Systems, held online 7-11 December 2020. Atmospheric blocking events, that is, large-scale nearly stationary atmospheric pressure patterns, are often associated with extreme weather in the mid-latitudes, such as heat waves and cold spells, which have significant consequences for ecosystems, human health, and the economy. The high impact of blocking events has motivated numerous studies. However, there is not yet a comprehensive theory explaining their onset, maintenance, and decay, and their numerical prediction remains a challenge. In recent years, a number of studies have successfully employed complex network descriptions of fluid transport to characterize dynamical patterns in geophysical flows. The aim of the current work is to investigate the potential of so-called Lagrangian flow networks for the detection and perhaps forecasting of atmospheric blocking events. The network is constructed by associating nodes with regions of the atmosphere and establishing links based on the flux of material between these nodes during a given time interval. One can then use effective tools and metrics developed in the context of graph theory to explore atmospheric flow properties. In particular, Ser-Giacomi et al. [1] showed how optimal paths in a Lagrangian flow network highlight distinctive circulation patterns associated with atmospheric blocking events. We extend these results by studying the behavior of selected network measures (such as degree, entropy, and harmonic closeness centrality) at the onset of and during blocking situations, demonstrating their ability to trace the spatio-temporal characteristics of these events. This research was conducted as part of the CAFE (Climate Advanced Forecasting of sub-seasonal Extremes) Innovative Training Network, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie SkƂodowska-Curie grant agreement No. 813844.
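Two of the node measures mentioned above (degree and entropy) can be sketched directly from a flux matrix. This is a generic illustration of the construction, with an invented 3-region matrix, not data from the study:

```python
import math

def network_measures(flux):
    """Given flux[i][j] = amount of material transported from atmospheric
    region i to region j over the time interval, compute each node's
    out-degree (number of destination regions) and the Shannon entropy of
    its outgoing transport distribution."""
    measures = []
    for row in flux:
        total = sum(row)
        degree = sum(1 for f in row if f > 0)
        probs = [f / total for f in row if f > 0]
        entropy = -sum(p * math.log(p) for p in probs)
        measures.append({"degree": degree, "entropy": entropy})
    return measures

# hypothetical 3-region flux matrix
flux = [
    [0.0, 2.0, 2.0],  # region 0 splits material evenly between 1 and 2
    [0.0, 4.0, 0.0],  # region 1 retains everything (blocking-like stagnation)
    [1.0, 1.0, 2.0],
]
for m in network_measures(flux):
    print(m)
```

A nearly stationary blocking pattern would show up as low degree and near-zero entropy, as in region 1 here, since material keeps circulating within the same region.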

    Measuring named data networks

    Spring 2020. Includes bibliographical references. Named Data Networking (NDN) is a promising information-centric networking (ICN) Internet architecture that addresses content directly rather than addressing servers. NDN provides new features, such as content-centric security, stateful forwarding, and in-network caches, to better satisfy the needs of today's applications. After many years of technological research and experimentation, the community has started to explore the deployment path for NDN. One NDN deployment challenge is measurement. Unlike IP, which has a suite of measurement approaches and tools, NDN has only a few. NDN routing and forwarding are based on name prefixes that do not refer to individual endpoints. While rich NDN functionalities facilitate data distribution, they also break traditional end-to-end probing-based measurement methods. In this dissertation, we present our work to investigate NDN measurements and fill some research gaps in the field. The thesis of this dissertation is that we can capture a substantial amount of useful and actionable measurements of NDN networks from end hosts. We start by comparing IP and NDN to propose a conceptual framework for NDN measurements. We claim that NDN can be seen as a superset of IP: it supports functionalities similar to those provided by IP, but has unique features to facilitate data retrieval. The framework helps identify that NDN lacks measurements in various aspects. This dissertation focuses on investigating active measurements from end hosts. We present our studies in two directions to support the thesis statement. We first leverage the similarities to replicate IP approaches in NDN networks. We show the first work to measure the NDN-DPDK forwarder, a high-speed NDN forwarder designed and implemented by the National Institute of Standards and Technology (NIST), in a real testbed.
The results demonstrate that Data payload sizes dominate forwarding performance, and that using every fragment efficiently improves goodput. We then present the first work to replicate packet dispersion techniques in NDN networks. Based on the findings in the NDN-DPDK forwarder benchmark, we devise techniques to measure interarrivals of Data packets. The results show that the techniques successfully estimate the capacity on end hosts when 1 Gbps network cards are used. Our measurements also indicate that the NDN-DPDK forwarder introduces variance in Data packet interarrivals. We identify the potential bottlenecks and the possible causes of the variance. We then address NDN-specific measurements: measuring the caching state in NDN networks from end hosts. We propose a novel method to extract fingerprints for various caching decision mechanisms. Our simulation results demonstrate that the method can detect caching decisions in a few rounds. We also show that the method is not sensitive to cross-traffic and can be deployed on real topologies for caching policy detection.
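The core idea behind packet dispersion is that back-to-back packets leave the bottleneck link spaced by one packet's serialization time, so capacity can be estimated as packet size divided by interarrival time. A minimal sketch with invented sample values (not measurements from the dissertation):

```python
from statistics import median

def capacity_estimate(packet_size_bytes, interarrivals_s):
    """Packet-dispersion capacity estimate: back-to-back packets exit the
    bottleneck spaced by the time needed to serialize one packet, so
    capacity ~= packet_size / interarrival. Taking the median interarrival
    filters out samples perturbed by cross-traffic."""
    dispersion = median(interarrivals_s)
    return packet_size_bytes * 8 / dispersion  # bits per second

# hypothetical interarrivals of 8800-byte Data packets on a 1 Gbps link:
# 8800 bytes * 8 / 1e9 bps = 70.4 microseconds per packet
samples = [70.4e-6, 70.9e-6, 70.4e-6, 120.0e-6, 70.1e-6]
print(capacity_estimate(8800, samples) / 1e9, "Gbps")
```

The 120-microsecond outlier, of the kind that forwarder-induced interarrival variance would produce, is discarded by the median, which is why robust statistics matter for this technique.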

    Physical mechanisms may be as important as brain mechanisms in evolution of speech [Commentary on Ackerman, Hage, & Ziegler. Brain Mechanisms of acoustic communication in humans and nonhuman primates: an evolutionary perspective]

    We present two arguments why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional. Second, we present an example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.

    Understanding Network Dynamics in Flooding Emergencies for Urban Resilience

    Many cities around the world are exposed to extreme flooding events. As a result of rapid population growth and urbanization, cities are also likely to become more vulnerable in the future, and consequently more disruptions will occur in the face of flooding. Resilience, the ability to strongly resist and quickly recover from emergencies, has become an emerging and important goal for cities. Uncovering the mechanisms of flooding emergencies and developing effective tools to sense, communicate, predict, and respond to emergencies is critical to enhancing the resilience of cities. To address this challenge, existing studies have conducted post-disaster surveys, adopted remote sensing technologies, and processed news articles in the aftermath of disasters. Despite the valuable insights in previous literature, technologies for real-time and predictive situational awareness are still missing. This limitation is mainly due to two barriers. First, existing studies only use conventional data sources, which often suppress the temporal resolution of situational information. Second, models and theories that can capture the real-time situation are limited. To bridge these gaps, I employ human digital trace data from multiple sources such as Twitter, Nextdoor, and INTRIX. My study focuses on developing models and theories to expand the capacity of cities for real-time and predictive situational awareness using digital trace data. In the first study, I developed a graph-based method to create networks of information, extract critical messages, and map the evolution of infrastructure disruptions in flooding events from Twitter. My second study proposed and tested an online network reticulation theory to understand how humans communicate and spread situational information on social media in response to service disruptions.
The third study proposed and tested a network percolation-based contagion model to understand how floodwaters spread over urban road networks and the extent to which we can predict flooding in the next few hours. In the last study, I developed an adaptable reinforcement learning model to leverage human trace data from normal situations and simulate traffic conditions during flooding. All proposed methods and theories have significant implications and applications for improving real-time and predictive situational awareness in flooding emergencies.
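A percolation-based contagion over a road network can be sketched as a breadth-first spread that only crosses links whose susceptibility falls below a threshold. This is a generic illustration, with an invented toy graph, not the model fitted in the dissertation:

```python
from collections import deque

def flood_spread(adj, sources, steps, threshold):
    """Network-percolation contagion sketch: flooding spreads one hop per
    time step along road links whose susceptibility weight is below the
    threshold. adj maps node -> list of (neighbor, link_weight)."""
    flooded = set(sources)
    frontier = deque(sources)
    for _ in range(steps):
        next_frontier = deque()
        while frontier:
            u = frontier.popleft()
            for v, w in adj.get(u, []):
                if v not in flooded and w < threshold:
                    flooded.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return flooded

# hypothetical road graph; weights stand in for link susceptibility
adj = {
    "A": [("B", 0.2), ("C", 0.9)],
    "B": [("D", 0.3)],
    "C": [("D", 0.1)],
}
print(flood_spread(adj, ["A"], steps=2, threshold=0.5))
```

Forecasting "the next few hours" then amounts to running the spread a few steps forward from the currently observed flooded nodes.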

    Computational approaches to semantic change

    Semantic change, how the meanings of words change over time, has preoccupied scholars since well before modern linguistics emerged in the late 19th and early 20th century, ushering in a new methodological turn in the study of language change. Compared to changes in sound and grammar, semantic change is the least understood. Since then, the study of semantic change has progressed steadily, accumulating a vast store of knowledge over more than a century, encompassing many languages and language families. Historical linguists also realized the potential of computers as research tools early on, with papers at the very first international conferences in computational linguistics in the 1960s. Such computational studies still tended to be small-scale, method-oriented, and qualitative. However, recent years have witnessed a sea change in this regard. Big-data, empirical, quantitative investigations are now coming to the forefront, enabled by enormous advances in storage capability and processing power. Diachronic corpora have grown beyond imagination, defying exploration by traditional manual qualitative methods, and language technology has become increasingly data-driven and semantics-oriented. These developments present a golden opportunity for the empirical study of semantic change over both long and short time spans. A major challenge at present is to integrate the hard-earned knowledge and expertise of traditional historical linguistics with cutting-edge methodology explored primarily in computational linguistics. The idea for the present volume came out of a concrete response to this challenge. The 1st International Workshop on Computational Approaches to Historical Language Change (LChange'19), at ACL 2019, brought together scholars from both fields.
This volume offers a survey of this exciting new direction in the study of semantic change, a discussion of the many remaining challenges that we face in pursuing it, and considerably updated and extended versions of a selection of the contributions to the LChange'19 workshop, addressing both more theoretical problems (e.g., the discovery of "laws of semantic change") and practical applications, such as information retrieval in longitudinal text archives.
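A common big-data operationalization of semantic change measures a word's drift as the cosine distance between embeddings trained on different time slices of a corpus. A minimal sketch, with invented (and deliberately tiny) vectors standing in for aligned diachronic embeddings:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity: 0 means identical direction (no drift),
    values near 1 mean the word's contexts have changed substantially."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

# hypothetical aligned embeddings of the same word trained on an
# early-period corpus vs. a late-period corpus
vec_early = [0.9, 0.1, 0.0]
vec_late = [0.1, 0.8, 0.3]
print(round(cosine_distance(vec_early, vec_late), 3))
```

Ranking a vocabulary by this distance is one standard way such quantitative studies surface candidate words for the qualitative analysis that historical linguists then perform.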

    Neocerebellar Kalman filter linguistic processor: from grammaticalization to transcranial magnetic stimulation

    The present work introduces a synthesis of neocerebellar state estimation and feedforward control with multi-level language processing. The approach combines insights from clinical, imaging, and modelling work on the cerebellum with psycholinguistic and historical linguistic research. It finally provides the first experimental attempts towards the empirical validation of this synthesis, employing transcranial magnetic stimulation. A neuroanatomical locus traditionally seen as limited to lower sensorimotor functions, the cerebellum has, over the last decades, emerged as a widely accepted foundation of feedforward control and state estimation. Its cytoarchitectural homogeneity and diverse connectivity with virtually all parts of the central nervous system strongly support the idea of a uniform, domain-general cerebellar computation. Its reciprocal connectivity with language-related cortical areas suggests that this uniform cerebellar computation is also applied in language processing. Insight into the latter, however, remains an elusive desideratum; instead, research on cerebellar language functions is predominantly concerned with the frontal cortical-like deficits (e.g. aphasias) seldom induced by cerebellar impairment. At the same time, reflections on cerebellar computations in language processing remain at most speculative, given the lack of discourse between cerebellar neuroscientists and psycholinguists. On the other hand, the recent accommodation of these computations in psycholinguistic models provides the foundations for satisfying the desideratum above. The thesis thus formulates a neurolinguistic model whereby multi-level, predictive, associative linguistic operations are acquired and performed in neocerebello-cortical circuits, and are adaptively combined with cortico-cortical categorical processes.
A broad range of psycholinguistic phenomena, involving, among others, "pragmatic normalization", "verbal/semantic illusions", associative priming, and phoneme restoration, are discussed in the light of recent findings on neocerebellar cognitive functions, and provide a rich research agenda for the experimental validation of the proposal. The hypothesis is then taken further, examining grammaticalization changes in the light of neocerebellar linguistic contributions. Despite a) the broad acceptance of routinization and automatization processes as the domain-general core of grammaticalization, b) the growing psycholinguistic research on routinized processing, and c) the evidence on neural circuits involved in automatization processes (crucially involving the cerebellum), interdisciplinary discourse remains strikingly poor. Based on the above, a synthesis is developed, whereby grammaticalization changes are introduced in routinized dialogical interaction as the result of maximized involvement of associative neocerebello-cortical processes. The thesis then turns to the first steps taken towards the verification of the hypothesis at hand. In view of the large methodological limitations of clinical research on cerebellar cognitive functions, the transcranial magnetic stimulation apparatus is employed instead, producing the very first linguistic experiments involving cerebellar stimulation. Despite the considerable technical difficulties met, neocerebellar loci are shown to be selectively involved in formal- and semantic-associative computations, with far-reaching consequences for neurolinguistic models of sentence processing. 
In particular, stimulation of the neocerebellar vermis is found to selectively enhance formal-associative priming in native speakers of English, and to disrupt, rather selectively, semantic-categorical priming in native speakers of Modern Greek, as well as to disrupt the practice-induced facilitation in processing repeatedly associated letter strings. Finally, stimulation of the right neocerebellar Crus I is found to enhance, quite selectively, semantic-associative priming in native speakers of English, while stimulation of the right neocerebellar vermis is shown to disrupt semantic priming altogether. The results are finally discussed in the light of a future research agenda overcoming the technical limitations met here.
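The state estimation invoked in the title is conventionally modeled as a Kalman filter: a forward (efference-copy) model predicts the next state from the motor command, and delayed noisy sensory feedback corrects the prediction. A minimal scalar sketch with illustrative noise parameters, not a model fitted in the thesis:

```python
def kalman_step(x_est, p_est, u, z, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter. q is process
    noise variance, r is measurement noise variance."""
    # predict: forward model advances the state by the motor command u
    x_pred = x_est + u
    p_pred = p_est + q
    # update: fuse the prediction with the sensory measurement z
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # correction toward the measurement
    p_new = (1 - k) * p_pred
    return x_new, p_new

# hypothetical articulator position tracked over noisy measurements
x, p = 0.0, 1.0
for z in [0.11, 0.19, 0.32, 0.41]:
    x, p = kalman_step(x, p, u=0.1, z=z)
print(round(x, 2), round(p, 3))
```

The shrinking variance p is the formal counterpart of the claim that a well-calibrated forward model lets the system rely progressively less on slow sensory feedback, which is the computational rationale for feedforward control.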

