    Complex systems methods characterizing nonlinear processes in the near-Earth electromagnetic environment: recent advances and open challenges

    Learning from successful applications of methods originating in statistical mechanics, complex systems science, or information theory in one scientific field (e.g., atmospheric physics or climatology) can provide important insights or conceptual ideas for other areas (e.g., space sciences) or even stimulate new research questions and approaches. For instance, quantification and attribution of dynamical complexity in output time series of nonlinear dynamical systems is a key challenge across scientific disciplines. Especially in the field of space physics, early and accurate detection of characteristic dissimilarity between normal and abnormal states (e.g., pre-storm activity vs. magnetic storms) has the potential to vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. This review provides a systematic overview of existing nonlinear dynamical systems-based methodologies along with key results of their previous applications in a space physics context, illustrating in particular how complementary modern complex systems approaches have recently shaped our understanding of nonlinear magnetospheric variability. The rising number of corresponding studies demonstrates that the multiplicity of nonlinear time series analysis methods developed during the last decades offers great potential for uncovering relevant yet complex processes interlinking different geospace subsystems, variables and spatiotemporal scales.
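
    As a concrete illustration of the kind of complexity quantifier surveyed in such reviews, the sketch below computes normalized permutation entropy (Bandt & Pompe, 2002), a widely used ordinal-pattern measure of dynamical complexity in time series. This is a generic illustration, not code from the review itself; the parameter choices are assumptions.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy (Bandt & Pompe, 2002).

    Returns a value in [0, 1]: close to 0 for fully regular signals,
    close to 1 for white noise. m is the embedding dimension, tau the delay.
    """
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    # Map each length-m window to its ordinal pattern (rank order) and count.
    patterns = Counter(
        tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau])) for i in range(n)
    )
    probs = np.array(list(patterns.values()), dtype=float) / n
    h = -np.sum(probs * np.log(probs))       # Shannon entropy of patterns
    return h / math.log(math.factorial(m))   # normalize by log(m!)

# Example: white noise scores near 1, a pure sine well below it.
rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(5000)))         # ~1.0
print(permutation_entropy(np.sin(np.linspace(0, 60, 5000))))  # << 1
```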

    Investigating Letter Identification for Visual Acuity Measurements in the Paracentral Visual Field

    Publications and conference posters:

    Chapter two:
    - Refereed Conference Publications
    Barhoom, H., Joshi, M. R., & Schmidtmann, G. (2020). The effect of response biases on resolution thresholds of Sloan letters in central and paracentral vision. The British Congress of Optometry and Vision Science (BCOVS) 2020 (Talk)
    - Refereed Journal Publications
    Barhoom, H., Joshi, M. R., & Schmidtmann, G. (2021). The effect of response biases on resolution thresholds of Sloan letters in central and paracentral vision. Vision Research, 187, 110-119. (https://doi.org/10.1016/j.visres.2021.06.002)

    Chapter three:
    - Refereed Conference Publications
    Barhoom, H., Georgeson, M. A., Joshi, M. R., Artes, P. H., & Schmidtmann, G. (2023). The role of bias, sensitivity and similarity in a letter identification task: a noisy template model. AVA 2023
    Barhoom, H., Schmidtmann, G., Joshi, M. R., Artes, P. H., & Georgeson, M. A. (2022). The role of similarity and bias in letter acuity measurements: a noisy template model. ECVP 2022 (to be published in Perception)
    Barhoom, H., Schmidtmann, G., Joshi, M. R., Artes, P. H., & Georgeson, M. A. (2021). The role of bias in a letter acuity identification task: a noisy template model. ECVP 2021, Perception, Vol. 50, No. 1 SUPPL, pp. 83-83
    - Refereed Journal Publications
    Georgeson, M. A., Barhoom, H., Joshi, M. R., Artes, P. H., & Schmidtmann, G. (2023). Revealing the influence of bias in a letter acuity identification task: A noisy template model. Vision Research, 208, 108233. (https://doi.org/10.1016/j.visres.2023.108233)

    Chapter four:
    - Refereed Conference Publications
    Barhoom, H., Artes, P. H., Joshi, M. R., & Schmidtmann, G. (2023). Acuity perimetry with speech input for mapping macular visual field in glaucoma. The Association for Research in Vision and Ophthalmology (ARVO 2023) (Poster presentation)

    Sloan letters are commonly used optotypes in clinical practice, but they differ in relative legibility, which may be attributed to response bias, sensitivity differences, and letter similarity. In this thesis, we employed Luce's choice model and developed a new noisy template model to investigate the roles of response bias, sensitivity differences, and letter similarity in the identification of Sloan letters at central and paracentral locations. The best-performing model was the one that accounted for the effects of bias, sensitivity, and similarity, with bias contributing more than sensitivity and similarity. However, when letter acuity was estimated from data pooled across all letters, no significant effects of bias, sensitivity, or similarity were observed. The models incorporating similarity showed a substantial increase in the spread of the underlying psychometric function (percent correct as a function of letter size), particularly in the periphery and in the upper portion of the function. Given that most letter stimuli in clinical vision tests are presented at supra-threshold sizes, any increase in test-retest variability, particularly in peripheral vision, is plausibly attributable to similarity alone. Furthermore, this thesis explored the use of letters as stimuli and speech as a response method to assess macular visual sensitivity in healthy observers and individuals with glaucoma. Dissimilar letters following Sloan's design were used to estimate peripheral letter acuity within 4 degrees of fixation, and a speech recognition algorithm enabled participants to perform the task without supervision. The participants' perceived task difficulty was assessed through a questionnaire. In observers with glaucoma, letter acuity correlated closely with conventional perimetry, and most observers found the task easy to perform. These results demonstrate that letter acuity perimetry with speech input is a viable method for capturing macular damage in glaucoma, and such approaches have the potential to enable more intuitive and patient-friendly macular visual field assessments.
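
    Luce's choice model, mentioned above, has a compact closed form: the probability of responding with letter j to stimulus i is proportional to the response bias for j times the similarity between i and j. A minimal sketch is given below, with hypothetical similarity and bias values rather than the thesis's fitted parameters; the thesis's noisy template model goes beyond this formulation.

```python
import numpy as np

def luce_choice_probs(similarity, bias):
    """Luce's biased-choice rule: P(respond j | stimulus i) is
    proportional to bias[j] * similarity[i, j].

    similarity : (n, n) array, eta[i, j] >= 0, eta[i, i] = 1 by convention
    bias       : (n,) array of nonnegative response biases
    Returns an (n, n) row-stochastic confusion matrix.
    """
    weighted = similarity * bias[np.newaxis, :]
    return weighted / weighted.sum(axis=1, keepdims=True)

# Toy 3-letter example (hypothetical numbers): letters 0 and 1 are
# visually similar; letter 2 attracts extra response bias.
eta = np.array([[1.0, 0.6, 0.1],
                [0.6, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
beta = np.array([1.0, 1.0, 1.5])
print(luce_choice_probs(eta, beta).round(3))
```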

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretic foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communication networks.

    Bitstream-based video quality modeling and analysis of HTTP-based adaptive streaming

    The proliferation of affordable video capture technology and increased internet bandwidth allows high-quality videos (resolutions > 1080p, framerates ≥ 60 fps) to be streamed online. HTTP-based adaptive streaming is the preferred method for streaming video, adjusting video quality to the available bandwidth. Although adaptive streaming reduces interruptions of video playout (called "stalling") caused by limited network bandwidth, the automatic adaptation affects the quality perceived by the user, making a systematic assessment of perceived quality necessary. Such an evaluation is usually done on a short-term basis (a few seconds) and on an overall-session basis (up to several minutes). In this thesis, both aspects are assessed using subjective and instrumental methods. The subjective assessment of short-term video quality consists of a series of lab-based video quality tests that have resulted in publicly available datasets. The overall integral quality was subjectively assessed in lab tests with human viewers mimicking a real-life viewing scenario. In addition to the lab tests, an out-of-the-lab test method was investigated for both short-term video quality and overall session quality assessment, to explore alternative approaches to subjective quality assessment. The instrumental evaluation of quality was addressed with bitstream-based and hybrid pixel-based video quality models developed as part of this thesis. For this, a family of models, AVQBits, was conceived using the results of the lab tests as ground truth. Depending on the available input information, four different instances of AVQBits are presented: a Mode 3, a Mode 1, a Mode 0, and a Hybrid Mode 0 model. The model instances have been evaluated and perform better than or on par with other state-of-the-art models. The models have further been applied to 360° and gaming videos, HFR content, and images. In addition, a long-term integration (1-5 minutes) model based on the ITU-T P.1203.3 model is presented; the different instances of AVQBits, with per-1-second quality scores as output, serve as the video quality component of the proposed long-term integration model. All AVQBits variants, the long-term integration module, and the subjective test data have been made publicly available to enable further research.
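
    To give a flavor of what long-term integration of per-1-second quality scores involves, the toy sketch below pools a session's scores with recency weighting and a saturating stalling penalty. This is an illustrative simplification under assumed parameter values, not the ITU-T P.1203.3 algorithm and not the model proposed in the thesis.

```python
import numpy as np

def pooled_session_quality(per_sec_scores, stall_seconds=0.0,
                           recency_half_life=60.0, stall_weight=0.5):
    """Toy long-term integration of per-1-second scores on a 1-5 MOS scale:
    a recency-weighted mean (later seconds count more) minus a saturating
    stalling penalty that pulls the score toward the scale floor.
    """
    q = np.asarray(per_sec_scores, dtype=float)
    t = np.arange(len(q))
    # Exponential recency weights: the final sample has weight 1.
    w = 0.5 ** ((t[-1] - t) / recency_half_life)
    base = np.sum(w * q) / np.sum(w)
    penalty = stall_weight * (1 - np.exp(-stall_seconds / 10.0))
    return float(np.clip(base - penalty * (base - 1), 1.0, 5.0))

# 3 minutes of scores with a quality drop mid-session and 4 s of stalling.
scores = np.concatenate([np.full(60, 4.5), np.full(60, 2.5), np.full(60, 4.2)])
print(pooled_session_quality(scores, stall_seconds=4.0))
```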

    Flexible Hardware-based Security-aware Mechanisms and Architectures

    For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted and inherently served as the foundation, and thus the root of trust, of our systems. This assumption has been further leveraged in developing hardware-based dedicated security extensions and architectures to protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has shaken these long-established trust assumptions in hardware entirely, thereby threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have revealed the grave consequences of hardware/microarchitectural security flaws for platform security as a whole, and how they can even subvert the security guarantees promised by dedicated security architectures. Furthermore, they shed light on the challenges particular to hardware/microarchitectural security: it is more critical (and more challenging) to extensively analyze the hardware for security flaws prior to production, since hardware, unlike software, cannot be patched or updated once fabricated. Hardware cannot reliably serve as the root of trust anymore, unless we develop and adopt new design paradigms in which security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Moreover, novel flexible security-aware design mechanisms need to be incorporated in processor microarchitecture and hardware-assisted security architectures, mechanisms that can practically address the inherent conflict between performance and security by allowing the trade-off to be configured to the desired requirements. In this thesis, we investigate the prospects and implications at the intersection of hardware and security across the full stack of our computing platforms and Systems-on-Chip (SoCs). On one front, we investigate how we can leverage hardware and its advantages over software to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect software from attacks exploiting software vulnerabilities. We further propose that these extensions be microarchitecturally configurable at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how we can protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. In particular, we focus on cache-based side-channel attacks and propose cache designs that fundamentally mitigate these attacks while preserving performance, by enabling the performance/security trade-off to be configured by design. We also investigate how these designs can be incorporated into flexible and customizable security architectures, complementing them to further support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms further beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on the fundamental limitations of state-of-the-art hardware security analysis techniques and on the potential for improved and scalable approaches.
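
    One well-known family of side-channel-resilient cache designs randomizes the address-to-set mapping under a secret key (as in CEASER-style caches), so an attacker cannot construct eviction sets from address arithmetic alone. The behavioral sketch below models only this core idea; the class, parameters, and keying scheme are illustrative assumptions, not the designs proposed in the thesis (real designs use low-latency block ciphers and periodic re-keying).

```python
import hashlib

class RandomizedIndexCache:
    """Minimal model of a set-index-randomized cache: the address-to-set
    mapping is keyed and pseudorandom, hiding set indices from attackers.
    """

    def __init__(self, num_sets, ways, key):
        self.num_sets, self.ways, self.key = num_sets, ways, key
        self.sets = [[] for _ in range(num_sets)]  # LRU lists of line addrs

    def _index(self, line_addr):
        # Keyed pseudorandom mapping from line address to set index.
        digest = hashlib.blake2s(
            line_addr.to_bytes(8, "little"), key=self.key
        ).digest()
        return int.from_bytes(digest[:4], "little") % self.num_sets

    def access(self, line_addr):
        """Returns True on hit, False on miss (with LRU fill/evict)."""
        s = self.sets[self._index(line_addr)]
        if line_addr in s:
            s.remove(line_addr)
            s.append(line_addr)          # move to MRU position
            return True
        if len(s) == self.ways:
            s.pop(0)                     # evict the LRU line
        s.append(line_addr)
        return False

cache = RandomizedIndexCache(num_sets=64, ways=8, key=b"secret-key-16byt")
print(cache.access(0x1000), cache.access(0x1000))  # miss, then hit
```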

    Virtuosity in Computationally Creative Musical Performance for Bass Guitar

    This thesis focuses on the development and implementation of a theory for a computationally creative musical performance system aimed at producing virtuosic interpretations of musical pieces for performance on bass guitar. The theory has been developed and formalised using Wiggins' Creative Systems Framework (CSF) and uses case-based reasoning (CBR) and an engagement-reflection cycle to adorn monophonic musical note sequences with explicit performance directions, selected to maximise virtuosity when performed on a bass guitar. A survey of 497 bass players' playing competences was conducted and used to develop a playing complexity rating for adorned musical pieces. The measures of musical similarity used within the case-based reasoning were assessed in a listening test with 12 participants. A study into the perceived difficulty of bass performances was also conducted and an appropriate model of perceived bass playing difficulty determined. The complexity rating and perceived playing difficulty are used within the heuristic by which the system determines which performances are considered virtuosic. The output of the system was rendered on a digital waveguide model of an electric bass, which was extended with newly developed digital waveguide synthesis methods for advanced bass guitar playing techniques. These audio renderings were evaluated in a perceptual study with 60 participants, the results of which were used to validate the heuristic used within the system. This research contributes to the fields of Computational Creativity (CC), AI Music Creativity, Music Information Retrieval and Musicology. It demonstrates how the CSF can be used as a tool to aid in designing computationally creative musical performance systems, provides a method to assess the musical complexity and perceived difficulty of bass guitar performances, tests a suitable musical similarity measure for use within creative systems, and advances bass guitar digital waveguide synthesis methods.
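
    For readers unfamiliar with digital waveguide synthesis, the sketch below shows its simplest instance, the Karplus-Strong plucked-string algorithm: a noise burst circulating through a pitch-length delay line with a damped averaging filter. It illustrates the core principle only and is not the thesis's extended model, which adds methods for advanced bass guitar playing techniques.

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, damping=0.996, seed=0):
    """Karplus-Strong plucked-string synthesis, the simplest member of
    the digital waveguide family.
    """
    n = int(sr / freq)                  # delay-line length sets the pitch
    rng = np.random.default_rng(seed)
    buf = rng.uniform(-1, 1, n)         # initial "pluck" excitation
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Lowpass feedback: average two adjacent samples, slightly damped.
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

# A short open-E bass note (~41.2 Hz); write out with e.g. scipy.io.wavfile.
note = karplus_strong(41.2, 2.0)
```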