
    Towards addressing training data scarcity challenge in emerging radio access networks: a survey and framework

    The future of cellular networks is contingent on artificial intelligence (AI)-based automation, particularly for radio access network (RAN) operation, optimization, and troubleshooting. To achieve such zero-touch automation, a myriad of AI-based solutions have been proposed in the literature to model and optimize network behavior. However, to work reliably, AI-based automation requires a deluge of training data. Consequently, the success of the proposed AI solutions is limited by a fundamental challenge faced by the cellular network research community: scarcity of training data. In this paper, we present an extensive review of classic and emerging techniques to address this challenge. We first identify the common data types in the RAN and their known use cases. We then present a taxonomized survey of techniques used in the literature to address training data scarcity for the various data types. This is followed by a framework for addressing training data scarcity. The proposed framework builds on the available information and a combination of techniques including interpolation, domain-knowledge-based methods, generative adversarial networks, transfer learning, autoencoders, few-shot learning, simulators, and testbeds. Potential new techniques to enrich scarce data in cellular networks are also proposed, such as matrix completion theory and domain-knowledge-based techniques that leverage different types of network geometries and network parameters. In addition, an overview of state-of-the-art simulators and testbeds is presented to make readers aware of current and emerging platforms for accessing real data to overcome the data scarcity challenge. The extensive survey of techniques for addressing training data scarcity, combined with the proposed framework for selecting a suitable technique for a given type of data, can assist researchers and network operators in choosing appropriate methods to overcome the data scarcity challenge when leveraging AI for radio access network automation.
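
    Matrix completion is one of the enrichment techniques listed in this abstract; a minimal sketch of one standard realization (iterative soft-thresholded SVD, often called SoftImpute) applied to a sparsely measured coverage map is shown below. The grid size, rank shrinkage, and RSRP-like values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def soft_impute(M, mask, shrink=1.0, n_iters=100):
    """Fill missing entries of M (where mask is False) with a low-rank estimate.

    Minimal SoftImpute-style sketch: repeatedly SVD the current estimate,
    soft-threshold the singular values, and restore the observed entries.
    """
    X = np.where(mask, M, 0.0)               # start with zeros in the gaps
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - shrink, 0.0)       # soft-threshold singular values
        low_rank = (U * s) @ Vt
        X = np.where(mask, M, low_rank)       # keep observed values, fill the rest
    return X

# Toy example: a sparsely sampled, path-loss-like "coverage map" (values are illustrative).
rng = np.random.default_rng(0)
base = np.linspace(-70.0, -110.0, 20)                     # dBm-like gradient
true_map = base[:, None] + base[None, :] / 10 + rng.normal(0, 1, (20, 20))
mask = rng.random((20, 20)) < 0.4                         # only 40% of cells measured
observed = np.where(mask, true_map, np.nan)
completed = soft_impute(np.nan_to_num(observed), mask)
```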

    Water and Brain Function: Effects of Hydration Status on Neurostimulation and Neurorecording

    Introduction: Transcranial magnetic stimulation (TMS) and electroencephalography (EEG) are used to study normal neurophysiology and to diagnose and treat clinical neuropsychiatric conditions, but both can produce variable results or fail. Both techniques depend on electrical volume conduction, and thus on brain volumes. Hydration status can affect brain volumes and functions (including cognition), but its effects on these techniques are unknown. We aimed to characterize the effects of hydration on TMS, EEG, and cognitive tasks. Methods: EEG and electromyography (EMG) were recorded during single-pulse TMS, paired-pulse TMS, and cognitive tasks from 32 human participants on dehydrated (12-hour fast/thirst) and rehydrated (1 L of water ingested orally over 1 hour) testing days. Hydration status was confirmed with urinalysis. Motor evoked potential (MEP), event-related potential (ERP), and network analyses were performed to examine responses at the level of the muscle, the brain, and higher-order functioning. Results: Rehydration decreased motor threshold (increased excitability) and shifted the motor hotspot. Significant effects on TMS measures remained even after stimulation was re-localized and re-dosed to these new parameters. Rehydration increased short-interval intracortical facilitation (SICF) of the MEP, the magnitudes of specific TMS-evoked potential (TEP) peaks in inhibitory protocols, specific ERP peak magnitudes, and reaction time during the cognitive task. Rehydration amplified nodal inhibition around the stimulation site in inhibitory paired-pulse networks and strengthened nodes outside the stimulation site in excitatory and cortical silent period (CSP) networks. Cognitive performance was not improved by rehydration, although similar performance was achieved with generally weaker network activity. Discussion: The results highlight differences between mild dehydration and rehydration. The rehydrated brain was easier to stimulate with TMS and produced larger responses to external and internal stimuli. This is explainable by the known physiology of body water dynamics, which encompasses macroscopic and microscopic volume changes. Rehydration can shift 3D cortical positioning, decrease scalp-cortex distance (bringing the cortex closer to the stimulator and recording electrodes), and cause astrocyte swelling-induced glutamate release. Conclusions: Previously unaccounted-for variables such as osmolarity and astrocyte and brain volumes likely affect neurostimulation and neurorecording. Controlling for and carefully manipulating hydration may reduce variability and improve therapeutic outcomes of neurostimulation. Dehydration is common and produces less excitable circuits. Rehydration should offer a mechanism to macroscopically bring target cortical areas closer to an externally applied neurostimulation device, recruiting greater volumes of tissue, and to microscopically favor excitability in the stimulated circuits.
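
    For readers unfamiliar with the ERP analysis named in the Methods, the sketch below illustrates the generic epoching-and-averaging step on a single continuous EEG channel. The sampling rate, epoch window, and synthetic data are illustrative assumptions, not parameters taken from this study.

```python
import numpy as np

def erp_average(eeg, event_samples, fs=1000, pre=0.2, post=0.8):
    """Average stimulus-locked EEG epochs into an event-related potential (ERP).

    eeg           : 1-D array, one continuous EEG channel
    event_samples : sample indices of stimulus onsets
    fs            : sampling rate in Hz (illustrative value)
    pre, post     : seconds before/after each event to include in the epoch
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in event_samples:
        if onset - n_pre < 0 or onset + n_post > len(eeg):
            continue                             # skip epochs that run off the record
        epoch = eeg[onset - n_pre : onset + n_post]
        epoch = epoch - epoch[:n_pre].mean()     # baseline-correct to the pre-stimulus window
        epochs.append(epoch)
    return np.mean(epochs, axis=0)               # the ERP: average over trials

# Toy usage with synthetic data: 60 s of noise at 1 kHz, events every second.
rng = np.random.default_rng(1)
eeg = rng.normal(0, 1, 60_000)
events = np.arange(2_000, 58_000, 1_000)
erp = erp_average(eeg, events)
```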

    Advances in Modelling of Rainfall Fields

    Rainfall is the main input for all hydrological models, such as rainfall–runoff models and models for forecasting landslides triggered by precipitation, and understanding it is clearly essential for effective water resource management as well. Improving the modeling of rainfall fields is therefore a key requirement, both for realizing efficient early warning systems and for analyzing future scenarios of the occurrence and magnitude of all rainfall-induced phenomena. The aim of this Special Issue was hence to provide a collection of innovative contributions on rainfall modeling, focusing on hydrological scales and the context of climate change. We believe that the latest research outcomes presented in this Special Issue can shed new light on the hydrological cycle and on the phenomena that are a direct consequence of rainfall. Moreover, the papers collected here constitute a valid knowledge base for improving specific key aspects of rainfall modeling, mainly concerning climate change and the modifications it induces in properties such as the magnitude, frequency, duration, and spatial extent of different types of rainfall fields. A further goal is to provide practitioners with useful tools for quantifying important design metrics in transient hydrological contexts (quantiles of assigned frequency, hazard functions, intensity–duration–frequency curves, etc.).
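
    One of the design metrics mentioned above, the rainfall quantile of assigned frequency (return period), is commonly estimated by fitting an extreme-value distribution to a series of annual maxima. The sketch below uses a Gumbel (EV1) fit from scipy; the sample series and return periods are illustrative assumptions, not data from the Special Issue.

```python
import numpy as np
from scipy import stats

# Illustrative annual-maximum daily rainfall series (mm); real series come from gauges.
annual_maxima = np.array([62.0, 80.5, 45.2, 97.1, 71.3, 55.8, 88.0,
                          66.4, 102.7, 59.9, 74.6, 83.2, 69.0, 91.5])

# Fit a Gumbel (EV1) distribution, a common choice for rainfall annual maxima.
loc, scale = stats.gumbel_r.fit(annual_maxima)

# Quantile of assigned frequency: depth not exceeded with probability 1 - 1/T.
for T in (10, 50, 100):                        # return periods in years
    depth = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year daily rainfall ~ {depth:.1f} mm")
```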

    Natural Fracture Evolution: Investigations into the Middle Devonian Marcellus Shale, Appalachian Basin, USA

    The drive to optimize recovery from unconventional shale reservoirs has generated considerable research into hydraulic fracturing design and shale reservoir characterization aimed at developing long-term hydrocarbon producers. Permeability at multiple scales, from nanometer-scale pores with nanodarcy permeability to completion-induced fractures defining a stimulated reservoir volume hundreds of meters across, plays a significant role in hydrocarbon flow during production from shale reservoirs. Pre-existing cemented fractures in unconventional shale reservoirs are abundant and preferentially reactivate during hydraulic fracturing treatments, creating the necessary large-scale permeability. While previous investigations have significantly improved our knowledge of shale reservoirs, they have also highlighted the need for a better understanding of the geologic evolution of pre-existing cemented fractures and their effect on hydraulic stimulation. This three-part dissertation examines natural fractures from four Middle Devonian Marcellus Shale wells across the Appalachian Basin through the integration of visual core observation, thin-section petrography, spectral gamma ray logs, borehole image logs, petrophysical logs, elemental data, and X-ray computed tomography of core. The research goals are: (1) to establish indicators for assessing natural fracture development in source rocks in terms of kerogen maturation, relative timing, and hydrocarbon migration; (2) to investigate the relationship of natural fractures in wells of varying thermal maturity, and the preferential distribution of fractures across clay types and redox environments; and (3) to characterize mineralized natural fractures in 3D using medical CT scans of core to quantify volume and assess connectivity. This research indicates that overpressure from the expulsion of hydrocarbons during kerogen maturation creates numerous cemented fractures filled with calcite and bitumen; these fractures acquire orientations related to the burial stresses acting during their evolution, are predominant in clay-rich units of certain redox conditions, cluster at geomechanical boundaries, and show inconsistent 3D volume changes within the core.
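
    Goal (3), quantifying cemented-fracture volume and connectivity from CT scans of core, is typically approached by thresholding the reconstructed volume and labeling connected components. The sketch below shows that generic workflow with scipy; the threshold, voxel size, and synthetic volume are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def fracture_volume_and_connectivity(ct, threshold, voxel_mm=0.25):
    """Segment bright voxels in a CT volume and summarize them.

    ct        : 3-D array of CT intensities
    threshold : intensity above which a voxel is treated as cemented fracture fill
    voxel_mm  : edge length of a voxel in millimetres (illustrative value)
    """
    mask = ct > threshold                           # calcite cement typically appears bright in CT
    labels, n_components = ndimage.label(mask)      # 3-D connected components
    voxel_volume = voxel_mm ** 3
    component_volumes = ndimage.sum(mask, labels, list(range(1, n_components + 1))) * voxel_volume
    return {
        "total_volume_mm3": mask.sum() * voxel_volume,
        "n_connected_components": n_components,
        "largest_component_mm3": component_volumes.max() if n_components else 0.0,
    }

# Toy usage on a synthetic volume with one planar high-density "fracture".
rng = np.random.default_rng(2)
ct = rng.normal(1000, 50, (64, 64, 64))
ct[30:34, :, :] += 500
print(fracture_volume_and_connectivity(ct, threshold=1200))
```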

    The Rapid Acquisition and Application of Geophysical Data to the Sustainable and Proficient Management of Shallow Aquifers and Cemeteries

    Rapidly acquired, non-invasive geophysical data are key to reducing the risk inherent in subsurface investigations. They achieve this risk reduction by providing spatiotemporally dense datasets and new methods for measuring the efficacy of acquisition, analysis, and modeling. In a first example, I use two geophysical methods—electrical resistivity tomography and time-domain electromagnetics—to investigate the subsurface in a rapidly urbanizing alluvial floodplain setting. Specifically, I focus on the geologic structure of a shallow alluvial aquifer in the Brazos River floodplain of Texas, characterizing dynamic hydrological interactions between the aquifer and the adjacent river. Based on new geophysical insights, I determine how the sedimentary architecture of the shallow alluvial aquifer acts as a control on its recharge and discharge, and how bidirectional preferential flow pathways establish hydrologic communication between the aquifer and the river at human and geologic time scales. In a second example, I develop a protocol to improve the identification of unmarked graves in a historic African-American cemetery. I show that a geophysicist's detection proficiency, expressed in terms of true-positive, true-negative, false-positive, and false-negative percentages, can be improved by using radar signatures of nearby known targets as a proxy for ground truth.
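
    The detection-proficiency measure described above is essentially a confusion matrix over surveyed locations expressed as percentages; a minimal sketch of that bookkeeping is shown below. The example calls and ground-truth labels are hypothetical, not data from the cemetery survey.

```python
def detection_proficiency(predicted, actual):
    """Summarize grave-detection calls against ground truth as percentages.

    predicted, actual : equal-length sequences of booleans
                        (True = grave called / grave present at that location)
    """
    pairs = list(zip(predicted, actual))
    n = len(pairs)
    counts = {
        "true_positive": sum(p and a for p, a in pairs),
        "true_negative": sum((not p) and (not a) for p, a in pairs),
        "false_positive": sum(p and (not a) for p, a in pairs),
        "false_negative": sum((not p) and a for p, a in pairs),
    }
    return {k: 100.0 * v / n for k, v in counts.items()}

# Hypothetical example: 10 surveyed locations.
calls = [True, True, False, True, False, False, True, False, True, False]
truth = [True, False, False, True, False, True, True, False, True, False]
print(detection_proficiency(calls, truth))
```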

    Kinesin-1 stepping dissected with MINFLUX and MINSTED tracking – Spontaneous blinkers for live-cell MINFLUX imaging

    The fluorescence-based nanoscopy methods MINFLUX and MINSTED are currently revolutionizing imaging and single-molecule tracking by achieving molecular-scale spatial precision with low photon numbers. Using a polymer-based in vitro assay with strongly reduced fluorescence background, I identified MINFLUX-compatible spontaneously blinking fluorophores by quantifying their blinking properties. Because these spontaneous blinkers are live-cell compatible and have short on-events of only a few milliseconds, I expect them to advance the imaging field by drastically accelerating MINFLUX measurements. The main focus of this work, however, is the application of the nanoscopy methods MINFLUX and MINSTED to tracking of the motor protein kinesin-1. Because they require only a ~1 nm fluorophore as a label, these methods are inherently less artefact-prone than established techniques, which require the attachment of comparatively large beads to reach a similar spatio-temporal resolution. With an improved interferometric MINFLUX approach, we resolved regular steps and substeps of the kinesin-1 stalk and heads. By finding that ATP binds to the motor in the one-head-bound state and is hydrolyzed in the two-head-bound state, we aim to resolve a long-standing controversy in the field. Furthermore, we deduced that when the rear head of kinesin-1 detaches from the microtubule, it rotates around its front head into a rightward-displaced unbound state. In conjunction with an observed stalk rotation, I concluded that the motor walks in a symmetric hand-over-hand fashion. Finally, we resolved the stepping of kinesin-1 with MINSTED, confirming many findings from the MINFLUX experiments and observing motor sidestepping and protofilament switching. These findings should prove helpful in developing treatments for diseases linked to kinesin malfunction. Beyond that, this thesis establishes MINFLUX and MINSTED for the tracking of dynamic biological processes at the single-molecule level.
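
    Resolving regular steps and substeps from a localization trace amounts to detecting plateau changes in a noisy 1-D position record. The sketch below shows one simple sliding-window step detector; the window size, threshold, and synthetic 8 nm plateaus are illustrative assumptions, not the analysis actually used in this work.

```python
import numpy as np

def detect_steps(position, window=20, threshold=4.0):
    """Flag candidate step times in a 1-D position trace (nm).

    Compares the mean of the trailing and leading windows at every sample;
    a difference above `threshold` nm marks a candidate step.
    """
    steps = []
    for i in range(window, len(position) - window):
        before = position[i - window:i].mean()
        after = position[i:i + window].mean()
        if abs(after - before) > threshold and (not steps or i - steps[-1] > window):
            steps.append(i)
    return steps

# Synthetic trace: ten 8 nm plateaus with localization noise (values are illustrative).
rng = np.random.default_rng(3)
true_positions = np.repeat(np.arange(0, 80, 8.0), 100)
trace = true_positions + rng.normal(0, 1.5, true_positions.size)
print(detect_steps(trace))
```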

    Interplay of DNA replication, repair and chromatin: structure versus function

    Hierarchical levels of chromatin organization allow different genomic functions to be spatio-temporally regulated within mammalian nuclei. Both DNA replication and DNA repair are global genomic processes. Their chromatin units show remarkable structural similarities and microscopically appear as clusters of nanofocal structures, each in the size range of chromatin loops. The present work aimed to relate these genomic functions to the underlying structural organization provided by the two chromatin architectural proteins CTCF and cohesin, which cooperate to shape the genome into chromatin loops and domains. Here, CTCF was shown to be critical for cellular survival after ionizing irradiation in a CTCF-dose-dependent manner. The results obtained in different cell lines upon CTCF depletion were integrated into a biophysical model. The decreased clonogenic potential was shown to derive from the increased probability of double-strand breaks clustering within larger chromatin domains lacking CTCF at their borders. Moreover, CTCF proved to be enriched at the sites and at the time of DNA replication. CTCF intensity within replication foci decreased over the chase time after replication labeling, suggesting that CTCF accumulates during ongoing DNA replication. Depletion of CTCF correlated with impaired cell cycle progression: CTCF-depleted cells stalled in G1 in a CTCF-dose-dependent manner, indicating that the chromatin structure provided by CTCF may be needed to enter S-phase properly. Additionally, CTCF was found to be particularly enriched at the replicating inactive X and Y chromosomes. Depletion of CTCF led to a loss of synchrony in the DNA replication of the Y chromosome, and the Y chromosome architecture showed changes in volume and shape upon CTCF reduction. In the second part of this work, the cohesin subunit RAD21 was shown to be essential for determining the structure of chromatin loops: RAD21-depleted cells exhibited an increase in the size and shape heterogeneity of chromatin loops. The cohesin component SA1 was investigated for a role in DNA damage signaling. SA1 knockout cells showed impaired γH2AX foci at all tested X-ray doses, with the repair functional units decreasing in number, volume, and intensity in the absence of SA1. In conclusion, the results presented here lead to the proposal that the functions of DNA replication and repair are determined by the chromatin architecture, with structure dictating function. Future work should further investigate the mechanisms by which these two chromatin architectural proteins regulate global genomic functions and define the precise interplay between cohesin and CTCF within such regulation.
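
    The γH2AX readout mentioned above (number, volume, and intensity of repair foci) is commonly computed by thresholding a nuclear image stack and labeling connected components. The sketch below illustrates that generic measurement with scipy; the threshold, voxel size, and synthetic image are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def measure_foci(stack, threshold, voxel_um3=0.02):
    """Count focus-like objects in a 3-D image stack and report volume and intensity.

    stack      : 3-D fluorescence intensity array (one nucleus)
    threshold  : intensity above which a voxel is considered part of a focus
    voxel_um3  : voxel volume in cubic micrometres (illustrative value)
    """
    mask = stack > threshold
    labels, n_foci = ndimage.label(mask)
    idx = list(range(1, n_foci + 1))
    volumes = ndimage.sum(mask, labels, idx) * voxel_um3      # per-focus volume
    intensities = ndimage.sum(stack, labels, idx)             # per-focus integrated intensity
    return {"n_foci": n_foci,
            "mean_volume_um3": float(np.mean(volumes)) if n_foci else 0.0,
            "mean_intensity": float(np.mean(intensities)) if n_foci else 0.0}

# Toy usage: a synthetic nucleus with three bright foci added to background noise.
rng = np.random.default_rng(4)
stack = rng.normal(100, 10, (32, 64, 64))
for z, y, x in [(10, 20, 20), (15, 40, 30), (20, 10, 50)]:
    stack[z-1:z+2, y-2:y+3, x-2:x+3] += 300
print(measure_foci(stack, threshold=200))
```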

    Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets

    Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many approaches to quality control use specialized image-processing software based on manually engineered features, developed by domain experts, to detect objects and analyze images. However, these models are laborious and costly to develop and difficult to maintain, while the resulting solution is often brittle and requires substantial adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach based on recent advances in computer vision, using convolutional neural networks to learn representative features directly from the data. Whereas conventional methods use handcrafted features to detect individual objects, deep learning approaches learn generalizable features directly from the training samples in order to detect various objects. In this dissertation, models and techniques are developed for the automated detection of defects in light-microscopy images of materialographically prepared cross-sections. We develop defect detection models that can be broadly divided into supervised and unsupervised deep learning techniques. In particular, several supervised deep learning models are developed for detecting defects in the microstructure of lithium-ion batteries, ranging from binary classification models based on a sliding-window approach with limited training data to complex defect detection and localization models based on one- and two-stage detectors. Our final model can detect and localize multiple classes of defects in large microscopy images with high accuracy and in near real time. Successfully training supervised deep learning models, however, typically requires a sufficiently large amount of labeled training examples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets without the need for labeled training data. These models detect defects by learning indicative features of only "normal" microstructure patterns from the training data. We demonstrate experimental results of the proposed defect detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
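
    The unsupervised approach described above learns from "normal" microstructure patches only and flags deviations at test time. One common realization of this idea, sketched below in PyTorch, is a small convolutional autoencoder scored by reconstruction error; the architecture, patch size, and training setup are illustrative assumptions, not the dissertation's actual models.

```python
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Tiny convolutional autoencoder for 64x64 grayscale microstructure patches."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, patch):
    """Reconstruction error; high values suggest an anomalous microstructure patch."""
    with torch.no_grad():
        return torch.mean((model(patch) - patch) ** 2).item()

# Training sketch: fit on defect-free ("normal") patches only.
model = PatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
normal_patches = torch.rand(128, 1, 64, 64)          # stand-in for real normal patches
for epoch in range(10):
    loss = loss_fn(model(normal_patches), normal_patches)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At test time, patches whose score exceeds a threshold chosen on held-out
# normal patches are flagged as anomalous.
print(anomaly_score(model, torch.rand(1, 1, 64, 64)))
```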