
    Joint Asymptotics for Estimating the Fractal Indices of Bivariate Gaussian Processes

    Multivariate (or vector-valued) processes are important for modeling multiple variables. The fractal indices of the components of the underlying multivariate process play a key role in characterizing the dependence structures and statistical properties of the multivariate process. In this paper, under the infill asymptotics framework, we establish joint asymptotic results for the increment-based estimators of bivariate fractal indices. Our main results quantitatively describe the effect of the cross-dependence structure on the performance of the estimators.
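The increment-based idea behind such estimators can be sketched for a single component: since E|X(t+h) - X(t)|^2 ≈ c·h^α as h → 0, the fractal index α is recoverable from a log-ratio of empirical second moments of increments at two infill scales. A minimal one-dimensional illustration, not the paper's joint bivariate estimator:

```python
import numpy as np

def fractal_index(x):
    """Estimate the fractal index alpha of a path observed on a regular grid.
    Since E|X(t+h) - X(t)|^2 ~ c * h^alpha as h -> 0, comparing the empirical
    second moment of increments at lags h and 2h recovers alpha as a log2 ratio."""
    v1 = np.mean(np.diff(x) ** 2)        # increments at lag h
    v2 = np.mean(np.diff(x[::2]) ** 2)   # increments at lag 2h
    return np.log2(v2 / v1)

# Brownian motion has fractal index alpha = 1 (Hurst exponent H = 1/2).
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(200_000)) * 0.01
alpha = fractal_index(bm)
```

On a long Brownian path the estimate lands close to the theoretical value of 1, which is the infill-asymptotics consistency the paper studies jointly for two components.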

    Augmenting Knowledge Transfer across Graphs

    Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance? In many high-impact domains (e.g., brain networks and molecular graphs), collecting and annotating data is prohibitively expensive and time-consuming, which makes domain adaptation an attractive option to alleviate the label scarcity issue. In light of this, the state-of-the-art methods focus on deriving domain-invariant graph representations that minimize the domain discrepancy. However, it has recently been shown that a small domain discrepancy loss may not always guarantee a good generalization performance, especially in the presence of disparate graph structures and label distribution shifts. In this paper, we present TRANSNET, a generic learning framework for augmenting knowledge transfer across graphs. In particular, we introduce a novel notion named trinity signal that can naturally formulate various graph signals at different granularities (e.g., node attributes, edges, and subgraphs). With that, we further propose a domain unification module together with a trinity-signal mixup scheme to jointly minimize the domain discrepancy and augment the knowledge transfer across graphs. Finally, comprehensive empirical results show that TRANSNET outperforms all existing approaches on seven benchmark datasets by a significant margin.
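A mixup-style augmentation between same-shaped source-domain and target-domain graph signals can be sketched as below; the function name, the Beta-distributed mixing weight, and the interface are illustrative assumptions, not TRANSNET's actual trinity-signal mixup:

```python
import numpy as np

def signal_mixup(x_src, x_tgt, alpha=0.2, rng=None):
    """Hedged sketch of a mixup-style augmentation between a source-domain
    signal and a target-domain signal of the same shape (e.g. node-attribute
    matrices). lam ~ Beta(alpha, alpha), as in standard mixup; the names and
    shapes here are illustrative, not the paper's interface."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # convex mixing weight in (0, 1)
    return lam * x_src + (1.0 - lam) * x_tgt, lam
```

The same convex combination would apply to any signal granularity that can be embedded as a tensor, which is the appeal of unifying node, edge, and subgraph signals first.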

    Bagging Improves the Performance of Deep Learning-Based Semantic Segmentation with Limited Labeled Images: A Case Study of Crop Segmentation for High-Throughput Plant Phenotyping

    Advancements in imaging, computer vision, and automation have revolutionized various fields, including field-based high-throughput plant phenotyping (FHTPP). This integration allows for the rapid and accurate measurement of plant traits. Deep Convolutional Neural Networks (DCNNs) have emerged as a powerful tool in FHTPP, particularly in crop segmentation—identifying crops from the background—crucial for trait analysis. However, the effectiveness of DCNNs often hinges on the availability of large, labeled datasets, which poses a challenge due to the high cost of labeling. In this study, a bagging-based deep learning approach is introduced to enhance crop segmentation using high-resolution RGB images, tested on the NU-Spidercam dataset from maize plots. The proposed method outperforms traditional machine learning and deep learning models in prediction accuracy and speed. Remarkably, it achieves up to 40% higher Intersection-over-Union (IoU) than the threshold method and 11% over conventional machine learning, with significantly faster prediction times and manageable training duration. Crucially, it demonstrates that even small labeled datasets can yield high accuracy in semantic segmentation. This approach not only proves effective for FHTPP but also suggests potential for broader application in remote sensing, offering a scalable solution to semantic segmentation challenges. This paper is accompanied by publicly available source code.
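The bagging step itself is simple: each ensemble member is trained on a bootstrap resample of the labeled images, and their per-pixel class probabilities are averaged before thresholding into a crop mask. A minimal sketch of the aggregation only (model training is elided; the names are illustrative):

```python
import numpy as np

def bagged_segmentation(prob_maps, threshold=0.5):
    """Combine per-pixel foreground probabilities from an ensemble of
    segmentation models, each trained on a bootstrap resample of the labeled
    images, by simple averaging; the final crop mask thresholds the mean."""
    mean_prob = np.mean(np.stack(prob_maps), axis=0)   # ensemble average
    return (mean_prob >= threshold).astype(np.uint8)   # binary crop mask
```

Averaging probabilities before thresholding is what lets a handful of weak members trained on small resamples stabilize each other's per-pixel errors.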

    A new image encryption based on hybrid heterogeneous time-delay chaotic systems

    Chaos theory has been widely utilized in cipher design, yielding encryption algorithms with strong security and high efficiency. However, rapid advances in cryptanalysis have rendered sequences generated by a single system susceptible to tracking and simulation, compromising the security of such encryption algorithms. To address this issue, we propose an image encryption algorithm based on hybrid heterogeneous time-delay chaotic systems. Our algorithm utilizes a collection of sequences generated by multiple heterogeneous time-delay chaotic systems, rather than sequences from a single chaotic system. Specifically, three sequences are randomly assigned to the image pixel scrambling and diffusion operations. Furthermore, the time-delay chaotic system comprises multiple hyperchaotic systems with positive Lyapunov exponents, exhibiting more complex dynamic behavior than non-delay chaotic systems. Because our encryption algorithm is built from a plurality of time-delay chaotic systems, it has a larger key space, enhanced security, and produces encrypted images that are more difficult to crack. Simulation results verify that our algorithm exhibits superior encryption efficiency and security compared to other encryption algorithms.
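The scrambling-plus-diffusion pattern can be sketched with a single logistic map standing in for the paper's hybrid time-delay chaotic generators; the map, key values, and parameters below are illustrative assumptions, not the proposed system:

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Stand-in keystream generator (logistic map in its chaotic regime);
    the paper draws its sequences from hybrid heterogeneous time-delay
    chaotic systems instead."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(img, key=(0.31, 0.57)):
    """Scramble pixel positions with one chaotic sequence (via argsort),
    then diffuse pixel values by XOR with a second, independently seeded one."""
    flat = img.ravel()
    perm = np.argsort(logistic_stream(key[0], flat.size))        # scrambling
    stream = (logistic_stream(key[1], flat.size) * 256).astype(np.uint8)
    return (flat[perm] ^ stream).reshape(img.shape)              # diffusion

def decrypt(cipher, key=(0.31, 0.57)):
    """Invert diffusion, then undo the permutation; both are key-derived."""
    flat = cipher.ravel()
    perm = np.argsort(logistic_stream(key[0], flat.size))
    stream = (logistic_stream(key[1], flat.size) * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ stream
    return plain.reshape(cipher.shape)
```

Assigning the scrambling and diffusion streams to independently seeded (or, in the paper, structurally different) generators is what prevents an attacker who models one sequence from recovering both stages.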

    Reliability model of organization management chain of South-to-North Water Diversion Project during construction period

    In order to analyze the indispensability of the organization management chain of the South-to-North Water Diversion Project (SNWDP), two basic forms (the series connection state and the mixed state of both series and parallel connection) can be abstracted. The indispensability of each form has been studied and is described in this paper. Through analysis of the reliability of the two basic forms, reliability models of the organization management chain in the series connection state and in the mixed series-parallel state have been set up.
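The standard reliability formulas underlying such models: a series chain works only if every link works, R = ∏ R_i, while a parallel group fails only if every branch fails, R = 1 − ∏(1 − R_i). A minimal sketch with illustrative component reliabilities, not the paper's estimates:

```python
import math

def series_reliability(rs):
    """A series chain works only if every link works: R = prod(R_i)."""
    return math.prod(rs)

def parallel_reliability(rs):
    """A parallel group fails only if every branch fails: R = 1 - prod(1 - R_i)."""
    return 1.0 - math.prod(1.0 - r for r in rs)

# Mixed state: a series chain in which one link is a redundant parallel pair.
r = series_reliability([0.95, parallel_reliability([0.9, 0.9]), 0.98])
```

Note how redundancy lifts the weakest link: a single 0.9 link would cap the chain well below the 0.92 achieved with the 0.99-reliable parallel pair.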

    A Matrix Ensemble Kalman Filter-based Multi-arm Neural Network to Adequately Approximate Deep Neural Networks

    Deep Learners (DLs) are the state-of-the-art predictive mechanism with applications in many fields requiring complex high-dimensional data processing. Although conventional DLs are trained via gradient descent with back-propagation, Kalman Filter (KF)-based techniques that do not need gradient computation have been developed to approximate DLs. We propose a multi-arm extension of a KF-based DL approximator that can mimic a DL when the sample size is too small to train a multi-arm DL. The proposed Matrix Ensemble Kalman Filter-based multi-arm ANN (MEnKF-ANN) also performs explicit model stacking, which becomes relevant when the training sample has unequal-size feature sets. Our proposed technique can approximate Long Short-Term Memory (LSTM) networks and attach uncertainty to the predictions obtained from these LSTMs with desirable coverage. We demonstrate how MEnKF-ANN can "adequately" approximate an LSTM network trained to classify which carbohydrate substrates are digested and utilized by a microbiome sample whose genomic sequences consist of polysaccharide utilization loci (PULs) and their encoded genes. (18 pages, 6 figures, and 6 tables.)
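A single stochastic ensemble Kalman analysis step, the gradient-free building block of such KF-based trainers, can be sketched as follows; this is a generic vector EnKF update, not the paper's matrix-variate MEnKF-ANN:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_cov, rng=None):
    """One stochastic EnKF analysis step: each ensemble member (a weight
    vector, in the KF-as-trainer view) is nudged toward a perturbed copy of
    the observation y through the sample Kalman gain. No gradients needed;
    the gain is built entirely from ensemble statistics."""
    rng = rng or np.random.default_rng()
    X = np.asarray(ensemble, dtype=float)               # shape (m, d)
    m = X.shape[0]
    Xc = X - X.mean(axis=0)
    P = Xc.T @ Xc / (m - 1)                             # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)  # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), obs_cov, size=m)
    return X + (y_pert - X @ H.T) @ K.T                 # analysis ensemble
```

Because the update moves a whole ensemble rather than a point estimate, the spread of the analysis ensemble is what supplies the prediction uncertainty the paper attaches to its LSTM approximations.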

    Dynamic Transfer Learning across Graphs

    Transferring knowledge across graphs plays a pivotal role in many high-stakes domains, ranging from transportation networks to e-commerce networks, from neuroscience to finance. To date, the vast majority of existing works assume both source and target domains are sampled from a universal and stationary distribution. However, many real-world systems are intrinsically dynamic, where the underlying domains are evolving over time. To bridge the gap, we propose to shift the problem to the dynamic setting and ask: given the label-rich source graphs and the label-scarce target graphs observed in previous T timestamps, how can we effectively characterize the evolving domain discrepancy and optimize the generalization performance of the target domain at the incoming T+1 timestamp? To answer the question, for the first time, we propose a generalization bound under the setting of dynamic transfer learning across graphs, which implies the generalization performance is dominated by domain evolution and domain discrepancy between source and target domains. Inspired by the theoretical results, we propose a novel generic framework DyTrans to improve knowledge transferability across dynamic graphs. In particular, we start with a transformer-based temporal encoding module to model temporal information of the evolving domains; then, we further design a dynamic domain unification module to efficiently learn domain-invariant representations across the source and target domains. Finally, extensive experiments on various real-world datasets demonstrate the effectiveness of DyTrans in transferring knowledge from dynamic source domains to dynamic target domains.

    An image encryption algorithm based on the double time-delay Lorenz system

    Traditional image encryption technology suffers from low encryption efficiency and weak security. Exploiting the characteristics of image information, an image encryption algorithm based on double time-delay chaos is proposed by combining a delayed chaotic system with traditional encryption techniques. Because of the infinite dimension and complex dynamic behavior of the delayed chaotic system, it is difficult to simulate with AI techniques. Furthermore, the time delay and the position at which it enters the system become additional elements of the key space. The stability and the existence conditions of Hopf bifurcation of the double-delay Lorenz system at the equilibrium point are studied by nonlinear dynamics theory, and the critical delay value for Hopf bifurcation is obtained. The system intercepts a pseudo-random sequence in the chaotic state and encrypts the image by means of scrambling and diffusion operations. The algorithm is simulated and analyzed with respect to key space size, key sensitivity, plaintext image sensitivity, and plaintext histogram. The results show that the algorithm produces a satisfactory scrambling effect and can effectively encrypt and decrypt images without distortion. Moreover, the scheme is not only robust to statistical attacks, chosen-plaintext attacks, and noise, but also highly stable.
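The role of the delay can be illustrated with any time-delay chaotic generator; below, the Mackey-Glass equation stands in for the paper's double time-delay Lorenz system (an assumption for illustration only). The delayed state x(t − τ) makes the system formally infinite-dimensional, since a whole history segment is needed as the initial condition, and τ itself joins the key space:

```python
import numpy as np

def mackey_glass(n, tau=17.0, dt=0.1, beta=0.2, gamma=0.1, x0=1.2):
    """Stand-in time-delay chaotic generator (Mackey-Glass, Euler-stepped):
        dx/dt = beta * x(t - tau) / (1 + x(t - tau)^10) - gamma * x(t).
    The initial condition is an entire history segment of length tau, which
    is why delayed systems are infinite-dimensional and why tau becomes an
    extra key element alongside the usual initial values."""
    lag = int(round(tau / dt))
    x = np.full(n + lag, x0)          # constant history over [-tau, 0]
    for t in range(lag, n + lag - 1):
        xd = x[t - lag]               # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd ** 10) - gamma * x[t])
    return x[lag:]

stream = mackey_glass(5000)
```

A keystream sampled from such a trajectory would then feed the same scrambling and diffusion stages used by ordinary chaotic ciphers, with the delay parameters enlarging the key space.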

    Joint hierarchical models for sparsely sampled high-dimensional LiDAR and forest variables

    Recent advancements in remote sensing technology, specifically Light Detection and Ranging (LiDAR) sensors, provide the data needed to quantify forest characteristics at a fine spatial resolution over large geographic domains. From an inferential standpoint, there is interest in prediction and interpolation of the often sparsely sampled and spatially misaligned LiDAR signals and forest variables. We propose a fully process-based Bayesian hierarchical model for aboveground biomass (AGB) and LiDAR signals. The process-based framework offers richness in inferential capabilities, e.g., inference on the entire underlying processes instead of estimates only at pre-specified points. Key challenges we obviate include misalignment between the AGB observations and LiDAR signals and the high dimensionality in the model emerging from LiDAR signals in conjunction with the large number of spatial locations. We offer simulation experiments to evaluate our proposed models and also apply them to a challenging dataset comprising LiDAR and spatially coinciding forest inventory variables collected on the Penobscot Experimental Forest (PEF), Maine. Our key substantive contributions include AGB data products with associated measures of uncertainty for the PEF and, more broadly, a methodology that should find use in a variety of current and upcoming forest variable mapping efforts using sparsely sampled remotely sensed high-dimensional data.