
    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in every network, enabling downstream management and optimization. Although networks inherently generate abundant monitoring data, accessing and effectively measuring those data is another matter. The challenges are manifold. First, network monitoring data are inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network can be very expensive, given how networks keep growing, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, the size and complex structure of networks make their behavior challenging to understand and explain. Various optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges, but the fidelity and efficiency of existing methods do not yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring, leveraging cutting-edge machine learning technology, namely deep generative modeling, throughout. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data (e.g., land use information and population distribution) during inference. Second, we develop GENDT, an efficient drive testing system based on a generative model that combines graph neural networks, conditional generation, and quantified model uncertainty to improve the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system built on latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics for future work in this domain are discussed. All proposed solutions have been evaluated on real-world datasets and applied to support different applications in real systems.
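
    The abstract describes APPSHOT only at a high level; as a minimal, hypothetical sketch of the kind of conditional generation involved (an MLP generator mapping public contextual features plus noise to a traffic time series; none of the dimensions, layers, or feature names below come from the thesis):

    # Minimal, hypothetical sketch of conditional traffic synthesis in PyTorch.
    # NOT the APPSHOT implementation; dimensions and features are invented for
    # illustration: the generator maps public contextual features (e.g., land
    # use mix, population density) plus noise to a synthetic traffic series.
    import torch
    import torch.nn as nn

    class ConditionalTrafficGenerator(nn.Module):
        def __init__(self, ctx_dim=8, noise_dim=16, seq_len=24):
            super().__init__()
            self.noise_dim = noise_dim
            self.net = nn.Sequential(
                nn.Linear(ctx_dim + noise_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, seq_len), nn.Softplus(),  # volumes are non-negative
            )

        def forward(self, ctx):
            z = torch.randn(ctx.size(0), self.noise_dim)  # fresh noise per sample
            return self.net(torch.cat([ctx, z], dim=1))

    gen = ConditionalTrafficGenerator()
    context = torch.rand(4, 8)   # 4 regions x 8 contextual features
    print(gen(context).shape)    # torch.Size([4, 24]): hourly volumes for a day

    In a GAN setting this generator would be trained adversarially against a discriminator on real traffic traces; at inference time only the public context is needed, which is the property the abstract highlights.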

    UMSL Bulletin 2023-2024

    The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.

    UMSL Bulletin 2022-2023

    The 2022-2023 Bulletin and Course Catalog for the University of Missouri-St. Louis.

    Enhancing the forensic comparison process of common trace materials through the development of practical and systematic methods

    Ongoing advances in forensic trace evidence have driven the development of new, objective methods for comparing various materials. While many standard guides have been published for use in trace laboratories, several areas still require a more comprehensive understanding of error rates, and there is an urgent need to harmonize methods of examination and interpretation. Two critical areas are the forensic examination of physical fits and the comparison of spectral data, both of which depend heavily on the examiner's judgment. The long-term goal of this study is to advance and modernize the comparative process of physical fit examinations and spectral interpretation. This goal is pursued through several avenues: 1) improvement of quantitative-based methods for various trace materials, 2) scrutiny of those methods through interlaboratory exercises, and 3) investigation of fundamental aspects of the discipline using large experimental datasets, computational algorithms, and statistical analysis. A substantial new body of knowledge has been established by analyzing population sets of nearly 4,000 items representative of casework evidence. First, this research identifies material-specific relevant features for duct tapes and automotive polymers. Then, this study develops reporting templates that facilitate thorough, systematic documentation of an analyst's decision-making process and minimize risks of bias. It also establishes criteria for a quantitative edge similarity score (ESS) for tapes and automotive polymers that yields relatively high accuracy (85% to 100%) and, notably, no false positives. Finally, the practicality and performance of the ESS method for duct tape physical fits are evaluated by forensic practitioners through two interlaboratory exercises. Across these studies, accuracy using the ESS method ranges from 95% to 99%, and again no false positives are reported. The practitioners' feedback demonstrates the method's potential to assist in training and to improve peer verification. This research also develops and trains computational algorithms to support analysts making decisions on sample comparisons. The automated algorithms show the potential to provide objective, probabilistic support for determining a physical fit, with accuracy comparable to that of the analyst. Furthermore, additional models are developed to extract edge-feature information from the systematic comparison templates of tapes and textiles, providing insight into the relative importance of each comparison feature. A decision tree model is developed to assist physical fit examinations of duct tapes and textiles and demonstrates performance comparable to that of trained analysts. The computational tools also evaluate the suitability of partial sample comparisons, simulating situations where portions of an item are lost or damaged. Finally, an objective approach to interpreting complex spectral data is presented. A comparison metric consisting of spectral angle contrast ratios (SCAR) is used as a model to assess more than 94 different-source and 20 same-source electrical tape backings. The SCAR metric achieves a discrimination power of 96% and demonstrates the capacity to capture the variability between different-source samples and within same-source samples. Application of a random-forest model allows automatic detection of the primary differences between samples. The developed threshold could assist analysts in making decisions on the spectral comparison of chemically similar samples. This research provides the forensic science community with novel approaches to comparing materials commonly seen in forensic laboratories. The outcomes of this study are anticipated to offer forensic practitioners new, accessible tools for incorporation into current workflows, facilitating systematic and objective analysis and interpretation of forensic materials and supporting analysts' opinions.
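
    The abstract names the SCAR metric without defining it; assuming it builds on the standard spectral angle between two spectra (the arccosine of their normalized dot product), a minimal comparison sketch might look like the following (the contrast-ratio normalization itself is not specified here and is omitted):

    # Minimal sketch of a spectral-angle comparison between two spectra.
    # The thesis's SCAR metric is only named in the abstract; we assume it
    # builds on the standard spectral angle, i.e., the arccosine of the
    # normalized dot product, with 0 meaning identical spectral shape.
    import numpy as np

    def spectral_angle(x: np.ndarray, y: np.ndarray) -> float:
        """Angle in radians between two spectra; insensitive to overall scale."""
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Two same-source-like spectra: one is a scaled, slightly noisy copy.
    rng = np.random.default_rng(0)
    a = rng.random(500)
    b = 2.0 * a + 0.01 * rng.random(500)
    print(spectral_angle(a, b))  # small angle -> chemically similar samples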

    Recombinant spidroins from infinite circRNA translation

    Spidroins are a diverse family of peptides and the main components of spider silk. They can be used to produce sustainable, lightweight, and durable materials for a wide variety of medical and engineering applications. Spiders' territorial behaviour and cannibalism preclude farming them for silk, so recombinant protein synthesis is the most promising way of producing these peptides. However, many approaches have failed to obtain large titres of recombinant spidroins, or ones of sufficient molecular weight. The work described here focuses on expressing high-molecular-weight spidroins from short circular RNA molecules. Mammalian host cells were transfected with designed circular-RNA-producing plasmid vectors, and a backsplicing approach successfully circularised RNA in a variety of mammalian cell types. However, this approach did not yield detectable recombinant spidroins in a variety of qualitative protein assays, and further experiments investigated the reasons behind this. Additionally, given the diversity of spidroins across a large number of spider lineages, many spidroin sequences potentially remain to be discovered. A bioinformatic pipeline was developed that accepts transcriptome datasets from RNA sequencing and uses tandem repeat detection and profile HMM annotation to identify novel sequences; it was specifically designed to identify repeat domains in expressed sequences. Twenty-one transcriptomes from 17 different species, encompassing a wide selection of basal and derived spider lineages, were investigated with this pipeline, and six previously undescribed spidroin sequences were discovered. The pipeline was additionally tested on the suckerin protein family, whose members have recently been investigated for potential applications in medicine and engineering, including adhesion in wet environments; it doubled the number of suckerins known to date, and further phylogenetic analysis expanded on the knowledge of suckerins. This pipeline enables the identification of transcripts that may be overlooked by more mainstream analysis methods such as pairwise homology searches. The spidroins and suckerins it discovered may contribute to the large repertoire of potentially useful properties characteristic of this diverse peptide family.
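
    The pipeline itself (transcriptome input, tandem repeat detection, profile HMM annotation) is not spelled out in the abstract; as a toy illustration of the repeat-detection step only, a crude exact-repeat scan over a protein sequence could look like this sketch:

    # Toy illustration of tandem-repeat detection in a protein sequence; the
    # thesis pipeline (transcriptome assembly, profile HMM annotation) is far
    # more involved. Here we slide a window and count exact adjacent repeats,
    # the kind of low-complexity signal typical of spidroin repeat domains.
    def find_tandem_repeats(seq: str, min_unit=4, max_unit=12, min_copies=3):
        hits = []
        for unit_len in range(min_unit, max_unit + 1):
            i = 0
            while i + unit_len <= len(seq):
                unit, copies = seq[i:i + unit_len], 1
                while seq[i + copies * unit_len:i + (copies + 1) * unit_len] == unit:
                    copies += 1
                if copies >= min_copies:
                    hits.append((i, unit, copies))
                    i += copies * unit_len
                else:
                    i += 1
        return hits

    # Spidroin-like glycine/alanine-rich toy sequence.
    print(find_tandem_repeats("MS" + "GPGGA" * 6 + "AAAAAAAA" + "GPGQQ" * 4))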

    2023-2024 Catalog

    The 2023-2024 Governors State University Undergraduate and Graduate Catalog is a comprehensive listing of current information regarding: Degree Requirements, Course Offerings, and Undergraduate and Graduate Rules and Regulations.

    The impact of semi-automated tools and machines on the attraction and retention of the New Zealand fruit industry workforce

    Semi-automation is being implemented by agricultural sectors globally in a bid to reap the many benefits of automation and alleviate labour crises. There is a lack of data on the impact of semi-automation on the New Zealand fruit industry workforce, particularly regarding attraction and retention. This thesis addresses that gap by exploring both the impact of semi-automation on attraction and retention and how it is perceived by the on-orchard workforce within the New Zealand fruit industry. The research questions for this study are: (1) What is the impact of semi-automation on the attraction of the New Zealand fruit industry's on-orchard workforce? (2) What is the impact of semi-automation on the retention of the New Zealand fruit industry's on-orchard workforce? (3) How does the New Zealand fruit industry's on-orchard workforce perceive semi-automation? Purposive (non-probabilistic) sampling was used to select 20 participants from five stakeholder/employee groups across seven New Zealand fruit sectors. Semi-structured interviews were conducted and analysed using the General Inductive Approach. Four major themes emerged: (1) attraction and retention in the fruit industry, (2) the presence of semi-automation, (3) the impact of semi-automation, and (4) perceptions of semi-automation. The findings show that where semi-automation is applied and supported, it positively affects attraction to and retention in the industry through a widened labour pool, improved health and safety, better working conditions, and improved efficiency of tasks and information flows. This research provides a useful resource for human resource management, capturing current industry realities and offering recommendations for responding to the agricultural revolution.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in different fields of application and in mathematics, and it is available in open access. The collected contributions have either been published or presented in international conferences, seminars, workshops, and journals since the fourth volume appeared in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part presents theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, together with their Matlab codes. Because more applications of DSmT have emerged since the fourth volume in 2015, the second part covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, a generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
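
    For readers new to DSmT, a small worked example of the two-source PCR5 rule that recurs throughout the volume may help (this is the standard published formula, not code from the book): the conjunctive masses are computed first, and each partial conflict m1(X)m2(Y) with X and Y disjoint is redistributed back to X and Y proportionally to the masses that generated it.

    # Worked example of the two-source PCR5 rule on the frame {A, B}, with
    # focal elements A, B and the union A|B (written "AuB"). Standard PCR5
    # formula from the DSmT literature, not code taken from the book.

    def intersect(x, y):
        """Set intersection on {A, B, AuB}; None means the empty set."""
        if x == "AuB":
            return y
        if y == "AuB":
            return x
        return x if x == y else None

    def pcr5(m1, m2):
        out = {f: 0.0 for f in ("A", "B", "AuB")}
        for x, mx in m1.items():
            for y, my in m2.items():
                z = intersect(x, y)
                if z is not None:          # conjunctive consensus
                    out[z] += mx * my
                elif mx + my > 0:          # redistribute the partial conflict
                    out[x] += mx * mx * my / (mx + my)
                    out[y] += my * my * mx / (mx + my)
        return out

    m1 = {"A": 0.6, "B": 0.3, "AuB": 0.1}
    m2 = {"A": 0.2, "B": 0.7, "AuB": 0.1}
    print(pcr5(m1, m2))  # {'A': ~0.418, 'B': ~0.572, 'AuB': 0.01}, sums to 1

    Note how the total conflict of 0.48 is not discarded (as in Dempster's rule) but split between A and B in proportion to the conflicting masses, which is the neutrality property the improved PCR5/PCR6 rules in the first part refine further.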

    ACORN: Input Validation for Secure Aggregation

    Secure aggregation enables a server to learn the sum of client-held vectors in a privacy-preserving way, and it has been successfully applied to distributed statistical analysis and machine learning. In this paper, we both introduce a more efficient secure aggregation construction and extend secure aggregation with input validation, in which the server can check that clients' inputs satisfy required constraints such as $L_0$, $L_2$, and $L_\infty$ bounds. This prevents malicious clients from gaining disproportionate influence on the computed aggregate statistics or machine learning model. Our new secure aggregation protocol improves the computational efficiency of the state-of-the-art protocol of Bell et al. (CCS 2020) both asymptotically and concretely: experimental evaluation shows 2-8X speedups in client computation in practical scenarios. Likewise, our extended protocol with input validation improves on prior work by more than 30X in terms of client communication (with comparable computation costs). Compared to the base protocols without input validation, the extended protocols incur only 0.1X additional communication, and they can process binary indicator vectors of length 1M, or 16-bit dense vectors of length 250K, in under 80s of computation per client.
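
    ACORN's actual construction (and its zero-knowledge input validation) is well beyond a short sketch, but the pairwise-masking idea underlying this line of secure-aggregation work fits in a few lines: each pair of clients derives a mask from a shared seed, one adds it and the other subtracts it, so the masks cancel in the server's sum and only the aggregate is revealed. The toy below omits everything that makes the paper interesting (dropout handling, malicious security, and the norm checks on inputs):

    # Toy illustration of pairwise-masked secure aggregation: every client
    # pair shares a seed; client i adds the pairwise mask if i < j and
    # subtracts it if i > j, so all masks cancel in the server-side sum.
    import random

    MOD = 2 ** 32
    n_clients, dim = 5, 8
    inputs = [[random.randrange(100) for _ in range(dim)] for _ in range(n_clients)]

    def mask(seed: int) -> list[int]:
        rng = random.Random(seed)           # stand-in for a PRG on the seed
        return [rng.randrange(MOD) for _ in range(dim)]

    seeds = {(i, j): random.randrange(2 ** 62)
             for i in range(n_clients) for j in range(i + 1, n_clients)}

    def client_message(i: int) -> list[int]:
        msg = list(inputs[i])
        for j in range(n_clients):
            if j == i:
                continue
            m = mask(seeds[(min(i, j), max(i, j))])
            sign = 1 if i < j else -1
            msg = [(v + sign * mv) % MOD for v, mv in zip(msg, m)]
        return msg

    # The server sums the masked vectors; the pairwise masks cancel mod 2^32.
    agg = [sum(col) % MOD for col in zip(*(client_message(i) for i in range(n_clients)))]
    assert agg == [sum(col) % MOD for col in zip(*inputs)]
    print(agg)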

    Human-Scene Network: A Novel Baseline with Self-rectifying Loss for Weakly supervised Video Anomaly Detection

    Video anomaly detection in surveillance systems with only video-level labels (i.e., weakly supervised) is challenging. This is due to (i) the complex interplay of human- and scene-based anomalies, comprising subtle and sharp spatio-temporal cues in real-world scenarios, and (ii) non-optimal optimization between normal and anomalous instances under weak supervision. In this paper, we propose a Human-Scene Network that learns discriminative representations by capturing both subtle and strong cues in a dissociative manner. In addition, we propose a self-rectifying loss that dynamically computes pseudo temporal annotations from video-level labels to optimize the Human-Scene Network effectively. The proposed Human-Scene Network optimized with the self-rectifying loss is validated on three publicly available datasets, i.e., UCF-Crime, ShanghaiTech, and IITB-Corridor, outperforming recently reported state-of-the-art approaches on five of the six scenarios considered.
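
    The self-rectifying loss is only named in the abstract; for orientation, a common weakly supervised baseline derives pseudo temporal targets from a video-level label by letting the top-k snippet scores stand in for temporal annotations. The sketch below shows that generic top-k multiple-instance idea, not the paper's loss:

    # Minimal sketch of a top-k multiple-instance baseline for weakly
    # supervised video anomaly detection: with only a video-level label,
    # the k highest snippet scores act as pseudo temporal annotations.
    # Illustrates the general pseudo-labeling idea, not the paper's loss.
    import torch
    import torch.nn.functional as F

    def topk_mil_loss(snippet_scores: torch.Tensor, video_label: int, k: int = 3):
        """snippet_scores: (T,) anomaly scores in (0, 1) for one video."""
        topk = snippet_scores.topk(k).values.mean()
        target = torch.tensor(float(video_label))
        # Anomalous videos should score high on their top-k snippets,
        # normal videos low on all snippets.
        return F.binary_cross_entropy(topk, target)

    scores = torch.rand(32, requires_grad=True)      # e.g., 32 snippets
    loss = topk_mil_loss(torch.sigmoid(scores), video_label=1)
    loss.backward()
    print(loss.item())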