
    ACCESSing Advanced National Supercomputing and Storage Resources for Computational Research

    This presentation will cover ACCESS (Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support) and Kennesaw State University's involvement in the Open Science Data Federation (OSDF) program as a data origin, helping researchers and educators, with or without supporting grants, to utilize the nation's advanced computing systems and services. ACCESS, a program established and funded by the National Science Foundation, is an ecosystem with capabilities for new modes of research that further democratizes participation. The presentation covers how to apply for allocations on ACCESS. The last part of the presentation briefly explains the Open Science Data Federation and Kennesaw State University's involvement as a data origin. OSDF is an OSG (Open Science Grid) service that supports the sharing of files staged in autonomous “origins,” enabling efficient access to those files from anywhere in the world via a global namespace and a network of caches.

    Kennesaw State University HPC Facilities and Resources

    The Kennesaw State University High Performance Computing (HPC) resources represent the University's commitment to research computing. This resource contains verbiage for users of Kennesaw State University's HPC resources to include in their grants and publications. Please use the recommended citation rather than including the listed authors in your citations.

    A Multistage Framework for Detection of Very Small Objects

    Small object detection is one of the most challenging problems in computer vision. Algorithms based on state-of-the-art object detection methods such as R-CNN, SSD, FPN, and YOLO fail to detect objects of very small sizes. In this study, we propose a novel method to detect very small objects, smaller than 8×8 pixels, that appear in a complex background. The proposed method is a multistage framework consisting of an unsupervised algorithm and three separately trained supervised algorithms. The unsupervised algorithm extracts ROIs from a high-resolution image. The ROIs are then upsampled using SRGAN, and the enhanced ROIs are detected by our two-stage cascade classifier based on two ResNet50 models. The maximum size of the images used for training the proposed framework is 32×32 pixels. The experiments are conducted using the rescaled German Traffic Sign Recognition Benchmark (GTSRB) dataset and the downsampled German Traffic Sign Detection Benchmark (GTSDB) dataset. Unlike the MS COCO and DOTA datasets, the resulting GTSDB turns out to be very challenging for any small object detection algorithm, due not only to the size of the objects of interest but also to the complex textures of the background. Our experimental results show that the proposed method detects small traffic signs with an average precision of 0.332 at an intersection over union of 0.3.
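    The average precision above is reported at an intersection-over-union (IoU) threshold of 0.3. As a minimal sketch (the box coordinates below are illustrative, not from the paper), IoU between a predicted and a ground-truth box can be computed as:

    ```python
    def iou(box_a, box_b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # A hypothetical 8x8-pixel detection vs. ground truth: the overlap
    # (IoU ~ 0.39) would count as a hit at the 0.3 threshold.
    pred = (10, 10, 18, 18)
    gt = (12, 12, 20, 20)
    print(iou(pred, gt) >= 0.3)  # True
    ```

    The relatively low 0.3 threshold reflects how little localization slack exists when the objects themselves are under 8×8 pixels.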

    A Survey of Social Network Forensics

    Social networks in any form, specifically online social networks (OSNs), are becoming a part of our everyday life in this new millennium, especially with advanced yet simple communication technologies available through easily accessible devices such as smartphones and tablets. The data generated through the use of these technologies need to be analyzed for forensic purposes when criminal and terrorist activities are involved. In order to deal with the forensic implications of social networks, current research on both digital forensics and social networks needs to be incorporated and understood. This will help digital forensics investigators predict, detect, and even prevent criminal activities in their different forms. It will also help researchers develop new models and techniques in the future. This paper provides a literature review of social network forensics methods, models, and techniques, serving as an overview for researchers planning future work as well as for law enforcement investigators handling crimes committed in cyberspace. It also provides awareness and defense methods for OSN users to protect them against social attacks.

    ExplainabilityAudit: An Automated Evaluation of Local Explainability in Rooftop Image Classification

    Explainable Artificial Intelligence (XAI) is a key concept in building trustworthy machine learning models. Local explainability methods seek to provide explanations for individual predictions. Usually, humans must check these explanations manually; when large numbers of predictions are being made, this approach does not scale. We address this deficiency for a rooftop classification problem with ExplainabilityAudit, a method that automatically evaluates explanations generated by a local explainability toolkit and identifies rooftop images that require further auditing by a human expert. The proposed method uses the explanations generated by the Local Interpretable Model-Agnostic Explanations (LIME) framework, which identify the most important superpixels of each validation rooftop image during prediction. A bag of image patches is then extracted from the superpixels to determine their texture and evaluate the local explanations. Our results show that 95.7% of the cases to be audited are detected by the proposed system.
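    The patch-extraction step can be illustrated with a minimal sketch, assuming a grayscale image and a boolean mask marking the LIME superpixels; the patch size and the use of local variance as a texture statistic are illustrative assumptions, not details from the paper:

    ```python
    import numpy as np

    def patch_texture_scores(image, mask, patch=4):
        """Collect a toy texture score (local variance) for each patch
        that lies fully inside the explanation's superpixel mask."""
        scores = []
        h, w = image.shape
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                if mask[y:y + patch, x:x + patch].all():
                    scores.append(image[y:y + patch, x:x + patch].var())
        return scores

    # Toy 8x8 image with the top-left 4x4 region marked as "important".
    rng = np.random.default_rng(0)
    img = rng.random((8, 8))
    mask = np.zeros((8, 8), dtype=bool)
    mask[:4, :4] = True
    scores = patch_texture_scores(img, mask)
    print(len(scores))  # 1: only one 4x4 patch fits entirely in the mask
    ```

    An auditing rule could then flag predictions whose explanation patches show implausibly low texture for a rooftop.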

    Directional Pairwise Class Confusion Bias and Its Mitigation

    Recent advances in Natural Language Processing have led to powerful and sophisticated models like BERT (Bidirectional Encoder Representations from Transformers) that nevertheless exhibit bias. These models are mostly trained on text corpora that deviate in important ways from the text encountered by a chatbot in a problem-specific context. While much past research has focused on measuring and mitigating bias with respect to protected attributes (stereotyping by gender, race, ethnicity, etc.), there is a lack of research on model bias with respect to classification labels. We investigate whether a classification model strongly favors one class over another. We introduce a bias evaluation method called directional pairwise class confusion bias that highlights a chatbot intent classification model's bias on pairs of classes. Finally, we also present two strategies to mitigate this bias using example biased pairs.
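    The pairwise, directional character of the bias can be sketched from an ordinary confusion matrix; the specific rate below (fraction of true-class-i examples predicted as class j, compared in both directions) is an illustrative assumption, not the paper's exact metric:

    ```python
    import numpy as np

    def directional_confusion(conf, i, j):
        """Fraction of true-class-i examples predicted as class j."""
        row_total = conf[i].sum()
        return conf[i, j] / row_total if row_total else 0.0

    # Toy 3-class confusion matrix: rows = true class, cols = predicted class.
    conf = np.array([
        [80, 15,  5],   # class 0 is often predicted as class 1...
        [ 2, 95,  3],   # ...but class 1 is rarely predicted as class 0
        [ 4,  6, 90],
    ])
    bias_01 = directional_confusion(conf, 0, 1)  # 0.15
    bias_10 = directional_confusion(conf, 1, 0)  # 0.02
    print(bias_01 - bias_10)  # a large asymmetry flags the (0, 1) pair
    ```

    A symmetric confusion rate would suggest the classes are merely hard to separate; the asymmetry is what indicates the model favors one class of the pair.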