10 research outputs found

    RELLISUR: A Real Low-Light Image Super-Resolution Dataset

    The RELLISUR dataset contains real low-light, low-resolution images paired with normal-light, high-resolution reference counterparts. It aims to fill the gap between low-light image enhancement and low-resolution image enhancement (super-resolution, SR), which are currently addressed only separately in the literature, even though the visibility of real-world images is often limited by both low light and low resolution. The dataset contains 12,750 paired images at different resolutions and degrees of low-light illumination, to facilitate the training of deep-learning-based models that perform a direct mapping from degraded, low-visibility images to high-quality, detail-rich images of high resolution.
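    As an illustration of how such paired data is typically consumed, the sketch below loads (low-light low-resolution, normal-light high-resolution) image pairs for training. The directory layout, file naming, and PyTorch tooling are assumptions made for illustration, not the official RELLISUR loader.

```python
# Minimal sketch of a paired-image loader for a dataset like RELLISUR.
# Assumed (hypothetical) layout: <root>/lq/*.png for degraded inputs and
# <root>/gt/*.png for the normal-light high-resolution references.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class PairedLowLightSRDataset(Dataset):
    """Yields (low-light low-resolution, normal-light high-resolution) pairs."""

    def __init__(self, root: str):
        root = Path(root)
        self.lq_paths = sorted((root / "lq").glob("*.png"))
        self.gt_paths = sorted((root / "gt").glob("*.png"))
        assert len(self.lq_paths) == len(self.gt_paths), "unpaired images"
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.lq_paths)

    def __getitem__(self, idx):
        lq = self.to_tensor(Image.open(self.lq_paths[idx]).convert("RGB"))
        gt = self.to_tensor(Image.open(self.gt_paths[idx]).convert("RGB"))
        return lq, gt  # a model learns the direct mapping lq -> gt
```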

    MultIOD: Rehearsal-free Multihead Incremental Object Detector

    Class-incremental learning (CIL) is the ability of artificial agents to accommodate new classes as they appear in a stream. It is particularly interesting in evolving environments where agents have limited access to memory and computational resources. The main challenge of class-incremental learning is catastrophic forgetting, the inability of neural networks to retain past knowledge when learning new knowledge. Unfortunately, most existing class-incremental object detectors are built on two-stage algorithms such as Faster R-CNN and rely on rehearsal memory to retain past knowledge. We believe that the current benchmarks are not realistic, and that more effort should be dedicated to anchor-free and rehearsal-free object detection. In this context, we propose MultIOD, a class-incremental object detector based on CenterNet. Our main contributions are: (1) we propose a multihead feature pyramid and multihead detection architecture to efficiently separate class representations, (2) we employ transfer learning between classes learned initially and those learned incrementally to tackle catastrophic forgetting, and (3) we use class-wise non-maximum suppression as a post-processing technique to remove redundant boxes. Without bells and whistles, our method outperforms a range of state-of-the-art methods on two Pascal VOC datasets. Comment: Under review at the WACV 2024 conference.
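    The class-wise non-maximum suppression mentioned in contribution (3) simply runs NMS independently per predicted class, so boxes of different classes never suppress each other. A minimal sketch using torchvision's NMS primitive follows; tensor shapes and the IoU threshold are illustrative choices, not values taken from the paper.

```python
# Hedged sketch of class-wise non-maximum suppression.
import torch
from torchvision.ops import nms


def classwise_nms(boxes, scores, labels, iou_threshold=0.5):
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,); labels: (N,) int class ids."""
    keep_per_class = []
    for cls in labels.unique():
        idx = (labels == cls).nonzero(as_tuple=True)[0]
        kept = nms(boxes[idx], scores[idx], iou_threshold)  # suppress within one class only
        keep_per_class.append(idx[kept])
    if not keep_per_class:
        return boxes[:0], scores[:0], labels[:0]
    keep = torch.cat(keep_per_class)
    return boxes[keep], scores[keep], labels[keep]


# Example: two heavily overlapping boxes of different classes both survive.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
labels = torch.tensor([0, 1])
print(classwise_nms(boxes, scores, labels))
```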

    Revealing More Details: Image Super-Resolution for Real-World Applications


    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning published from 2016 to 2021. One aim of the review is to identify the video characteristics that have been explored in previous work. Based on our analysis, we suggest a taxonomy that organizes video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learners' activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. We also identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Transcript text, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activity is heavily explored through log files in data analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings on the impact of video characteristics on learning effectiveness, reporting on the tasks and technologies used to develop tools that support learning, and summarizing trends in design guidelines for producing learning videos.

    Enhancing Text Annotation with Few-shot and Active Learning: A Comprehensive Study and Tool Development

    The exponential growth of digital communication channels such as social media and messaging platforms has resulted in an unprecedented influx of unstructured text data, underscoring the need for Natural Language Processing (NLP) techniques. NLP techniques play a pivotal role in the analysis and comprehension of human language, enabling the processing of unstructured text data for tasks such as sentiment analysis, entity recognition, and text classification. NLP-driven applications are made possible by advances in deep learning models. However, deep learning models require large amounts of labeled data for training, making labeled data an indispensable component of these models. Obtaining labeled data can be a major challenge, as annotating large amounts of data is laborious and error-prone. Often, professional experts are hired for task-specific data annotation, which can be prohibitively expensive and time-consuming. Moreover, the annotation process can be subjective and lead to inconsistencies, resulting in models that are biased and less accurate. This thesis presents a comprehensive study of few-shot and active learning strategies, of systems that combine the two techniques, and of current text annotation tools, and proposes a solution that addresses the aforementioned challenges by integrating these methods. The proposed solution is an efficient text annotation platform that leverages few-shot and active learning techniques. It has the potential to advance the field of text annotation by enabling organizations to process vast amounts of unstructured text data efficiently, and this research paves the way for promising ideas and growth opportunities in this field.
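    The active-learning component of such a platform typically follows a query loop: train on the few labels available, score the unlabeled pool, and send the least confident items to the annotator. Below is a minimal uncertainty-sampling sketch; the classifier, vectorizer, toy texts, and batch size are illustrative assumptions, not the components used in the thesis.

```python
# Minimal sketch of one round of uncertainty-sampling active learning
# for text annotation. All data and model choices here are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def select_most_uncertain(model, X_pool, k=10):
    """Return indices of the k pool items the model is least confident about."""
    probs = model.predict_proba(X_pool)
    confidence = probs.max(axis=1)       # confidence in the top predicted class
    return np.argsort(confidence)[:k]    # lowest confidence first


# Seed round: a handful of labeled examples and a larger unlabeled pool.
labeled_texts, labels = ["great product", "terrible support"], [1, 0]
pool_texts = ["works fine", "never again", "okay I guess", "loved it"]

vectorizer = TfidfVectorizer().fit(labeled_texts + pool_texts)
model = LogisticRegression().fit(vectorizer.transform(labeled_texts), labels)

query_idx = select_most_uncertain(model, vectorizer.transform(pool_texts), k=2)
print([pool_texts[i] for i in query_idx])  # texts to send to the annotator next
```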