6,967 research outputs found

    Development of Pop-Up Book Learning Media to Improve Understanding of Teaching

    Get PDF
    This research follows the Borg and Gall development model. The product developed is a pop-up book for grade 4 students at elementary schools in Sidoarjo, Indonesia. Validation of the learning media against the criteria yielded a validity percentage of 100% from content experts, 90% from design experts, and 100% from linguists; the individual trial scored 100%, the small-group trial 97.14%, and the large-group trial 98.21%. A t-test performed in SPSS 16 at a significance level of 0.05 returned a p-value of 0.00 (< 0.05), indicating a significant difference between the pretest and posttest mean scores. This shows that the developed pop-up book learning media is effective in increasing the understanding of grade 4 students in Sidoarjo, Indonesia.
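    The abstract reports a pretest/posttest comparison evaluated with a t-test in SPSS 16. The snippet below is a minimal sketch of the same kind of paired comparison using SciPy rather than SPSS; the score lists are placeholders, not the study's data.

```python
# Minimal sketch of the pretest/posttest comparison described above, using SciPy
# instead of SPSS 16. The score lists are placeholders, not the study's data.
from scipy import stats

pretest = [55, 60, 58, 62, 50, 65, 59, 61]    # hypothetical pretest scores
posttest = [78, 85, 80, 88, 75, 90, 83, 86]   # hypothetical posttest scores

# Paired (dependent-samples) t-test: the same students are measured before and
# after learning with the pop-up book media.
t_stat, p_value = stats.ttest_rel(pretest, posttest)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference between pretest and posttest means.")
```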

    Aspect-Controlled Neural Argument Generation

    Full text link
    We rely on arguments in our daily lives to deliver our opinions and base them on evidence, making them more convincing in turn. However, finding and formulating arguments can be challenging. In this work, we train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect. We define argument aspect detection as a necessary method to allow this fine-grained control and crowdsource a dataset with 5,032 arguments annotated with aspects. Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments. Moreover, these arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments. We publish all datasets and code to fine-tune the language model.
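    To make the idea of aspect-controlled generation concrete, here is a hedged sketch of conditioning a causal language model on a topic/stance/aspect prefix. The prompt format and the use of GPT-2 are illustrative assumptions, not the paper's actual model or training setup.

```python
# Hedged sketch of fine-grained controlled argument generation: condition a causal
# language model on a topic/stance/aspect prefix. The prompt format and the use of
# GPT-2 are assumptions for illustration, not the paper's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical control prefix; a fine-tuned model would be trained on such prefixes.
prompt = "topic: nuclear energy | stance: con | aspect: waste disposal | argument:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```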

    A Lifespan and Beyond – Essay in Honor of Wolfgang Mitter

    Get PDF

    IoT Networks: Using Machine Learning Algorithm for Service Denial Detection in Constrained Application Protocol

    Get PDF
    The paper discusses the threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices remain vulnerable to attacks that disrupt their functioning. This research tackles the issue by applying mixed qualitative and quantitative methods for feature selection, feature extraction, and clustering to detect DoS attacks on CoAP using a Machine Learning Algorithm (MLA). The main objective is to enhance the security scheme for CoAP in the IoT environment by analysing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: the research identifies the appropriate node at which to detect DoS attacks in the IoT network environment and demonstrates how the attacks are detected through the MLA. Detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in training and testing, and the network simulation platform achieved the highest overall accuracy of 99.93%. The work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
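    Since the abstract names k-means as the best-performing algorithm, the following is a minimal sketch of clustering CoAP traffic feature vectors with scikit-learn and flagging one cluster as suspected DoS traffic. The feature names and the thresholding heuristic are illustrative assumptions, not the paper's feature set.

```python
# Minimal sketch of the clustering idea described above: group CoAP traffic feature
# vectors with k-means and treat one cluster as suspected DoS traffic. The feature
# names and the flagging heuristic are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-node features: [requests_per_second, mean_payload_bytes, mean_inter_arrival_ms]
traffic = np.array([
    [5,   40, 200.0],   # normal-looking traffic
    [6,   38, 180.0],
    [4,   45, 220.0],
    [300, 12,   3.0],   # flood-like traffic
    [280, 10,   4.0],
])

features = StandardScaler().fit_transform(traffic)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Heuristic: the cluster with the higher mean request rate is flagged as DoS.
rates = [traffic[labels == c, 0].mean() for c in range(2)]
dos_cluster = int(np.argmax(rates))
print("Suspected DoS samples:", np.where(labels == dos_cluster)[0])
```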

    Middleware Technologies for Cloud of Things - a survey

    Get PDF
    The next wave of communication and applications relies on the new services provided by the Internet of Things (IoT), which is becoming an important part of the future of both humans and machines. IoT services are a key solution for providing smart environments in homes, buildings, and cities. In an era with a massive and rapidly growing number of connected things and objects, several challenges have been raised, such as the management, aggregation, and storage of the large volumes of data produced. To tackle some of these issues, cloud computing was brought to the IoT as the Cloud of Things (CoT), which provides virtually unlimited cloud services to enhance large-scale IoT platforms. Several factors must be considered in the design and implementation of a CoT platform; one of the most important and challenging is the heterogeneity of different objects. This problem can be addressed by deploying suitable "middleware", which sits between things and applications and provides a reliable platform for communication among things with different interfaces, operating systems, and architectures. The main aim of this paper is to study middleware technologies for the CoT. Toward this end, we first present the main features and characteristics of middleware. Next, we study different architecture styles and service domains. We then present several middleware systems suitable for CoT-based platforms, and lastly we discuss a list of current challenges and issues in the design of CoT-based middleware. Comment: http://www.sciencedirect.com/science/article/pii/S2352864817301268, Digital Communications and Networks, Elsevier (2017)
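    The abstract's core architectural point is that middleware hides device heterogeneity behind a uniform interface. The sketch below is an illustrative toy example of that idea (not code from the survey): applications call one registry interface while per-protocol adapters handle device-specific details. All class names are hypothetical.

```python
# Illustrative sketch (not from the survey) of middleware hiding device
# heterogeneity: applications talk to a single interface while per-protocol
# adapters handle the device-specific details. All class names are hypothetical.
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Uniform interface the middleware exposes to applications."""

    @abstractmethod
    def read(self, resource: str) -> str: ...

class CoapAdapter(DeviceAdapter):
    def read(self, resource: str) -> str:
        # A real adapter would issue a CoAP GET here.
        return f"coap://device/{resource} -> 21.5"

class MqttAdapter(DeviceAdapter):
    def read(self, resource: str) -> str:
        # A real adapter would subscribe to an MQTT topic here.
        return f"mqtt topic {resource} -> 21.4"

class Middleware:
    def __init__(self) -> None:
        self._devices: dict[str, DeviceAdapter] = {}

    def register(self, device_id: str, adapter: DeviceAdapter) -> None:
        self._devices[device_id] = adapter

    def read(self, device_id: str, resource: str) -> str:
        return self._devices[device_id].read(resource)

mw = Middleware()
mw.register("sensor-1", CoapAdapter())
mw.register("sensor-2", MqttAdapter())
print(mw.read("sensor-1", "temperature"))
print(mw.read("sensor-2", "home/temperature"))
```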

    Unlock Multi-Modal Capability of Dense Retrieval via Visual Module Plugin

    Full text link
    This paper proposes the Multi-modAl Retrieval model via Visual modulE pLugin (MARVEL), which learns an embedding space for queries and multi-modal documents to conduct retrieval. MARVEL encodes queries and multi-modal documents with a unified encoder model, which helps to alleviate the modality gap between images and texts. Specifically, we enable the image understanding ability of a well-trained dense retriever, T5-ANCE, by incorporating the image features encoded by the visual module as its inputs. To facilitate multi-modal retrieval tasks, we build the ClueWeb22-MM dataset based on the ClueWeb22 dataset, which regards anchor texts as queries and extracts the related texts and image documents from the anchor-linked web pages. Our experiments show that MARVEL significantly outperforms state-of-the-art methods on the multi-modal retrieval datasets WebQA and ClueWeb22-MM. Our further analyses show that the visual module plugin method is tailored to enabling image understanding in an existing dense retrieval model. Besides, we also show that the language model has the ability to extract image semantics from image encoders and adapt the image features to the input space of language models. All code is available at https://github.com/OpenMatch/MARVEL.
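    As a rough illustration of the "visual module plugin" idea, the sketch below projects image features into a text encoder's embedding space and prepends them to the token embeddings before encoding. The shapes, the toy transformer, and the pooling are assumptions for illustration, not the released MARVEL implementation.

```python
# Hedged sketch of the "visual module plugin" idea: project image features into the
# text encoder's embedding space and prepend them to the token embeddings. Shapes
# and module choices are illustrative assumptions, not the released MARVEL code.
import torch
import torch.nn as nn

class VisualPluginRetriever(nn.Module):
    def __init__(self, vocab_size=32128, d_model=768, img_feat_dim=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # stands in for the text encoder's embeddings
        self.img_proj = nn.Linear(img_feat_dim, d_model)     # the visual module plugin
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )

    def encode(self, token_ids, image_feats=None):
        x = self.token_emb(token_ids)                         # (B, L, d_model)
        if image_feats is not None:
            img_tokens = self.img_proj(image_feats)           # (B, N_img, d_model)
            x = torch.cat([img_tokens, x], dim=1)             # prepend image "tokens"
        hidden = self.encoder(x)
        return hidden.mean(dim=1)                             # pooled query/document embedding

model = VisualPluginRetriever()
query_emb = model.encode(torch.randint(0, 32128, (1, 16)))
doc_emb = model.encode(torch.randint(0, 32128, (1, 32)), image_feats=torch.randn(1, 4, 512))
print(torch.cosine_similarity(query_emb, doc_emb).item())
```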

    NLP and ML Methods for Pre-processing, Clustering and Classification of Technical Logbook Datasets

    Get PDF
    Technical logbooks are a challenging and under-explored text type in automated event identification. These texts are typically short and written in non-standard yet technical language, posing challenges to off-the-shelf NLP pipelines. These datasets typically represent a domain (a technical field such as automotive) and an application (e.g., maintenance). The granularity of issue types described in these datasets additionally leads to class imbalance, making it challenging for models to accurately predict which issue each logbook entry describes. In this research, we focus on the problem of technical issue pre-processing, clustering, and classification by considering logbook datasets from the automotive, aviation, and facility maintenance domains. We developed MaintNet, a collaborative open-source library that includes logbook datasets from various domains and a pre-processing pipeline to clean unstructured datasets. Additionally, we adapted a feedback loop strategy from computer vision for handling extreme class imbalance, which resamples the training data based on its prediction error. We further investigated the benefits of using transfer learning from sources within the same domain (but different applications), from within the same application (but different domains), and from all available data to improve the performance of the classification models. Finally, we evaluated several data augmentation approaches, including synonym replacement, random swap, and random deletion, to address the issue of data scarcity in technical logbooks.
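    The three augmentation operations named at the end of the abstract are simple token-level edits. Below is a minimal sketch of them; the tiny synonym table is a placeholder, whereas EDA-style pipelines typically draw synonyms from WordNet.

```python
# Minimal sketch of the augmentation operations mentioned above (synonym replacement,
# random swap, random deletion). The tiny synonym table is a placeholder only.
import random

SYNONYMS = {"engine": ["motor"], "leak": ["seepage"], "replaced": ["swapped"]}  # illustrative

def synonym_replacement(tokens, n=1):
    out = tokens[:]
    candidates = [i for i, t in enumerate(out) if t in SYNONYMS]
    for i in random.sample(candidates, min(n, len(candidates))):
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def random_swap(tokens, n=1):
    out = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(tokens, p=0.1):
    out = [t for t in tokens if random.random() > p]
    return out or [random.choice(tokens)]   # never return an empty entry

entry = "engine oil leak found and gasket replaced".split()
print(synonym_replacement(entry), random_swap(entry), random_deletion(entry), sep="\n")
```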

    Automated analysis of heterogeneous catalyst materials using deep learning

    Get PDF
    Heterogeneous catalyst materials play a key role in modern society, as many processes in the chemical and energy industries rely on them. Optimising their performance is directly connected to a large potential reduction in energy consumption, and thus to a more sustainable future. A fundamental part of the optimisation process is materials characterisation. This is often done using in situ (Scanning) Transmission Electron Microscopy ((S)TEM) imaging in order to obtain a full understanding of catalyst performance in different environments and at different temperatures at high resolution. However, the analysis of the corresponding dynamical datasets is often time-consuming and requires manual intervention alongside tailored post-processing routines. At the same time, the emergence of direct electron detectors that allow the acquisition of datasets at kilohertz frame rates, as well as novel imaging techniques, has raised data generation rates significantly and created a need for new, reliable, and automated data processing techniques. This work introduces nNPipe as a Deep Learning based method for the automated analysis of morphologically diverse heterogeneous catalyst systems. The method is based on two Convolutional Neural Networks (CNNs) that were trained exclusively on computationally generated HRTEM image simulations and allows for rapid and precise analysis of raw 2048 x 2048 experimental HRTEM images. The performance of nNPipe is demonstrated in a realistic automated imaging scenario in which statistically significant material properties are inferred accurately. Moreover, time-efficient and reproducible retraining methods based on small experimental datasets are described, both for further performance improvements and for adaptation to new imaging scenarios. In this context, a potentially new pathway for generating suitable training datasets from thousands of mostly non-expert annotations is highlighted. Finally, the analytical capabilities of the nNPipe method are showcased on time-resolved datasets in two advanced application scenarios: i) live image analysis during sample acquisition and ii) analysis of the particle coalescence of an in situ heated Pd/C catalyst.
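    To illustrate what automated CNN analysis of a 2048 x 2048 HRTEM frame can look like in practice, here is a hedged sketch of patch-wise inference: tile the frame, run a small segmentation network per tile, and stitch a particle mask back together. The toy network and tile size are assumptions standing in for the two trained CNNs described above.

```python
# Hedged sketch of patch-wise CNN inference on a 2048 x 2048 HRTEM frame: tile the
# image, run a small segmentation network per tile, and stitch a particle mask back
# together. The toy network stands in for the two trained CNNs described above.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # per-pixel particle logit
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet().eval()
frame = torch.randn(2048, 2048)              # placeholder for a raw HRTEM frame
mask = torch.zeros_like(frame)
tile = 256

with torch.no_grad():
    for y in range(0, 2048, tile):
        for x in range(0, 2048, tile):
            patch = frame[y:y + tile, x:x + tile].unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
            logits = model(patch)
            mask[y:y + tile, x:x + tile] = (logits.squeeze() > 0).float()

print("particle pixel fraction:", mask.mean().item())
```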

    Examining The Factors That Affect ERP Assimilation

    Get PDF
    The aim of this study is to identify the factors that influence the assimilation of enterprise resource planning (ERP) systems in the post-implementation stage. Building on organizational information processing theory (OIPT) and absorptive capacity (AC), we propose an integrated model that examines the relationships among organizational fit, absorptive capacity, environmental uncertainty, and ERP assimilation. Based on survey data from 98 firms that have implemented ERP, most of the proposed hypotheses were supported, showing that initial fit, potential AC, realized AC, and heterogeneity jointly affect ERP assimilation. Task uncertainty (hostility and heterogeneity) negatively moderates the relationship between initial fit and ERP assimilation. The implications for both theory and practice are discussed.