
    Toward the Cross-Institutional Data Integration From Shibboleth Federated LMS

    Through this study, we aim to examine a method for data integration in a shared Learning Management System (LMS) within an authentication federation. We propose transmitting ePTID and learning data, with the user's consent, as a method for cross-institutional data integration. The method is compared with other existing approaches to realizing a shared LMS. We discuss which method is most suitable for the next version of GakuNinMoodle and conclude that no single method fully satisfies our requirements.

    Geolocation-centric Information Platform for Resilient Spatio-temporal Content Management

    In the IoT era, the growth of data variety is driven by cross-domain data fusion. In this paper, we advocate that the “local production for local consumption” (LPLC) paradigm can be an innovative approach to cross-domain data fusion, and propose a new framework, the geolocation-centric information platform (GCIP), that can produce and deliver diverse spatio-temporal content (STC). In the GCIP, (1) an infrastructure-based geographic hierarchy edge network and (2) an ad hoc STC retention system interplay to provide both geolocation awareness and resiliency. Then, we discussed the concepts and the technical challenges of the GCIP. Finally, we implemented a proof of concept of the GCIP and demonstrated its efficacy through practical experiments on a campus IPv6 network and through simulation experiments.

    BlackWatch: increasing attack awareness within web applications

    Web applications are relied upon by many for the services they provide. It is essential that applications implement appropriate security measures to prevent security incidents. Currently, web applications focus resources on the preventative side of security. Whilst prevention is an essential part of the security process, developers must also implement a level of attack awareness into their web applications. Being able to detect when an attack is occurring provides applications with the ability to execute responses against malicious users in an attempt to slow down or deter their attacks. This research seeks to improve web application security by identifying malicious behaviour from within the context of web applications using our tool, BlackWatch. The tool is a Python-based application that analyses suspicious events occurring within client web applications, with the objective of identifying malicious patterns of behaviour. This approach avoids issues typically encountered with traditional web application firewalls. Based on the results from a preliminary study, BlackWatch was effective at detecting attacks from both authenticated and unauthenticated users. Furthermore, user tests with developers indicated that BlackWatch was user-friendly and easy to integrate into existing applications. Future work seeks to develop BlackWatch further for public release.
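
A minimal sketch of the kind of server-side suspicious-event scoring described above. The event names, weights, and threshold here are hypothetical illustrations, not BlackWatch's actual rules or API:

```python
from collections import defaultdict

# Hypothetical event weights and flagging threshold (not BlackWatch's real config).
SUSPICIOUS_WEIGHTS = {
    "failed_login": 1,
    "sql_syntax_in_input": 3,
    "path_traversal_attempt": 3,
}
BLOCK_THRESHOLD = 5

class AttackAwarenessMonitor:
    """Accumulates suspicious-event scores per user and flags likely attackers."""

    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, user_id, event):
        """Add the event's weight to the user's score; return True once flagged."""
        self.scores[user_id] += SUSPICIOUS_WEIGHTS.get(event, 0)
        return self.scores[user_id] >= BLOCK_THRESHOLD

monitor = AttackAwarenessMonitor()
monitor.record("alice", "failed_login")                      # score 1
monitor.record("alice", "sql_syntax_in_input")               # score 4
flagged = monitor.record("alice", "path_traversal_attempt")  # score 7, flagged
```

Scoring per user (rather than per request) is what lets such a monitor catch slow, distributed probing by a single authenticated account, which a stateless firewall rule would miss.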

    Developing a Simulation Model for Autonomous Driving Education in the Robobo SmartCity Framework

    This paper focuses on long-term education in Artificial Intelligence (AI) applied to robotics. Specifically, it presents the Robobo SmartCity educational framework, based on two main elements: the smartphone-based robot Robobo and a real model of a smart city. We describe the development of a simulation model of Robobo SmartCity in the CoppeliaSim 3D simulator, implementing both the real mock-up and the model of Robobo. In addition, a set of Python libraries that allows teachers and students to use state-of-the-art algorithms in their education projects is described. Funding: Ministerio de Ciencia, Innovación y Universidades of Spain/FEDER (RTI2018-101114-B-I00); Erasmus+ Programme of the European Union (2019-1-ES01-KA201-065742); Centro de Investigación de Galicia “CITIC” (ED431G 2019/01).

    A Decade of Code Comment Quality Assessment: A Systematic Literature Review

    Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no unique definition of quality when it comes to evaluating code comments. The few existing studies on this topic focus instead on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments with specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed). In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider? (iii) Which tools and techniques do they use to assess comment quality? and (iv) How do they evaluate their studies on comment quality assessment in general? Our evaluation, based on the analysis of 2353 papers and the actual review of 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code, and thus may not generalize to other languages, and (ii) the analyzed studies focus on four main QAs out of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than automated assessment of comment quality attributes, with evaluations often involving surveys of students and the authors of the original studies but rarely professional developers.
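
One simple heuristic of the kind such studies use for comment–code consistency is lexical overlap between comment words and code identifiers. This sketch is purely illustrative and is not the metric of any particular surveyed tool:

```python
import re

def comment_code_overlap(comment, code):
    """Fraction of comment words that also appear among the code's identifiers.
    A low score hints that the comment may have drifted from the code."""
    words = set(re.findall(r"[a-zA-Z_]\w*", comment.lower()))
    idents = set(t.lower() for t in re.findall(r"[a-zA-Z_]\w*", code))
    # Also split snake_case identifiers into their word parts.
    parts = set()
    for ident in idents:
        parts.update(ident.split("_"))
    idents |= parts
    if not words:
        return 0.0
    return len(words & idents) / len(words)

score = comment_code_overlap(
    "returns the user name",
    "def get_user_name(): return self._name",
)
```

Heuristics like this quantify only one narrow attribute; the review's point is precisely that such proxies dominate while most of the 21 identified QAs remain unmeasured.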

    A video-based technique for heart rate and eye blinks rate estimation: A potential solution for telemonitoring and remote healthcare

    Current telemedicine and remote healthcare applications foresee different interactions between the doctor and the patient, relying on commercial and medical wearable sensors and internet-based video conferencing platforms. Nevertheless, the existing applications necessarily require contact between the patient and the sensors for an objective evaluation of the patient's state. The proposed study explored an innovative video-based solution for monitoring the neurophysiological parameters of potential patients and assessing their mental state. In particular, we investigated the possibility of estimating the heart rate (HR) and eye blink rate (EBR) of participants while performing laboratory tasks by means of facial video analysis. The objectives of the study were: (i) assessing the effectiveness of the proposed technique in estimating HR and EBR by comparing them with laboratory sensor-based measures, and (ii) assessing the capability of the video-based technique to discriminate between the participants' resting state (Nominal condition) and their active state (Non-nominal condition). The results demonstrated that the HR and EBR estimated through the facial video technique and the laboratory equipment did not statistically differ (p > 0.1), and that these neurophysiological parameters allowed discrimination between the Nominal and Non-nominal states (p < 0.02). Authors: Ronca, V.; Giorgi, A.; Rossi, D.; Di Florio, A.; Di Flumeri, G.; Aricò, P.; Sciaraffa, N.; Vozzi, A.; Tamborra, L.; Simonetti, I.; Borghini, G.
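
A minimal sketch of the spectral step commonly used in video-based HR estimation (remote photoplethysmography): take a facial colour trace over time and pick the dominant frequency in the plausible cardiac band. The paper's full pipeline (face tracking, region and channel selection) is not reproduced; the synthetic trace and band limits below are assumptions:

```python
import numpy as np

def estimate_hr_bpm(trace, fps, lo=0.7, hi=3.0):
    """Estimate heart rate (bpm) from a facial colour trace via the dominant
    spectral peak in the 0.7-3.0 Hz band (42-180 bpm)."""
    x = trace - np.mean(trace)                       # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)             # restrict to cardiac band
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 72-bpm (1.2 Hz) trace sampled at 30 fps with mild noise.
fps = 30
t = np.arange(0, 20, 1 / fps)
trace = np.sin(2 * np.pi * 1.2 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
hr = estimate_hr_bpm(trace, fps)
```

Restricting the peak search to the cardiac band is what makes this contact-free estimate robust to slow illumination drift and high-frequency sensor noise.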

    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning for the period from 2016 to 2021. One of the aims of the review is to identify the video characteristics that have been explored by previous work. Based on our analysis, we suggest a taxonomy that organizes the video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learner activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. Also, we identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Text of transcripts, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activity is heavily explored through log files in data analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings that investigate the impact of video characteristics on learning effectiveness, reporting on tasks and technologies used to develop tools that support learning, and summarizing trends in design guidelines for producing learning videos.

    GNSS-free outdoor localization techniques for resource-constrained IoT architectures : a literature review

    Large-scale deployments of the Internet of Things (IoT) are adopted for performance improvement and cost reduction in several application domains. The four main IoT application domains covered throughout this article are smart cities, smart transportation, smart healthcare, and smart manufacturing. To increase IoT applicability, data generated by IoT devices need to be time-stamped and spatially contextualized. Low-power wide-area networks (LPWANs) have become an attractive solution for outdoor localization and have received significant attention from the research community due to their low-power, low-cost, and long-range communication. In addition, their signals can be used for communication and localization simultaneously. Different localization methods have been proposed to obtain the relative location of IoT devices. Each category of these methods has pros and cons that make it useful for specific IoT systems. Nevertheless, some limitations of the proposed localization methods need to be addressed before the needs of the IoT ecosystem can be fully met. This has motivated this work, which provides the following contributions: (1) a definition of the main requirements and limitations of outdoor localization techniques for the IoT ecosystem, (2) a description of the most relevant GNSS-free outdoor localization methods with a focus on LPWAN technologies, (3) a survey of the most relevant methods used within the IoT ecosystem for improving GNSS-free localization accuracy, and (4) a discussion covering the open challenges and future directions within the field. Some of the important open issues that have different requirements in different IoT systems include energy consumption, security and privacy, accuracy, and scalability. This paper provides an overview of research works published between 2018 and July 2021 and made available through the Google Scholar database.
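
A common GNSS-free approach surveyed in this space combines a log-distance path-loss model (range from received signal strength) with least-squares multilateration. A sketch under assumed parameters — the reference power `tx_power` and path-loss exponent `n` are illustrative, and real RSSI ranging is far noisier than this ideal example:

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model: range in metres from RSSI (dBm).
    tx_power is the expected RSSI at 1 m; n is the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def multilaterate(anchors, dists):
    """Linear least-squares 2D position fix from >= 3 anchors and ranges.
    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system A @ [x, y] = b."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # gateway positions (m)
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - a)) for a in anchors]  # ideal noise-free ranges
pos = multilaterate(anchors, dists)
```

With noisy RSSI-derived ranges, the same least-squares formulation simply averages out the error across extra anchors, which is why dense LPWAN gateway deployments improve accuracy.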

    Y-DWMS - A digital watermark management system based on smart contracts

    With the development of information technology, films, music, and other publications are increasingly distributed in digital form. However, the low cost of data replication and dissemination leads to digital rights problems and brings huge economic losses. To date, existing digital rights management (DRM) schemes have been powerless to deter attempts to infringe digital rights and to recover the losses of copyright holders. This paper presents a YODA-based digital watermark management system (Y-DWMS) that adopts the non-repudiation of smart contracts and blockchain to implement a DRM mechanism which infinitely amplifies the cost of infringement and recovers the losses copyright holders suffered once the infringement is reported. We adopt game analysis to prove that in Y-DWMS, non-infringement is always the dominant strategy for rational users, so as to fundamentally eradicate the infringement of digital rights, which current mainstream DRM schemes cannot achieve.
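
The game-analysis claim can be illustrated with a toy expected-payoff computation. All numbers here (gain, amplified penalty, report probability) are hypothetical, not from the paper:

```python
# Hypothetical payoffs for a user deciding whether to infringe, under the
# premise that a reported infringement triggers a contract-enforced penalty
# far larger than any gain from infringing.
def payoff(infringe, reported, gain=10, amplified_penalty=1000):
    if infringe and reported:
        return gain - amplified_penalty  # smart contract enforces the fine
    return gain if infringe else 0

# With report probability p, E[infringe] = gain - p * amplified_penalty,
# which is negative for any p > gain / amplified_penalty (here, 1%).
p = 0.05
expected_infringe = payoff(True, True) * p + payoff(True, False) * (1 - p)
expected_honest = payoff(False, False)
```

The amplification factor is what makes the honest strategy win even at very small report probabilities; in this toy model, infringing only pays if fewer than 1% of infringements are ever reported.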