    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance (overview and issues), Resilience and Safety Requirements, the Open Systems Perspective, and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.

    RSS v2.0: Spamming, User Experience and Formalization

    RSS, once the most popular publish/subscribe system, is believed to have come to an end for reasons that remain largely unexplored. The aim of this thesis is to examine one such reason: spamming. The scope of the thesis is limited to spamming related to RSS v2.0. The study discusses RSS as a publish/subscribe system, investigates possible reasons for the decline in its use, and considers possible solutions to RSS spamming. The thesis introduces RSS, including its dependence on feed readers, and examines its relationship with spamming. In addition, it investigates possible socio-technical influences on spamming in RSS. The author presents the idea of applying formalization (formal specification techniques) to open standards, RSS v2.0 in particular. Formal specifications are more concise, consistent and unambiguous, and are highly reusable in many cases. Merging formal specification methods with open standards allows for i) a more concrete standard design, ii) an improved understanding of the environment under design, iii) enforcement of a certain level of precision in the specification, and iv) extended property-checking/verification capabilities for software engineers. The author supports and proposes the use of formalization in RSS. Based on inferences drawn from the user experiment conducted during this study, an analysis of the decline of RSS is presented. The user experiment also opens up directions for future work on the evolution of RSS v3.0, which could be supported by formalization. The thesis concludes that RSS is on the verge of discontinuation due to the adverse effects of spamming and a lack of ongoing development, which is evident from the limited amount of available research literature. RSS feeds are a clear example of what happens to software that fails to evolve with time.
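
    To make the formalization idea concrete, the following minimal sketch (illustrative, not the thesis's actual formalism) encodes three of RSS 2.0's required channel elements (title, link, description) as machine-checkable constraints:

        # Illustrative sketch: expressing part of the RSS 2.0 specification as
        # machine-checkable constraints rather than prose. The rule set is an
        # assumption for demonstration, not the formalism developed in the thesis.
        import xml.etree.ElementTree as ET

        # RSS 2.0 requires these child elements of <channel> to be present.
        REQUIRED_CHANNEL_ELEMENTS = ("title", "link", "description")

        def check_rss2(xml_text):
            """Return a list of specification violations found in a feed."""
            root = ET.fromstring(xml_text)
            if root.tag != "rss" or root.get("version") != "2.0":
                return ["document root is not <rss version='2.0'>"]
            channel = root.find("channel")
            if channel is None:
                return ["<rss> has no <channel> child"]
            violations = []
            for name in REQUIRED_CHANNEL_ELEMENTS:
                element = channel.find(name)
                if element is None or not (element.text or "").strip():
                    violations.append(f"<channel> lacks a non-empty <{name}>")
            return violations

        feed = "<rss version='2.0'><channel><title>t</title></channel></rss>"
        print(check_rss2(feed))  # reports the missing <link> and <description>

    A feed reader or aggregator could run such checks before accepting a feed, which is one way a formalized specification yields the property-checking capabilities mentioned above.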

    D1.1 Analysis Report on Federated Infrastructure and Application Profile

    Kawese, R., Fisichella, M., Deng, F., Friedrich, M., Niemann, K., Börner, D., Holtkamp, P., Hun-Ha, K., Maxwell, K., Parodi, E., Pawlowski, J., Pirkkalainen, H., Rodrigo, C., & Schwertel, U. (2010). D1.1 Analysis Report on Federated Infrastructure and Application Profile. OpenScout project deliverable. The present deliverable reports on the functionalities of the first step of the described process. In other words, it describes how the consortium will gather learning object metadata, centralize access to existing learning resources, and form an application profile that supports appropriate modelling, retrieval and presentation of the required information about learning objects to interested users. The described approach is the foundation for federated, skill-based search and learning object retrieval. The deliverable focuses on the analysis of the available repositories and of the infrastructure best able to support OpenScout's initiative, and explains the motivations behind the chosen infrastructure based on a study of available information and previous research and literature. The work on this publication has been sponsored by the OpenScout (Skill based scouting of open user-generated and community-improved content for management education and training) Targeted Project, funded by the European Commission's 7th Framework Programme, Contract ECP-2008-EDU-42801.
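
    As an illustration of what mapping heterogeneous repository metadata onto a shared application profile can look like (the field names below are hypothetical, not the actual OpenScout profile):

        # Hypothetical sketch of the mapping step behind a federated application
        # profile: each source repository's native metadata is normalized into
        # one shared record shape so a single skill-based search can query it.
        # The field names are assumptions, not the actual OpenScout profile.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class ProfileRecord:
            identifier: str
            title: str
            language: str
            skill: Optional[str]  # skill annotation enabling skill-based search

        def map_repository_a(raw):
            """Map one repository's native metadata onto the shared profile."""
            return ProfileRecord(
                identifier=raw["id"],
                title=raw["name"],
                language=raw.get("lang", "en"),
                skill=raw.get("competency"),
            )

        record = map_repository_a({"id": "lo-42", "name": "Budgeting basics",
                                   "competency": "finance"})
        print(record)

    A federated search layer then queries one uniform schema instead of one schema per source repository.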

    Interoperability in Digital Libraries

    This chapter presents the principles and practices of interoperability – the ability of systems to work together – as it pertains to digital libraries. While there is no well-defined theoretical basis for interoperability, it has gradually emerged as a major aspect of the creation of digital library systems, particularly in modern digital repositories such as those adopted by the Open Access movement. The need for standardisation is a key element of interoperability and is considered in tandem with the more technical elements. Principles of interoperability, such as simplicity and orthogonality, have emerged through experimentation, and any future attempt to infuse interoperability into a system should build on them. In practice, experiments with systems and protocols have demonstrated what works, what does not, and where additional interventions are needed, as with the successful OAI-PMH and RSS standards. The key interoperability technologies currently in use in digital library systems are introduced and contextualised in terms of their applicability and motivations. In this discussion, the line between digital library standards and Web standards is intentionally fuzzy because of the increasingly symbiotic relationship between these communities.
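
    OAI-PMH, one of the standards named above, is a deliberately simple HTTP-plus-XML harvesting protocol. The sketch below issues a standard ListRecords request; the verb, argument names and oai_dc Dublin Core prefix are defined by the protocol, while the endpoint URL is a placeholder:

        # Minimal OAI-PMH harvesting sketch. "verb" and "metadataPrefix" are
        # standard protocol vocabulary; the base URL is a hypothetical endpoint.
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        BASE_URL = "https://repository.example.org/oai"  # placeholder endpoint
        DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

        def list_record_titles(base_url):
            """Harvest Dublin Core records and return their dc:title values."""
            query = urllib.parse.urlencode(
                {"verb": "ListRecords", "metadataPrefix": "oai_dc"})
            with urllib.request.urlopen(f"{base_url}?{query}") as response:
                root = ET.fromstring(response.read())
            return [el.text for el in root.iter(DC_TITLE) if el.text]

    The same request shape works against any compliant repository, which is precisely the simplicity the chapter credits for the protocol's success.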

    An Analysis of Data Quality Defects in Podcasting Systems

    Podcasting has emerged as an asynchronous, delay-tolerant method for the distribution of multimedia files through a network. Although podcasting has become a popular Internet application, users encounter frequent information quality problems in podcasting systems. To better understand the severity of these quality problems, we have applied the Total Data Quality Management methodology to podcasting. Through the application of this methodology we have quantified the data quality problems inherent in podcasting metadata, and performed an analysis that maps specific metadata defects to failures in popular commercial podcasting platforms. Furthermore, we extracted the Really Simple Syndication (RSS) feeds from the iTunes catalog for the purpose of performing the most comprehensive measurement of podcasting metadata to date. From these findings we attempted to improve the quality of podcasting data through the creation of a metadata validation tool, PodCop. PodCop extends existing RSS validation tools and encapsulates validation rules specific to the context of podcasting. We believe PodCop is the first attempt at improving the overall health of the podcasting ecosystem.
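
    The abstract does not show PodCop's rule set; as a hedged illustration of the kind of podcast-specific defect such a validator can catch, the sketch below checks that every feed item carries a complete <enclosure>, the element that actually points at the media file:

        # Illustration of the kind of podcast-specific validation rule the
        # paper attributes to PodCop (the actual rules are in the paper, not
        # shown here). In podcast RSS, each <item> should carry an <enclosure>
        # whose url, length and type attributes describe the media file.
        import xml.etree.ElementTree as ET

        def enclosure_defects(feed_xml):
            """Report items whose enclosure metadata is missing or incomplete."""
            defects = []
            for i, item in enumerate(ET.fromstring(feed_xml).iter("item")):
                enclosure = item.find("enclosure")
                if enclosure is None:
                    defects.append(f"item {i}: no <enclosure> element")
                    continue
                for attr in ("url", "length", "type"):
                    if not enclosure.get(attr):
                        defects.append(f"item {i}: <enclosure> missing '{attr}'")
            return defects

    A missing or incomplete enclosure is exactly the sort of metadata defect that surfaces as a playback failure in a commercial podcasting platform.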

    Services approach & overview general tools and resources

    The contents of this deliverable are split into three groups. Following an introduction, a concept and vision are sketched for establishing the necessary natural language processing (NLP) services, including the integration of existing resources. To this end, an overview of the state of the art is given, incorporating technologies developed by the consortium partners and beyond, followed by the service approach and a practical example. Second, a concept and vision are elaborated for creating interoperability for the envisioned learning tools, allowing quick and painless integration into existing learning environments. Third, generic paradigms and guidelines for service integration are provided. The work on this publication has been sponsored by the LTfLL STREP, funded by the European Commission's 7th Framework Programme, Contract 212578 [http://www.ltfll-project.org].
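
    To give a flavour of the service approach (the endpoint and payload shapes below are assumptions for illustration, not the LTfLL interface), an existing NLP resource can be wrapped behind a small HTTP service that any learning environment can call:

        # Hedged sketch of the "service approach": exposing an existing NLP
        # resource (a trivial tokenizer stands in for a real tool) behind a
        # small HTTP service. Endpoint and JSON shapes are illustrative only.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def tokenize(text):
            """Placeholder for an integrated NLP resource."""
            return text.split()

        class NLPService(BaseHTTPRequestHandler):
            def do_POST(self):
                body = self.rfile.read(int(self.headers["Content-Length"]))
                text = json.loads(body)["text"]
                payload = json.dumps({"tokens": tokenize(text)}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(payload)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), NLPService).serve_forever()

    Keeping the tool behind a plain HTTP/JSON boundary is what allows the "quick and painless" integration into heterogeneous learning environments described above.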

    A Competence-Based Course Authoring Concept for Learning Platforms with Legacy Assignment Tools

    This paper is concerned with several of the most important aspects of Competence-Based Learning (CBL): course authoring, assignments, and the categorization of learning content. The latter is part of the so-called Bologna Process (BP) and can be supported effectively by integrating knowledge resources, such as standardized skill and competence taxonomies, into the target implementation, with the aim of making effective use of an open integration architecture while fostering the interoperability of hybrid knowledge-based e-learning solutions. Modern scenarios call for interoperable software solutions that seamlessly integrate existing e-learning infrastructures and legacy tools with innovative technologies while remaining cognitively efficient to handle, so that prospective users can work with them without learning overheads. At the same time, methods of Learning Design (LD) in combination with CBL are becoming increasingly important for the production and maintenance of solutions that are easy to adopt and maintain. We present our approach to developing competence-based course-authoring and assignment support software. It bridges the gaps between contemporary Learning Management Systems (LMS) and established legacy learning infrastructures by embedding existing resources via Learning Tools Interoperability (LTI). Furthermore, the underlying conceptual architecture for this integration approach is explained, and a competence management structure based on knowledge technologies supporting standardized skill and competence taxonomies is introduced. The overall goal is to develop a software solution that not only merges flawlessly into a legacy platform and several other learning environments but also remains intuitively usable. As a proof of concept, the platform-independent conceptual architecture model is validated by a concrete use case scenario.
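
    LTI embedding works by having the learning platform send a signed launch request to the external tool. A hedged sketch of an LTI 1.1 basic launch follows: LTI 1.1 specifies an OAuth 1.0a-signed form POST with the message fields shown, while the consumer key, secret, resource link id and tool URL below are placeholders.

        # Hedged sketch of an LTI 1.1 "basic launch" of a legacy tool, the
        # embedding mechanism named in the paper. The key/secret and tool URL
        # are placeholders; the lti_* and oauth_* field names are standard.
        import base64
        import hashlib
        import hmac
        import time
        import uuid
        from urllib.parse import quote, urlencode

        TOOL_URL = "https://legacy-tool.example.org/launch"  # placeholder

        def lti_launch_params(key, secret):
            params = {
                # Required LTI 1.1 message fields.
                "lti_message_type": "basic-lti-launch-request",
                "lti_version": "LTI-1p0",
                "resource_link_id": "course42-assignment7",  # placeholder
                # Standard OAuth 1.0a fields.
                "oauth_consumer_key": key,
                "oauth_nonce": uuid.uuid4().hex,
                "oauth_timestamp": str(int(time.time())),
                "oauth_signature_method": "HMAC-SHA1",
                "oauth_version": "1.0",
            }
            # OAuth base string: METHOD & encoded URL & encoded sorted params.
            pairs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                             for k, v in sorted(params.items()))
            base = "&".join(["POST", quote(TOOL_URL, safe=""),
                             quote(pairs, safe="")])
            digest = hmac.new(f"{quote(secret, safe='')}&".encode(),
                              base.encode(), hashlib.sha1).digest()
            params["oauth_signature"] = base64.b64encode(digest).decode()
            return params  # POST these as form fields to TOOL_URL

        print(urlencode(lti_launch_params("demo-key", "demo-secret")))

    The tool verifies the signature with the shared secret and, if it is valid, renders its content inside the platform, which is how a legacy assignment tool can appear as a native part of the LMS.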