
    A lightweight web video model with content and context descriptions for integration with linked data

    The rapid increase of video data on the Web has created an urgent need for effective representation, management and retrieval of web videos. Recently, many studies have been carried out on the ontological representation of videos, using either domain-dependent or generic schemas such as MPEG-7, MPEG-4, and COMM. In spite of their extensive coverage and sound theoretical grounding, these schemas have yet to see wide adoption. Two likely reasons are the complexity involved and a lack of tool support. We propose a lightweight video content model for content-context description and integration. The uniqueness of the model is that it captures the emerging social context used to describe and interpret the video. Our approach is grounded in exploiting easily extractable, evolving contextual metadata and the availability of existing data on the Web. This enables representational homogeneity and a firm basis for information integration among semantically-enabled data sources. The model reuses many existing schemas to describe its ontology classes and shows the scope for interlinking with the Linked Data cloud.
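
    The kind of description the model aims at can be pictured with a small Linked Data snippet. The sketch below, using Python's rdflib, mixes a hypothetical video vocabulary with existing schemas (Dublin Core terms, FOAF) and links out to DBpedia; the vid: namespace, the property names, and the example resources are invented for illustration and are not taken from the paper.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

# Hypothetical namespace for the video model's own classes/properties.
VID = Namespace("http://example.org/webvideo#")

g = Graph()
g.bind("vid", VID)
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

video = URIRef("http://example.org/videos/42")
uploader = URIRef("http://example.org/users/alice")

g.add((video, RDF.type, VID.WebVideo))
# content description, reusing an existing schema (Dublin Core terms)
g.add((video, DCTERMS.title, Literal("Tour of the Eiffel Tower")))
# social-context description: the uploader, modelled with FOAF
g.add((video, VID.uploadedBy, uploader))
g.add((uploader, RDF.type, FOAF.Person))
# anchor the video in the Linked Data cloud via a DBpedia resource
g.add((video, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Eiffel_Tower")))

print(g.serialize(format="turtle"))
```

    The point is the reuse: content metadata comes from an existing vocabulary, the social context is expressed with FOAF-style descriptions, and a single subject link connects the video to the Linked Data cloud.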

    Methodological considerations concerning manual annotation of musical audio in function of algorithm development

    In research on musical audio-mining, annotated music databases are needed to support the development of computational tools that extract from the musical audio stream the kind of high-level content that users deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, in both the syntactic and the semantic sense. As a consequence, annotation has been approached from a variety of perspectives (though mainly linguistic-symbolic ones), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio in support of a computational approach to musical audio-mining based on algorithms that learn from annotated data.
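
    One methodological building block such a framework needs is a way to quantify how consistently annotators apply an ill-defined notion of musical content. A minimal sketch, assuming two annotators who labelled the same audio segments with structural labels (the labels and segments here are invented), is to compute Cohen's kappa over their label sequences:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(a) == len(b)
    n = len(a)
    # observed agreement: fraction of segments labelled identically
    observed = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: expected overlap given each annotator's label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["verse", "chorus", "chorus", "bridge", "verse", "chorus"]
ann2 = ["verse", "chorus", "verse",  "bridge", "verse", "chorus"]
print(f"kappa = {cohens_kappa(ann1, ann2):.2f}")
```

    Low agreement on a label category is itself a methodological finding: it signals that the annotation guideline, not just the annotators, needs refinement before the data can train learning algorithms.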

    Formalization and Validation of Safety-Critical Requirements

    The validation of requirements is a fundamental step in the development process of safety-critical systems. In safety-critical applications such as aerospace, avionics and railways, the use of formal methods is of paramount importance for both requirements and design validation. Nevertheless, while many formal techniques have been conceived and applied to the verification of designs, research on formal methods for requirements validation is not yet mature. The main obstacles are that, on the one hand, the correctness of requirements is not formally defined, and, on the other hand, the formalization and validation of requirements usually demand strong involvement of domain experts. We report on a methodology and a series of techniques that we developed for the formalization and validation of high-level requirements for safety-critical applications. The main ingredients are a very expressive formal language and automatic satisfiability procedures. The language combines first-order, temporal, and hybrid logic. The satisfiability procedures are based on model checking and satisfiability modulo theories. We applied this technology within an industrial project to the validation of railway requirements.
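
    To make satisfiability-based validation concrete, here is a minimal sketch in the same spirit, using the Z3 SMT solver's Python API rather than the authors' actual toolchain: a toy railway requirement is checked for violations over bounded traces. The state variables, transition constraints, and bound are invented for illustration.

```python
from z3 import And, Bool, Implies, Not, Or, Solver, sat

K = 5  # trace bound for bounded model checking

gate_closed = [Bool(f"gate_closed_{t}") for t in range(K + 1)]
train_in = [Bool(f"train_in_{t}") for t in range(K + 1)]

s = Solver()
# initial state: gate open, no train in the crossing
s.add(Not(gate_closed[0]), Not(train_in[0]))
# modelled behaviour: a train may only enter while the gate is closed,
# and the gate stays closed while a train remains inside
for t in range(K):
    s.add(Implies(And(Not(train_in[t]), train_in[t + 1]), gate_closed[t + 1]))
    s.add(Implies(And(train_in[t], train_in[t + 1]), gate_closed[t + 1]))

# requirement to validate: globally, a train in the crossing implies a closed gate;
# we assert its negation and search for a violating trace
violation = Or([And(train_in[t], Not(gate_closed[t])) for t in range(K + 1)])
s.add(violation)

if s.check() == sat:
    print("requirement violated; counterexample:", s.model())
else:
    print(f"no violating trace up to bound {K}")
```

    An unsatisfiable query means no trace within the bound violates the requirement; a satisfiable one yields a concrete counterexample trace that domain experts can inspect, which is precisely where their involvement pays off.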

    A method and a tool for model-based testing of software product lines

    Advisor: Eliane Martins. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Software product lines (SPL) are gaining interest because of the increasing demand for customizable products. This is partly because SPLs are an efficient and effective means of delivering products with higher quality at a lower cost. In an SPL, products share common requirements and also have features specific to each one. Testing whether a product implements the common and specific requirements is an important step towards ensuring the quality of the derived products. However, testing an SPL is a complex task, since the variety of products that can be derived from the combination of common and specific features is huge. Even if only a few products are selected, the effort to test them is still significant, since the products vary in terms of the specific features selected. Therefore, reusing test cases from one product to another to determine whether they satisfy the functional requirements may not be possible. Model-based testing (MBT) can be useful in this case: a behavior model is obtained from the requirements and used for automatic test case generation. This work presents a model-based product testing approach (MBPTA) for software product lines, in which requirements are centered on use cases. Use cases (UC) are a popular format for representing requirements. From use case descriptions written in a semi-structured format and containing the variability specification, behavior models are automatically generated for a product under test, in the form of a state machine model. Building a state machine is not a trivial task for most practitioners, who are more familiar with textual and informal descriptions of requirements. In general, the manual creation of state machine models from UCs can be time-consuming and error-prone. The goal is to provide test engineers with a method that guides them in creating the artifacts needed for a preliminary version of a state model to be extracted automatically from the requirements. This preliminary model can then be refined to become suitable for a test case generation tool; MBPTA also provides guidelines for this refinement process. As a proof of concept, a prototype tool, MARITACA, was developed, which uses natural language processing techniques to extract state machines from use case descriptions. The text presents the use of the method and the tool in an illustrative example taken from the literature and in a family of distributed fault-tolerant applications. This study showed the applicability of the proposed method. One of the concerns in SPL testing is the generation of redundant test cases from one product to another. The results, though preliminary, showed that most of the test cases generated for a new product are not redundant, because they involve features specific to each product.
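
    To illustrate the final step of such a pipeline, the sketch below generates test cases from a small state machine by covering every transition. The machine is a toy ATM-like example, not output of MARITACA, and all-transitions coverage is one common criterion among several.

```python
from collections import deque

# Toy state machine: (source state, event) -> target state.
# States and events are invented for illustration.
transitions = {
    ("Idle", "insertCard"): "CardInserted",
    ("CardInserted", "enterPIN"): "Authenticated",
    ("CardInserted", "cancel"): "Idle",
    ("Authenticated", "withdraw"): "Dispensing",
    ("Dispensing", "takeCash"): "Idle",
}

def test_paths(initial="Idle"):
    """One event sequence per transition (all-transitions coverage)."""
    # BFS: shortest event sequence from the initial state to every state
    reach = {initial: []}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for (src, event), dst in transitions.items():
            if src == state and dst not in reach:
                reach[dst] = reach[state] + [event]
                queue.append(dst)
    # one test case per transition: reach its source state, then fire the event
    return [reach[src] + [event] for (src, event) in transitions if src in reach]

for i, path in enumerate(test_paths(), 1):
    print(f"test {i}: {' -> '.join(path)}")
```

    In an SPL setting, each product under test gets its own state machine (variability resolved), so test cases exercising product-specific transitions are exactly the ones that cannot be reused across products.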

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
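
    As a rough illustration of the ontology-mapping task mentioned above, the sketch below proposes correspondences between class labels from two small, invented ontologies using string similarity; the ontologies, labels, and threshold are hypothetical and not drawn from AKT's actual tools.

```python
from difflib import SequenceMatcher

# Two toy sets of ontology class labels (invented for illustration).
onto_a = ["Person", "Organisation", "Publication", "ResearchProject"]
onto_b = ["Agent", "Organization", "Paper", "Project", "Person"]

def propose_mappings(a, b, threshold=0.8):
    """Propose class correspondences whose label similarity exceeds a threshold."""
    pairs = []
    for ca in a:
        for cb in b:
            score = SequenceMatcher(None, ca.lower(), cb.lower()).ratio()
            if score >= threshold:
                pairs.append((ca, cb, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])

for ca, cb, score in propose_mappings(onto_a, onto_b):
    print(f"{ca} <-> {cb} (similarity {score})")
```

    Real alignment systems combine such lexical cues with structural and logical evidence, but the shape of the task is the same: propose candidate correspondences, then filter and reconcile them so that conflicts of reference are eliminated rather than propagated.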