476,839 research outputs found

    Use of ontologies and semantic web to provide for the transparency of qualifications frameworks

    Get PDF
    The problem of correlating and comparing the levels of the European and national qualifications frameworks, and the potential of Semantic Web technologies for solving this problem, were explored. We substantiated the need to create models and methods aimed at providing transparency of the European and national qualifications frameworks, and to develop tools for implementing these methods. The authors proposed a reference model of the qualifications framework that formalizes knowledge about the basic information objects relating to learning outcomes and their representation in the qualifications frameworks. The specific feature of this model is its use of atomic competencies: the semantics of information objects of different classes is formalized through a set of atomic competencies associated with different properties of these objects. This should allow such information objects to be matched automatically at the level of knowledge. Methods for calculating measures of semantic proximity between information objects of different classes of the ontological models, corresponding to different problems, are proposed in the work. This allows identifying similarity between learning outcomes that are described with the descriptors of different qualification frameworks.
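
The abstract does not reproduce the concrete proximity measure. The sketch below assumes a simple set-overlap (Jaccard) measure over atomic competencies; the competency names are invented for illustration:

```python
def semantic_proximity(obj_a, obj_b):
    """Jaccard overlap between the atomic-competency sets of two
    information objects (an illustrative choice of measure)."""
    a, b = set(obj_a), set(obj_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical learning outcomes described by atomic competencies.
eqf_outcome = {"analyse-data", "use-sql", "report-results"}
nqf_outcome = {"analyse-data", "use-sql", "present-findings"}

print(semantic_proximity(eqf_outcome, nqf_outcome))  # 2 shared of 4 total -> 0.5
```

Two outcomes phrased with different framework descriptors can then be compared purely through the atomic competencies their descriptions map to.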

    Anatomy of Deep Learning Image Classification and Object Detection on Commercial Edge Devices: A Case Study on Face Mask Detection

    Get PDF
    © 2022 IEEE. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/. Developing efficient on-the-edge Deep Learning (DL) applications is a challenging and non-trivial task: first, different DL models need to be explored, with different trade-offs between accuracy and complexity; second, various optimization options, frameworks and libraries are available and need to be explored; third, a wide range of edge devices with different computation and memory constraints is available. As such, trade-offs arise among inference time, energy consumption, efficiency (throughput/watt) and value (throughput/dollar). To shed some light on this problem, a case study is delivered in which seven Image Classification (IC) and six Object Detection (OD) State-of-The-Art (SOTA) DL models were used to detect face masks on the following commercial off-the-shelf edge devices: Raspberry Pi 4, Intel Neural Compute Stick 2, Jetson Nano, Jetson Xavier NX, and i.MX 8M Plus. First, a full end-to-end video-pipeline face-mask-detection architecture is developed. Then, the thirteen DL models are optimized, evaluated and compared on the edge devices in terms of accuracy and inference time. To leverage the computational power of the edge devices, the models are optimized, first, by using the SOTA optimization frameworks (TensorFlow Lite, OpenVINO, TensorRT, eIQ) and, second, by evaluating/comparing different optimization options, e.g., different levels of quantization. The five edge devices are also evaluated and compared in terms of inference time, value and efficiency. Last, we obtain insightful observations on which optimization frameworks, libraries and options to use, and on how to select the right device depending on the target metric (inference time, efficiency and value). For example, we show that the Jetson Xavier NX platform is the best in terms of latency and efficiency (FPS/Watt), while the Jetson Nano is the best in terms of value (FPS/$). Peer reviewed
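
The efficiency and value metrics named above follow directly from measured latency, power draw, and price. In the sketch below only the device names come from the study; every number is hypothetical:

```python
def fps(inference_time_ms):
    """Throughput in frames per second from per-frame latency."""
    return 1000.0 / inference_time_ms

def efficiency(inference_time_ms, watts):
    """Efficiency as throughput per watt (FPS/Watt)."""
    return fps(inference_time_ms) / watts

def value(inference_time_ms, dollars):
    """Value as throughput per dollar (FPS/$)."""
    return fps(inference_time_ms) / dollars

# Hypothetical measurements; only the device names come from the study.
devices = {
    "Jetson Xavier NX": {"ms": 8.0,  "watts": 10.0, "usd": 399.0},
    "Jetson Nano":      {"ms": 25.0, "watts": 5.0,  "usd": 99.0},
}
for name, d in devices.items():
    print(name,
          round(efficiency(d["ms"], d["watts"]), 2),  # FPS/Watt
          round(value(d["ms"], d["usd"]), 2))         # FPS/$
```

With these (made-up) numbers the faster, pricier device wins on efficiency while the cheaper one wins on value, which is the shape of the trade-off the study reports.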

    Multi Sentence Description of Complex Manipulation Action Videos

    Full text link
    Automatic video description requires the generation of natural language statements about the actions, events, and objects in a video. An important human trait is that, when we describe a video, we can do so with variable levels of detail. In contrast, existing approaches to automatic video description mostly focus on single-sentence generation at a fixed level of detail. Here, instead, we address the description of manipulation actions, where different levels of detail are required to convey information about the hierarchical structure of these actions, which is also relevant for modern approaches to robot learning. We propose one hybrid statistical framework and one end-to-end framework to address this problem. The hybrid method needs much less data for training, because it statistically models the uncertainties within the video clips, while in the end-to-end method, which is more data-hungry, we directly connect the visual encoder to the language decoder without any intermediate (statistical) processing step. Both frameworks use LSTM stacks to allow for different levels of description granularity, so videos can be described by simple single sentences or complex multi-sentence descriptions. In addition, quantitative results demonstrate that these methods produce more realistic descriptions than other competing approaches.
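
Leaving the LSTM machinery aside, the variable-granularity output format can be illustrated with a toy hierarchical action representation; all sentences and structure below are invented for illustration:

```python
# Hypothetical hierarchical representation of a manipulation action:
# one coarse action with an ordered list of sub-actions.
action = {
    "sentence": "A person makes a sandwich.",
    "sub_actions": [
        "The hand picks up a slice of bread.",
        "The knife spreads butter on the bread.",
        "The hand places cheese on the bread.",
    ],
}

def describe(action, detail="coarse"):
    """Return a single sentence or a multi-sentence description,
    depending on the requested level of detail."""
    if detail == "coarse":
        return action["sentence"]
    return " ".join([action["sentence"]] + action["sub_actions"])

print(describe(action))          # one coarse sentence
print(describe(action, "fine"))  # multi-sentence description
```

In the paper the granularity choice is made by the LSTM decoder stack rather than by a flag like this; the sketch only shows the two output regimes.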

    How do programs work to improve child nutrition?: Program impact pathways of three nongovernmental organization intervention projects in the Peruvian highlands

    Get PDF
    This paper examines the program logic of three nongovernmental, community-based programs with different intervention models to reduce childhood stunting. Two programs, the Child Nutrition Program (PNI) and Good Start, focused directly on education and behavior change among caregivers, or the short routes to achieve impact, while one program, Sustainable Networks for Food Security (REDESA), focused on upstream factors, such as improving local governance and coordination, improving water and sanitation, and increasing family incomes, or the long routes to achieve impact. We compared the logic of each program as it was explicitly documented to the logic as perceived by the implementers. We elucidated the program impact pathways (PIPs) of key activities by actors at different operational levels in each program to identify congruencies and gaps in the perceptions of causal mechanisms between program activities and their intended outcomes, and compared them against the simple program models and logical frameworks to highlight the methodology and utility of PIPs. In a desire to move beyond static input-output models of the three programs, we designed and conducted data collection activities (document review, semi-structured interviews, and observations) with the intention of gaining insights into those aspects of the programs that brought the causal mechanisms of a given program into clearer focus. We propose that different methods for eliciting PIPs may be necessary at different operational levels. The interview method elicited more complete responses among those who are familiar with programmatic concepts, whereas actors at the local operational level provided sparse and fragmentary responses, even when simple, common language was used during the interviews. Group participatory processes, using visual aids, may be more effective for mapping the perceptions of those who are not accustomed to articulating information about programs.
    To reduce the length and frequency of interviews with program actors, initial PIPs could also be constructed from program documents, then discussed and revised iteratively with program actors. Although program logic models and logical frameworks provide a succinct overview of a program (for communication, strategic planning, and management), we found that PIPs provide a better representation of the causal connections between program activities and results, particularly when both upstream and direct intervention activities were part of the same program. PIPs provide a visual tool for tracking how activities were perceived to work and make an impact, bringing into focus the different pathways of the activities and the influences along the way. Beyond the logical sequence of program inputs, outputs, and outcomes, the conceptualization of impact pathways is a useful approach for understanding the causal connections required for impact and for identifying where attention and reinforcements may be required within program operation. The utility of this tool warrants its use not only during final evaluation but also during mid-program monitoring and relevant assessments. National- and regional-level program actors had a good understanding of the overarching frameworks and principles of their respective programs, as well as of the program components and activities. They demonstrated strong adherence to the program documents, provided similar cohesive responses, and were able to articulate the impact pathways. However, program actors at the national level identified fewer facilitators and barriers along the impact pathways than did the local actors, revealing that the practical dimensions of the impact pathways were not as evident to planners and managers farther from the communities.
    Although program actors at the local level were more apt to provide practical examples of influencing factors or incidents that occur during implementation, they had difficulty fully articulating their perceived PIPs and provided fragmented views of how the activities linked to their outcomes. Similar patterns were found across the three programs. This finding raises the question of the desirability of a common understanding of the goals and the pathways by which these outcomes are achieved, versus the acceptability of a diversity of perspectives. It is still unclear whether program effectiveness may be improved through greater congruency in the PIPs. Future research should elucidate how congruency of PIPs among program actors across operational levels could be increased, and whether greater congruency would improve program implementation and effectiveness. Keywords: program impact pathways, program logic model, logical framework, childhood stunting, child nutrition programs

    Use of ontologies and the semantic web for qualifications framework transparency

    Get PDF
    The problem of correlating and comparing the levels of the European and national qualifications frameworks, and the potential of Semantic Web technologies for solving this problem, were explored. We substantiated the need to create models and methods aimed at providing transparency of the European and national qualifications frameworks, and to develop tools for implementing these methods. The authors proposed a reference model of the qualifications framework that formalizes knowledge about the basic information objects relating to learning outcomes and their representation in the qualifications frameworks. The specific feature of this model is its use of atomic competencies: the semantics of information objects of different classes is formalized through a set of atomic competencies associated with different properties of these objects. This should provide for the automatic matching of these information objects at the level of knowledge. Methods for the quantitative estimation of semantic proximity between information objects of different classes of the ontological models, corresponding to different problems, are proposed in the work. This allows identifying similarity between learning outcomes that are described with the descriptors of different qualification frameworks. Information about atomic competencies is obtained from the national and European standards, qualifications frameworks, speciality descriptions, etc., and may be automatically supplemented through the analysis of relevant Web resources that contain semantic markup. The work considers in detail the mechanism for integrating the reference information model of competencies with the Semantic MediaWiki technological environment: ontological concepts and relations are used for the semantic markup of Wiki pages with categories and semantic properties. This allows running a variety of semantic queries against the content of pages relating to learning outcomes.
    Examples of such queries are given and their expressive power is analyzed. An example of using the ontological model of competencies to improve semantic Web search for information with which to supplement and update Wiki pages was studied. The potential of the ontology for specifying information needs and increasing the relevance of the obtained results is demonstrated with the example of the semantic search engine MAIPS. Keywords: qualifications framework, ontology of competences, Wiki, semantic markup, semantic search
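
The abstract does not reproduce the Semantic MediaWiki query syntax. The following sketch simulates, in plain Python, what such a query does: selecting Wiki pages by category and by the value of a semantic property. All page data are hypothetical:

```python
# Hypothetical Wiki pages annotated with a category and semantic properties.
pages = [
    {"title": "Data Analyst", "category": "LearningOutcome",
     "props": {"has_competency": {"analyse-data", "use-sql"}}},
    {"title": "Web Developer", "category": "LearningOutcome",
     "props": {"has_competency": {"write-html", "use-sql"}}},
    {"title": "EQF Level 6", "category": "QualificationLevel", "props": {}},
]

def ask(pages, category, prop, value):
    """Select pages in a category whose semantic property contains a value,
    analogous to a Semantic MediaWiki #ask inline query."""
    return [p["title"] for p in pages
            if p["category"] == category and value in p["props"].get(prop, set())]

print(ask(pages, "LearningOutcome", "has_competency", "use-sql"))
# -> ['Data Analyst', 'Web Developer']
```

In Semantic MediaWiki itself the same selection would be expressed declaratively inside a page rather than in application code.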

    Report of the Study Group on Age-length Structured Assessment Models (SGASAM)

    Get PDF
    Contributor: Daniel Howell. The second meeting of the ICES Study Group on Age-length Structured Assessment Models (SGASAM) was held at ICES Headquarters from 14 to 18 March 2005. There were 12 participants (mainly with expertise in age-length structured modelling and stock assessment) from 10 countries. The main objective of SGASAM is to investigate and evaluate the use of length-structured and age-length structured population models in fish stock assessment. The terms of reference for this meeting related to both model development and species-specific applications. Developments in methodologies and applications. An increasing number of age-length structured models are being developed (e.g. GADGET, CALEN, Stock Synthesis) and applied to a wide variety of species with differing life histories. The acceptance of these models for use in stock assessment is increasing, particularly outside of the ICES area. There is growing interest in the development of simpler length-structured models for the assessment of species for which age-structured data are unavailable; in particular, a number of length-structured models have been developed that use only length-structured survey data to obtain information on stock trends. These are clearly useful for stocks for which commercial catch data may also be unreliable. Incorporating process sub-models. Process models previously developed by the ICES Study Group on Growth, Maturity and Condition in Stock Projections were specifically considered. Many of these are length-dependent and some, particularly for growth and maturity, have already been included in existing age-length structured modelling frameworks (e.g. GADGET).
    Further improvements to the implementation of these process models in age-length structured population models (important for the assessment of species where biological and fishery processes are better represented by length) will require greater co-operation between process modellers and age-length structured population modellers. It is therefore recommended that process modellers be encouraged to attend any further meetings of this SG. Investigating complexity. The SG identified two different ways in which age-length structured model frameworks could be used to investigate the performance of models with different levels of complexity. One approach was to use age-length structured models as operating models to generate data sets and then evaluate other, simpler models (e.g. VPA, biomass dynamic) in terms of their performance against the underlying ‘true’ system, and perhaps also in terms of relative performance against alternative management regimes. The second approach was to compare different sub-models within the same framework in terms of their ‘goodness of fit’ to the underlying data. There is a need for the development of formal statistical methods to carry out these comparisons. Case studies. The alternative to more complexity is the development of simpler length-based approaches for species for which age-disaggregated data are sparse or unavailable. A number of species for which there are age-reading uncertainties (and hence limited age-based data) were considered by the SG, and the development of length-structured models is already in progress for some of these. The SG felt that continuing work on such simpler approaches is important and would be particularly useful for the assessment of species such as Nephrops, redfish, anglerfish and some elasmobranchs.
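
The core of a length-structured population model can be sketched in a few lines. The sketch below assumes a growth transition matrix whose column j gives the probabilities of moving from length bin j into each bin over one time step, plus a constant survival rate; all numbers are illustrative, not from any assessed stock:

```python
def project(n, growth, survival):
    """One time step of a length-structured model: apply survival, then
    redistribute numbers-at-length with the growth transition matrix.
    growth[i][j] = probability of moving from length bin j to bin i."""
    bins = len(n)
    survived = [x * survival for x in n]
    return [sum(growth[i][j] * survived[j] for j in range(bins))
            for i in range(bins)]

# Three length bins; fish either stay or grow one bin (columns sum to 1).
G = [[0.5, 0.0,  0.0],
     [0.5, 0.75, 0.0],
     [0.0, 0.25, 1.0]]
n0 = [100.0, 50.0, 10.0]
n1 = project(n0, G, survival=0.5)
print(n1)  # -> [25.0, 43.75, 11.25]
```

Frameworks such as GADGET build far richer versions of this step (length-dependent growth, maturity and fishery selection), but the update has this basic survive-then-grow structure.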

    Cultural issues, organisational hierarchy and information fulfilment: an exploration of relationships

    Get PDF
    Purpose – The purpose of this paper is to present the cultural results of a three-year study into the concept of information fulfilment, and to consider the impact of culture on levels of information fulfilment. Design/methodology/approach – Ethnographic studies were undertaken within higher education institutions in four countries, examining each organization's shape and comparing it with the level of achievement of information fulfilment. The social and symbolic meanings that underpinned the culture of information in the chosen institutions are presented. The cultural frameworks are analysed, followed by a section of “raw data” from the ethnographic field. Findings – Culture had a significant impact in all the studies, and each study had its own unique character, providing rich insights into the culture, atmosphere and contexts of the fields. Originality/value – The relationships between the cultures and the levels of information fulfilment are reported with a view to helping build knowledge management systems that deliver higher levels of information fulfilment.