
    Challenges for Monocular 6D Object Pose Estimation in Robotics

    Object pose estimation is a core perception task that enables, for example, object grasping and scene understanding. Widely available, inexpensive, high-resolution RGB sensors, together with CNNs that allow fast inference on this modality, make monocular approaches especially well suited for robotics applications. We observe that previous surveys on object pose estimation establish the state of the art for varying modalities, single- and multi-view settings, and datasets and metrics that consider a multitude of applications. We argue, however, that those works' broad scope hinders the identification of open challenges that are specific to monocular approaches and the derivation of promising future challenges for their application in robotics. By providing a unified view of recent publications from both robotics and computer vision, we find that occlusion handling, novel pose representations, and formalizing and improving category-level pose estimation are still fundamental challenges that are highly relevant for robotics. Moreover, to further improve robotic performance, large object sets, novel objects, refractive materials, and uncertainty estimates are central, largely unsolved open challenges. To address them, ontological reasoning, deformability handling, scene-level reasoning, realistic datasets, and the ecological footprint of algorithms need to be improved.
    Comment: arXiv admin note: substantial text overlap with arXiv:2302.1182
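    As a concrete illustration of the evaluation side of this task, the sketch below applies a rigid 6D pose (a rotation R and translation t) to an object model and computes an ADD-style error, i.e. the mean distance between model points under an estimated and a ground-truth pose, a metric commonly used in the pose estimation literature this survey covers. This is a minimal, assumed example for orientation only, not code from the survey; the model points and poses are synthetic.

```python
# Minimal sketch (not from the survey): a 6D pose as (R, t) applied to an
# object model, and an ADD-style error between estimated and true poses.
import numpy as np

def transform(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply a rigid 6D pose (R, t) to an (N, 3) array of model points."""
    return points @ R.T + t

def add_error(points, R_est, t_est, R_gt, t_gt) -> float:
    """Average distance of model points between the estimated and true pose."""
    diff = transform(points, R_est, t_est) - transform(points, R_gt, t_gt)
    return float(np.linalg.norm(diff, axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.normal(size=(500, 3))            # stand-in for an object model
    R_gt = np.eye(3)
    t_gt = np.array([0.0, 0.0, 0.5])
    # A slightly perturbed estimate: small rotation about z plus a translation offset.
    theta = np.deg2rad(5.0)
    R_est = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
    t_est = t_gt + np.array([0.01, 0.0, 0.0])
    print(f"ADD error: {add_error(model, R_est, t_est, R_gt, t_gt):.4f}")
```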

    Implications of the blockchain technology adoption by additive symbiotic networks

    Funding Information: This work was supported by Fundação para a Ciência e Tecnologia, Lisboa, Portugal [Grant No. SFRH/BD/145448/2019 and via the project UIDB/00667/2020 (UNIDEMI)]. Publisher Copyright: © 2023.
    A vibrant debate has been initiated around the potential adoption of blockchain technology for enhancing the development of industrial symbiosis networks, particularly for promoting the creation of additive symbiotic networks. Despite the potential benefits of trust creation and the elimination of intermediary entities, adopting such innovative technologies promises to disrupt the current supply chains of those symbiotic networks. The literature on these topics is still emerging; the present research intends to contribute to it. A framework for understanding the implications of adopting blockchain technology in the supply chain structure (specifically, in the dependency dimension) of an additive symbiotic network was developed through a network theory lens. The case study method was deemed suitable for carrying out this research. A case study of an additive symbiotic network is described in detail, with the development of two scenarios: scenario I ("as-is") for the current state of the network and scenario II ("to-be") considering the adoption of blockchain technology. Results show that adopting blockchain technology impacts the supply chain structure of additive symbiotic networks. More specifically, there are implications for the power distribution among the network's stakeholders.

    a systematic review

    Funding Information: This study is part of an interdisciplinary research project, funded by the Special Research Fund (Bijzonder Onderzoeksfonds) of Ghent University.
    Introduction: Ontologies are a formal way to represent knowledge in a particular field and have the potential to transform the field of health promotion and digital interventions. However, few researchers in physical activity (PA) are familiar with ontologies, and the field can be difficult to navigate. This systematic review aims to (1) identify ontologies in the field of PA, (2) assess their content and (3) assess their quality. Methods: Databases were searched for ontologies on PA. Ontologies were included if they described PA or sedentary behavior and were available in English. We coded whether ontologies covered the user profile, activity, or context domain. For the assessment of quality, we used 12 criteria informed by the Open Biological and Biomedical Ontology (OBO) Foundry principles of good ontology practice. Results: Twenty-eight ontologies met the inclusion criteria. All ontologies covered PA, and 19 included information on the user profile. Context was covered by 17 ontologies (physical context, n = 12; temporal context, n = 14; social context, n = 5). Ontologies met an average of 4.3 out of 12 quality criteria. No ontology met all quality criteria. Discussion: This review did not identify a single comprehensive ontology of PA that allowed reuse. Nonetheless, several ontologies may serve as a good starting point for the promotion of PA. We provide several recommendations about the identification, evaluation, and adaptation of ontologies for their further development and use.

    Detail or uncertainty? Applying global sensitivity analysis to strike a balance in energy system models

    Energy systems modellers often resort to simplified system representations and deterministic model formulations (i.e., not considering uncertainty) to preserve computational tractability. However, reduced levels of detail and neglected uncertainties can both lead to sub-optimal system designs. Herein, we present a novel method that quantitatively compares the impact of detail and uncertainty to guide model development and help prioritise limited computational resources. By considering modelling choices as an additional 'uncertain' parameter in a global sensitivity analysis, the method determines their qualitative ranking against conventional input parameters. As a case study, the method is applied to a peer-reviewed heat decarbonisation model for the United Kingdom with the objective of assessing the importance of spatial resolution. The results show that while the impact of spatial resolution on the optimal total system cost is negligible, it is the most important factor determining the capacities of electricity, gas and heat networks.
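    To make the core idea concrete, the hedged sketch below treats a modelling choice (spatial resolution) as one more input in a variance-based global sensitivity analysis and ranks it against conventional uncertain parameters via Sobol total-order indices, using the SALib library. The toy "system cost" function, parameter names and bounds are assumptions for illustration, not the authors' model.

```python
# Hedged sketch (not the authors' code): rank a modelling choice against
# conventional uncertain inputs with Sobol indices from SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["fuel_price", "demand_growth", "spatial_resolution"],
    "bounds": [[0.5, 1.5],      # relative fuel price (assumed)
               [0.0, 0.03],     # annual demand growth (assumed)
               [0.0, 3.0]],     # continuous proxy, rounded to 4 discrete resolution levels
}

def toy_system_cost(x: np.ndarray) -> float:
    fuel, growth, res_raw = x
    resolution = int(round(res_raw))             # modelling choice as a discrete level
    # Assumed toy response: cost driven mainly by fuel price and demand growth,
    # with a small bias that shrinks as spatial resolution increases.
    return 100.0 * fuel * (1.0 + growth) ** 20 + 5.0 / (1 + resolution)

X = saltelli.sample(problem, 1024)               # Saltelli sampling for Sobol analysis
Y = np.array([toy_system_cost(x) for x in X])
Si = sobol.analyze(problem, Y)
for name, st in zip(problem["names"], Si["ST"]):
    print(f"{name:>20s}  total-order index: {st:.3f}")
```

    A small total-order index for spatial_resolution on total cost, combined with a large index on a network-capacity output, would reproduce the kind of ranking reported above.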

    Re-prioritizing climate services for agriculture: Insights from Bangladesh

    Considerable progress has been made in establishing climate service capabilities over the last few decades, but the gap between the resulting services and national needs remains large. Using climate services for agriculture in Bangladesh as a case study, we highlight mismatches between local needs on the one hand and international initiatives that have focused largely on prediction on the other, and we make suggestions for addressing such mismatches in similar settings. To achieve greater benefit at the national level, there should be a stronger focus on addressing important preliminaries for building services. These preliminaries include the identification of priorities, the definition of responsibilities and expectations, the development of climate services skills, and the construction of a high-quality and easily usable national climate record. Once appropriate institutional, human-resource, and data infrastructure are in place, the implementation of a climate monitoring and watch system would form a more logical basis for initial climate service implementation than attempting to promote sub-seasonal to seasonal climate forecasting, especially when and where the inherent predictability is limited at best. When and where forecasting at these scales is viable, efforts should focus on defining and predicting high-impact events important for decision making, rather than on simple seasonal aggregates that often correlate poorly with outcomes. Some such forecasts may be more skillful than the 3- to 4-month seasonal aggregates that have become the internationally adopted standard. By establishing a firm foundation for climate services within National Meteorological Services, there is a greater chance that individual climate service development initiatives will be sustainable after their respective project lifetimes.

    Categories and foundational ontology: A medieval tutorial

    Foundational ontologies, central constructs in ontological investigations and engineering alike, are based on ontological categories. First proposed by Aristotle as the very ur-elements from which the whole of reality can be derived, they are not easy to identify, let alone partition and/or hierarchize; in particular, the question of their number poses serious challenges. The late medieval philosopher Dietrich of Freiberg wrote a tutorial around 1286 that can help us today with this exceedingly difficult task. In this paper, I discuss ontological categories and their importance for foundational ontologies from both the contemporary perspective and the original Aristotelian viewpoint, I provide a translation of Dietrich's De origine II from the Latin into English with an introductory elaboration, and I extract from this text a foundational ontology (in fact a single-category one) rooted in Dietrich's specification of types of subjecthood and his conception of intentionality as causal operation.

    Toward Optimization of Medical Therapies with a Little Help from Knowledge Management

    This chapter emphasizes the importance of identifying and managing knowledge from Informally Structured Domains, especially in the medical field, where very short and repeated serial measurements are often present. This information comprises attributes of both patients and their treatments that influence the patients' state of health; it usually includes measurements of various parameters taken at different times over the course of treatment, typically after the application of the therapeutic resource. The chapter illustrates the use of the KDSM methodology through a case study and highlights the importance of paying attention to the characteristics of the domain in order to perform appropriate knowledge management.

    Data-to-text generation with neural planning

    In this thesis, we consider the task of data-to-text generation, which takes non-linguistic structures as input and produces textual output. The inputs can take the form of database tables, spreadsheets, charts, and so on. The main application of data-to-text generation is to present information in a textual format which makes it accessible to a layperson who may otherwise find it problematic to understand numerical figures. The task can also automate routine document generation jobs, thus improving human efficiency. We focus on generating long-form text, i.e., documents with multiple paragraphs. Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or its variants. These models generate fluent (but often imprecise) text and perform quite poorly at selecting appropriate content and ordering it coherently. This thesis focuses on overcoming these issues by integrating content planning with neural models. We hypothesize that data-to-text generation will benefit from explicit planning, which manifests itself in (a) micro planning, (b) latent entity planning, and (c) macro planning. Throughout this thesis, we assume the inputs to our generator are tables (with records) in the sports domain, and the outputs are summaries describing what happened in the game (e.g., who won/lost, ..., scored, etc.). We first describe our work on integrating fine-grained or micro plans with data-to-text generation. As part of this, we generate a micro plan highlighting which records should be mentioned and in which order, and then generate the document while taking the micro plan into account. We then show how data-to-text generation can benefit from higher-level latent entity planning. Here, we make use of entity-specific representations which are dynamically updated. The text is generated conditioned on entity representations and the records corresponding to the entities by using hierarchical attention at each time step. We then combine planning with the high-level organization of entities, events, and their interactions. Such coarse-grained macro plans are learnt from data and given as input to the generator. Finally, we present work on making macro plans latent while incrementally generating a document paragraph by paragraph. We infer latent plans sequentially with a structured variational model while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Overall, our results show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output document.
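    The hedged sketch below illustrates the plan-then-generate split described above with deliberately tiny stand-ins: a record encoder, a scorer that produces a "micro plan" (which records to mention and in what order), and a toy decoder that attends over the planned records at each step. It is not the thesis code; the module names, dimensions and the greedy top-k planner are assumptions chosen only to make the two-stage interface explicit.

```python
# Hedged sketch of a plan-then-generate pipeline (not the thesis code).
import torch
import torch.nn as nn

class RecordEncoder(nn.Module):
    """Encode (entity, attribute, value) id triples into record vectors."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())

    def forward(self, records: torch.Tensor) -> torch.Tensor:
        e = self.embed(records)                        # (R, 3, dim)
        return self.mlp(e.flatten(1))                  # (R, dim)

class MicroPlanner(nn.Module):
    """Score records; the plan is the top-k records in score order."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, rec_enc: torch.Tensor, k: int) -> torch.Tensor:
        scores = self.score(rec_enc).squeeze(-1)       # (R,)
        return torch.topk(scores, k).indices           # ordered record indices

class PlanConditionedDecoder(nn.Module):
    """One GRU step per output token, attending over the planned records."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRUCell(2 * dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, plan_enc: torch.Tensor, prev_tokens: torch.Tensor) -> torch.Tensor:
        h = plan_enc.mean(0)                           # init state from the plan
        logits = []
        for tok in prev_tokens:
            attn = torch.softmax(plan_enc @ h, dim=0)  # attention over planned records
            ctx = attn @ plan_enc                      # (dim,)
            step_in = torch.cat([self.embed(tok), ctx]).unsqueeze(0)
            h = self.gru(step_in, h.unsqueeze(0)).squeeze(0)
            logits.append(self.out(h))
        return torch.stack(logits)                     # (T, vocab_size)

if __name__ == "__main__":
    torch.manual_seed(0)
    records = torch.randint(0, 100, (20, 3))           # 20 toy (entity, attr, value) records
    enc = RecordEncoder(100)(records)
    plan = MicroPlanner()(enc, k=5)                    # stage 1: micro plan
    logits = PlanConditionedDecoder(100)(enc[plan], torch.tensor([1, 2, 3]))  # stage 2: realize
    print(plan.tolist(), logits.shape)
```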

    The Adirondack Chronology

    The Adirondack Chronology is intended to be a useful resource for researchers and others interested in the Adirondacks and Adirondack history.

    The Role of Vocabularies in the Age of Data: The Question of Research Data

    Objective: This paper discusses the role of vocabularies in addressing the issues associated with Big Data. Methodology: The materials used are definitions of Big Data found in the literature, standards and technologies used in the Semantic Web and Linked Open Data, and the use case of a research dataset; we use the conceptual bases of semiotics and ontology to analyze the role of vocabularies in knowledge organization (KO), taking the assignment of subjects to documents as a special, limited use case that may be expanded within such a context. Results: We develop and expand the conception of data as an artificial, intentional construction that represents a property of an entity within a specific domain and serves as the essential component of Big Data. We present a comprehensive conceptualization of semantic expressivity and use it to classify the different vocabularies. We suggest and specify features of vocabularies that may be used within the context of the Semantic Web and Linked Open Data to assign machine-processable semantics to Big Data. We identify computational ontologies as a type of knowledge organization system with a higher degree of semantic expressivity. It is suggested that such themes should be incorporated into professional qualifications in KO.
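    As a small, hedged illustration of what assigning machine-processable semantics can look like with Semantic Web and Linked Open Data vocabularies, the sketch below describes a hypothetical research dataset with DCAT and Dublin Core terms using the rdflib library. The dataset URI and metadata values are invented for the example and are not drawn from the paper.

```python
# Hedged sketch: describe a hypothetical research dataset with the DCAT and
# Dublin Core vocabularies so its metadata carries machine-processable semantics.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

dataset = URIRef("https://example.org/dataset/field-survey-2021")  # hypothetical URI
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Field survey measurements 2021", lang="en")))
g.add((dataset, DCTERMS.subject, Literal("soil moisture")))        # assumed subject
g.add((dataset, DCAT.keyword, Literal("research data")))
g.add((dataset, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by/4.0/")))

# Serialize to Turtle: the same triples can be consumed by any Linked Data tool.
print(g.serialize(format="turtle"))
```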