Content-specific auditing of a large scale anatomy ontology
Biomedical ontologies are envisioned to be usable in a range of research and clinical applications. The requirements for such uses include formal consistency, adequacy of coverage, and possibly other domain-specific constraints. In this report we describe a case study that illustrates how application-specific requirements may be used to identify modeling problems as well as data-entry errors in ontology building and evolution. We have begun a project to use the UW Foundational Model of Anatomy (FMA) in a clinical application in radiation therapy planning. This application focuses mainly (but not exclusively) on the representation of the lymphatic system in the FMA, in order to predict the spread of tumor cells to regional metastatic sites. It requires that the downstream relations associated with lymphatic system components link only to other lymphatic chains or vessels, that these relations be at the appropriate level of granularity, and that every path through the lymphatic system terminate at one of the two well-known trunks of the lymphatic system. A programmable query interface to the FMA makes it possible to write small programs that systematically audit the FMA for compliance with these constraints. We report on the design of some of these programs and the results we obtained by applying them to the lymphatic system. The algorithms and approach are generalizable to other network organ systems in the FMA, such as arteries and veins. In addition to illustrating exact constraint-checking methods, this work illustrates how the details of an application may reflect back a requirement to revise the design of the ontology itself.
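A minimal sketch of such an audit program, assuming the relevant FMA relations have been exported into plain Python mappings and taking the two terminal trunks to be the thoracic duct and the right lymphatic duct; the function and structure names are illustrative placeholders, not the paper's actual programs or the FMA query interface.

```python
# Sketch of the two constraint checks described in the abstract. The FMA
# query layer is abstracted away: `downstream` is assumed to map each
# lymphatic structure to the set of structures its downstream relation
# points at, and `kind` maps each structure to its FMA type. All names
# here are hypothetical, not the actual FMA API.
ALLOWED_KINDS = {"Lymphatic chain", "Lymphatic vessel", "Lymphatic trunk"}
TERMINAL_TRUNKS = {"Thoracic duct", "Right lymphatic duct"}

def audit_lymphatics(downstream, kind):
    """Return (bad_links, dead_ends) for the two audited constraints."""
    # Constraint 1: downstream links may target only lymphatic structures
    # of the allowed kinds.
    bad_links = [(src, dst)
                 for src, targets in downstream.items()
                 for dst in targets
                 if kind.get(dst) not in ALLOWED_KINDS]
    # Constraint 2: every path must end at one of the two terminal trunks,
    # so a structure with no downstream link that is not itself a terminal
    # trunk marks a path that terminates prematurely.
    dead_ends = [src for src, targets in downstream.items()
                 if not targets and src not in TERMINAL_TRUNKS]
    return bad_links, dead_ends
```

A structure reported in `dead_ends` either genuinely lacks downstream data or reflects a modeling gap of the kind the paper uses to feed application requirements back into the ontology's design.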
Policies and People: A Review of Neoliberalism and Educational Technologies in P-12 Education Research
Accountability regimes, value-added measures, vouchers: it is difficult to ignore the evidence of market-based rationalities in global discourses around education. Such rationalities rely heavily on Information and Communications Technologies (ICTs) for their propagation and maintenance under the guise of educational technologies, or ed-tech. The purpose of this literature review is to examine educational research focused on the role ICTs have played in the neoliberalization of education across the globe. The author contends that future inquiry needs to substantiate the broad claims about the pernicious effects of neoliberalized educational technologies by engaging more directly with those most affected: teachers and students.
Corporate Smart Content Evaluation
Nowadays, a wide range of information sources is available due to the evolution of the web and the growth of collected data. Much of this information is consumable and usable by humans but not understandable or processable by machines. Some data may be directly accessible in web pages or via data feeds, but most of the meaningful existing data is hidden within deep-web databases and enterprise information systems. Besides the inability to access a wide range of data, manual processing by humans is effortful, error-prone, and no longer adequate. Semantic web technologies deliver capabilities for machine-readable, exchangeable content and metadata, enabling automatic processing of content. The enrichment of heterogeneous data with background knowledge described in ontologies promotes reusability and supports automatic processing of data. The establishment of "Corporate Smart Content" (CSC), semantically enriched data with high information content and sufficient benefits in economic areas, is the main focus of this study. We describe three current research areas in the field of CSC, covering scenarios and datasets applicable to corporate applications, algorithms, and research. Aspect-oriented Ontology Development advances modular ontology development and partial reuse of existing ontological knowledge. Complex Entity Recognition extends traditional entity recognition techniques to recognize clusters of related textual information about entities. Semantic Pattern Mining combines semantic web technologies with pattern learning to mine for complex models by attaching background knowledge (a toy sketch follows this abstract). This study introduces the aforementioned topics by analyzing applicable scenarios with an economic and industrial focus, as well as research emphases. Furthermore, a collection of existing datasets for the given areas of interest is presented and evaluated. The target audience includes researchers and developers of CSC technologies: people interested in semantic web features, ontology development, automation, and extracting and mining valuable information in corporate environments. The aim of this study is to provide a comprehensive and broad overview of the three topics, assist decision making in relevant scenarios, and help with choosing practical datasets for evaluating custom problem statements. Detailed descriptions of the datasets' attributes and metadata should serve as a starting point for individual ideas and approaches.
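As an illustration of the Semantic Pattern Mining idea referenced above, the following toy sketch generalises entity-level transactions with ontology background knowledge before frequent-pattern counting, so patterns can surface at the class level even when individual entities are rare. The data, the `ONTOLOGY` mapping, and both helpers are invented for illustration and are not from the study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical background knowledge: entity -> ontology class.
ONTOLOGY = {"ACME GmbH": "Company", "Globex AG": "Company",
            "Berlin": "City", "Munich": "City"}

def generalise(transaction):
    """Replace each entity with its ontology class where one is known."""
    return frozenset(ONTOLOGY.get(e, e) for e in transaction)

def frequent_pairs(transactions, min_support=2):
    """Count co-occurring class pairs across generalised transactions."""
    counts = Counter(pair
                     for t in transactions
                     for pair in combinations(sorted(generalise(t)), 2))
    return {pair: n for pair, n in counts.items() if n >= min_support}

transactions = [{"ACME GmbH", "Berlin"}, {"Globex AG", "Munich"}]
print(frequent_pairs(transactions))  # {('City', 'Company'): 2}
```

Neither entity pair recurs, but after generalisation the class-level pattern (City, Company) reaches the support threshold, which is the benefit of attaching background knowledge to the mining step.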
Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making and a wide range of Artificial Intelligence (AI) services across major corporations such as Google, Walmart, and Airbnb. KGs complement Machine Learning (ML) algorithms by providing data context and semantics, thereby enabling further inference and question-answering capabilities (see the sketch after this abstract). The integration of KGs with neural learning (e.g., Large Language Models (LLMs)) is currently a topic of active research, commonly named neuro-symbolic AI. Despite the numerous benefits of KG-based AI, its growing ubiquity within online services may result in the loss of self-determination for citizens, a fundamental societal issue. The more we rely on these technologies, which are often centralised, the less citizens will be able to determine their own destinies. To counter this threat, AI regulation, such as the European Union (EU) AI Act, is being proposed in certain regions. Such regulation sets out what technologists need to do, leading to questions such as: How can the output of AI systems be trusted? What is needed to ensure that the data fuelling these artefacts, and their inner workings, are transparent? How can AI be made accountable for its decision-making? This paper conceptualises the foundational topics and research pillars needed to support KG-based AI for self-determination. Drawing upon this conceptual framework, challenges and opportunities for citizen self-determination are illustrated and analysed in a real-world scenario. As a result, we propose a research agenda aimed at accomplishing the recommended objectives.
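As a small illustration of how a KG supplies the context and semantics mentioned above, the sketch below uses rdflib to answer a question via a transitive RDFS subclass query; the example vocabulary and data are invented, not taken from the paper.

```python
# Minimal sketch: no triple states that ex:acme is an Organisation, but a
# SPARQL property path over rdfs:subClassOf infers it from background
# knowledge. Vocabulary and data are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.acme, RDF.type, EX.Company))                 # asserted fact
g.add((EX.Company, RDFS.subClassOf, EX.Organisation))  # background knowledge

# The rdfs:subClassOf* property path performs the inference step.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?x WHERE {
  ?x a ?c .
  ?c rdfs:subClassOf* <http://example.org/Organisation> .
}
"""
for row in g.query(query):
    print(row.x)  # -> http://example.org/acme
```

This is the kind of semantics-backed answer that a purely statistical model cannot justify, which is why the paper treats KGs as a substrate for trust and accountability in AI decision-making.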