11,786 research outputs found

    Tolerance analysis approach based on the classification of uncertainty (aleatory / epistemic)

    Uncertainty is ubiquitous in tolerance analysis problems. This paper deals with the formulation of tolerance analysis, and more particularly with the uncertainty that must be taken into account in the foundation of that formulation. It presents: a brief view of the classification of uncertainty, in which aleatory uncertainty arises from inherently random phenomena and epistemic uncertainty arises from a lack of knowledge; a formulation of the tolerance analysis problem based on this classification; and its development, in which aleatory uncertainty is modeled by probability distributions and epistemic uncertainty by intervals, with Monte Carlo simulation employed for the probabilistic analysis and nonlinear optimization for the interval analysis. “AHTOLA” project (ANR-11-MONU-013).
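
    A minimal sketch of the double-loop idea described above, under assumptions of this listing rather than the paper itself: aleatory dimensions are sampled by Monte Carlo, while the epistemic parameter, known only as an interval, is propagated by optimizing the response over its bounds. The gap function, tolerance threshold and all numerical values below are invented for illustration.

        # Sketch only: aleatory inputs -> Monte Carlo sampling; epistemic input -> interval,
        # propagated by (1-D bounded) optimization for each sample.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(0)

        def gap(x1, x2, t):
            """Hypothetical assembly response (functional requirement)."""
            return x1 - x2 + 0.5 * t

        TOL = 0.2                    # hypothetical acceptance threshold: |gap| <= TOL
        T_LO, T_HI = -0.1, 0.1       # epistemic parameter, known only as an interval
        N = 10_000                   # Monte Carlo sample size

        fail_lower = fail_upper = 0
        for _ in range(N):
            # aleatory uncertainty: probability distributions for the dimensions
            x1 = rng.normal(10.00, 0.03)
            x2 = rng.normal(9.95, 0.03)
            # epistemic uncertainty: best / worst case over the interval
            worst = minimize_scalar(lambda t: -abs(gap(x1, x2, t)),
                                    bounds=(T_LO, T_HI), method="bounded")
            best = minimize_scalar(lambda t: abs(gap(x1, x2, t)),
                                   bounds=(T_LO, T_HI), method="bounded")
            fail_upper += (-worst.fun > TOL)   # fails for some value in the interval
            fail_lower += (best.fun > TOL)     # fails for every value in the interval

        print(f"non-conformance probability bounded by [{fail_lower/N:.4f}, {fail_upper/N:.4f}]")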

    A structured model metametadata technique to enhance semantic searching in metadata repository

    This paper discusses a novel technique for the semantic searching and retrieval of information about learning materials. A novel structured metametadata model has been created to provide the foundation for a semantic search engine to extract, match and map queries in order to retrieve relevant results. Metametadata encapsulate metadata instances by using the properties and attributes provided by ontologies, rather than by describing learning objects directly. Ontological views help interpret the pedagogical content of the metadata extracted from learning objects by using the controlled vocabularies identified in the metametadata taxonomy. The use of metametadata (based on the metametadata taxonomy), supported by the ontologies, has contributed towards a novel semantic searching mechanism. This research presents a metametadata model for identifying semantics and describing learning objects in finer-grained detail, allowing intelligent retrieval by automated search and retrieval software.
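
    As a toy illustration of this kind of mechanism (not the authors' model; the class, fields and vocabulary entries are hypothetical), the sketch below wraps a metadata instance in a "metametadata" record carrying controlled-vocabulary terms, and expands a query through that vocabulary before matching.

        from dataclasses import dataclass, field

        # Hypothetical controlled vocabulary: a term and the terms it relates to.
        CONTROLLED_VOCAB = {
            "recursion": {"algorithms", "functions"},
            "sorting": {"algorithms"},
        }

        @dataclass
        class MetaMetadata:
            metadata_id: str                                     # the encapsulated metadata instance
            vocab_terms: set[str] = field(default_factory=set)   # terms supplied by the ontology

        def semantic_search(query: str, records: list[MetaMetadata]) -> list[str]:
            """Expand the query through the controlled vocabulary, then match records."""
            expanded = {query} | CONTROLLED_VOCAB.get(query, set())
            return [r.metadata_id for r in records if r.vocab_terms & expanded]

        records = [
            MetaMetadata("LOM-001", {"algorithms", "python"}),
            MetaMetadata("LOM-002", {"poetry"}),
        ]
        print(semantic_search("recursion", records))   # -> ['LOM-001']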

    Towards an ontology of networked learning

    Networked learning, conceived of as networks of people, informational resources and technologies, constitutes what has been termed a ‘highly intertwined’ technology. In this paper we develop our earlier argument that sociotechnical networks can form the basis for a non-determinist theory of learning technology. Firstly, we argue that Kling et al.'s sociotechnical interaction network (STIN) is compatible with a realist ontology, drawing on Fleetwood’s ‘ontology of the real’ and Lawson’s proposition of the social nature of the artefact in networks of ‘positioned practices’. This, we suggest, gives a more secure basis for the STIN concept, and provides a clear alternative to actor network theory (ANT)-based views of sociotechnical networks, which do not distinguish between the influence of human and material agents. This also, we argue, provides an alternative way of anchoring concepts from the social informatics literature, often influenced by Giddens’ structuration theory, in ways that can help networked learning research. Secondly, we explore some potential implications of such an approach for theories of networked learning and learning more widely. In particular, we suggest a possible ontology of elements of learning technology. The use of the word ‘learning’ here is somewhat problematic, as it is routinely used rather loosely to describe changes at multiple levels that are likely to have rather different underlying mechanisms. A more thorough ontology of learning technology would allow us to distinguish between these uses and identify potentially distinct mechanisms at play in different forms and levels of learning. Thirdly, we use this approach to explore how viewing learning technologies as sociotechnical networks helps to clarify our thinking about identities in social networking for personal, learning and professional purposes.

    Towards MKM in the Large: Modular Representation and Scalable Software Architecture

    MKM has been defined as the quest for technologies to manage mathematical knowledge. MKM "in the small" is well-studied, so the real problem is to scale up to large, highly interconnected corpora: "MKM in the large". We contend that advances in two areas are needed to reach this goal. We need representation languages that support incremental processing of all primitive MKM operations, and we need software architectures and implementations that implement these operations scalably on large knowledge bases. We present instances of both in this paper: the MMT framework for modular theory-graphs that integrates meta-logical foundations, which forms the base of the next OMDoc version; and TNTBase, a versioned storage system for XML-based document formats. TNTBase becomes an MMT database by instantiating it with special MKM operations for MMT.
    Comment: To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 201
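
    As a rough illustration of the modular theory-graph idea (not the MMT/OMDoc implementation itself; names and types below are invented), theories can be represented as sets of declarations connected by include edges, with a simplified flattening operation that collects everything a theory can see.

        from dataclasses import dataclass, field

        @dataclass
        class Theory:
            name: str
            declarations: dict[str, str] = field(default_factory=dict)   # symbol -> type
            includes: list["Theory"] = field(default_factory=list)       # imported theories

        def flatten(theory: Theory) -> dict[str, str]:
            """Collect all declarations visible in a theory through its include edges."""
            decls: dict[str, str] = {}
            for imported in theory.includes:
                decls.update(flatten(imported))
            decls.update(theory.declarations)
            return decls

        magma = Theory("Magma", {"op": "U -> U -> U"})
        monoid = Theory("Monoid", {"unit": "U"}, includes=[magma])
        print(flatten(monoid))   # -> {'op': 'U -> U -> U', 'unit': 'U'}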

    Ontology mapping: the state of the art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights into the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
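
    One common way to concretize such a mapping, sketched below with hypothetical entity names, relations and confidence values (not drawn from the article), is as a set of correspondences between entities of two ontologies that a mediation layer can use to translate terms or queries.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Correspondence:
            source: str        # entity in ontology A
            target: str        # entity in ontology B
            relation: str      # e.g. "equivalent", "subsumes"
            confidence: float

        MAPPING = [
            Correspondence("A:Author", "B:Writer", "equivalent", 0.92),
            Correspondence("A:Article", "B:Publication", "subsumes", 0.78),
        ]

        def translate(entity: str, threshold: float = 0.8) -> str | None:
            """Rewrite an ontology-A entity into ontology-B terms if a confident mapping exists."""
            for c in MAPPING:
                if c.source == entity and c.confidence >= threshold:
                    return c.target
            return None

        print(translate("A:Author"))    # -> 'B:Writer'
        print(translate("A:Article"))   # -> None (below the confidence threshold)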

    From SMART to agent systems development

    In order for agent-oriented software engineering to prove effective, it must use principled notions of agents that enable specification and reasoning, while still considering routes to practical implementation. This paper deals with the issue of individual agent specification and construction, starting from the conceptual basis provided by the SMART agent framework. SMART offers a descriptive specification of an agent architecture but omits consideration of issues relating to construction and control. In response, we introduce two new views to complement SMART: a behavioural specification and a structural specification which, together, determine the components that make up an agent and how they operate. In this way, we move from abstract agent system specification to practical implementation. These three aspects are combined to create an agent construction model, actSMART, which is then used to define the AgentSpeak(L) architecture in order to illustrate the application of actSMART.
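
    Purely to illustrate the structural/behavioural split described above (this is not the actSMART model; the components and control loop below are invented), one can think of the structural specification as the named components an agent holds and the behavioural specification as the ordered steps applied to them each control cycle.

        from typing import Callable

        # structural specification: which components make up the agent
        agent_structure = {"beliefs": set(), "goals": {"greet"}}

        # behavioural specification: how those components operate each cycle
        def perceive(agent: dict) -> None:
            agent["beliefs"].add("user_present")          # toy perception step

        def act(agent: dict) -> None:
            if "user_present" in agent["beliefs"] and "greet" in agent["goals"]:
                print("hello")
                agent["goals"].discard("greet")

        behaviour: list[Callable[[dict], None]] = [perceive, act]

        def run_cycle(agent: dict, steps: list[Callable[[dict], None]]) -> None:
            """One control cycle: apply each behavioural step to the agent's structure."""
            for step in steps:
                step(agent)

        run_cycle(agent_structure, behaviour)   # -> prints "hello"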

    Co-creating an educational space

    In this paper I generate my living educational theory as an explanation of my educational influences in learning as I research my tutoring with practitioner researchers from a variety of workplace backgrounds. I will show how I have closely interrelated the teaching, learning and research processes by providing opportunities for participants to accept responsibility for their own learning and to develop their capacity as learners and researchers. My PhD enquiry, ‘How am I creating a pedagogy of the unique through a web of betweenness?’ (Farren, 2006), was integral to the development of my own practice as a higher education educator. I clarified the meaning of my embodied values in the course of their emergence in practice. I try to provide an educational space where individuals can create knowledge in collaboration with others. I believe dialogue is fundamental to the learning process: it is a way of opening up to questions and assumptions rather than accepting ready-made solutions. The originality of the contribution lies in the constellation of values and understandings I use as explanatory principles in my explanations of educational influence. This constellation includes the unusual combination of an educational response to the flow of energy and meaning in Celtic spirituality and the educational opportunities for learning opened up by digital technology.

    Challenges in Complex Systems Science

    FuturICT's foundations are social science, complex systems science, and ICT. The main concerns and challenges in the science of complex systems in the context of FuturICT are laid out in this paper, with special emphasis on the complex systems route to the social sciences. These include complex systems having: many heterogeneous interacting parts; multiple scales; complicated transition laws; unexpected or unpredicted emergence; sensitive dependence on initial conditions; path-dependent dynamics; networked hierarchical connectivities; interaction of autonomous agents; self-organisation; non-equilibrium dynamics; combinatorial explosion; adaptivity to changing environments; co-evolving subsystems; ill-defined boundaries; and multilevel dynamics. In this context, science is seen as the process of abstracting the dynamics of systems from data. This presents many challenges, including: data gathering by large-scale experiment, participatory sensing and social computation; managing huge distributed, dynamic and heterogeneous databases; moving from data to dynamical models, going beyond correlations to cause-effect relationships, understanding the relationship between simple and comprehensive models with appropriate choices of variables, ensemble modeling and data assimilation, and modeling systems of systems of systems with many levels between micro and macro; and formulating new approaches to prediction, forecasting, and risk, especially in systems that can reflect on and change their behaviour in response to predictions, and systems whose apparently predictable behaviour is disrupted by apparently unpredictable rare or extreme events. These challenges are part of the FuturICT agenda.
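
    As a toy illustration of one item in that list, sensitive dependence on initial conditions (this example is not from the paper): two trajectories of the chaotic logistic map started 1e-9 apart diverge to order-one differences within a few dozen steps.

        def logistic(x: float, r: float = 4.0) -> float:
            """One step of the logistic map, chaotic at r = 4."""
            return r * x * (1.0 - x)

        x, y = 0.2, 0.2 + 1e-9          # two almost identical initial conditions
        for step in range(1, 51):
            x, y = logistic(x), logistic(y)
            if step % 10 == 0:
                print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")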

    Collaborative Verification-Driven Engineering of Hybrid Systems

    Hybrid systems with both discrete and continuous dynamics are an important model for real-world cyber-physical systems. The key challenge is to ensure their correct functioning w.r.t. safety requirements. Promising techniques to ensure safety seem to be model-driven engineering to develop hybrid systems in a well-defined and traceable manner, and formal verification to prove their correctness. Their combination forms the vision of verification-driven engineering. Often, hybrid systems are rather complex in that they require expertise from many domains (e.g., robotics, control systems, computer science, software engineering, and mechanical engineering). Moreover, despite the remarkable progress in automating formal verification of hybrid systems, the construction of proofs of complex systems often requires nontrivial human guidance, since hybrid systems verification tools solve undecidable problems. It is, thus, not uncommon for development and verification teams to consist of many players with diverse expertise. This paper introduces a verification-driven engineering toolset that extends our previous work on hybrid and arithmetic verification with tools for (i) graphical (UML) and textual modeling of hybrid systems, (ii) exchanging and comparing models and proofs, and (iii) managing verification tasks. This toolset makes it easier to tackle large-scale verification tasks.
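
    A classic textbook example of the kind of system meant here, sketched purely for illustration and not part of the described toolset: a bouncing ball combines continuous dynamics (free fall between impacts) with a discrete transition (the bounce), and a simple safety requirement can be checked in simulation, although simulation is of course no substitute for the formal verification the paper targets. Parameters and the requirement are hypothetical.

        G, C, DT = 9.81, 0.8, 1e-3          # gravity, coefficient of restitution, time step

        def simulate(h0: float, t_end: float = 10.0) -> float:
            """Simulate the hybrid dynamics and return the maximum height reached."""
            h, v, t, h_max = h0, 0.0, 0.0, h0
            while t < t_end:
                # continuous evolution: free fall
                v -= G * DT
                h += v * DT
                # discrete jump -- guard: h <= 0, reset: v := -C * v
                if h <= 0.0:
                    h, v = 0.0, -C * v
                h_max = max(h_max, h)
                t += DT
            return h_max

        # hypothetical safety requirement: the ball never exceeds its drop height
        assert simulate(5.0) <= 5.0 + 1e-6
        print("requirement holds in this simulation (not a formal proof)")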