
    Understanding Orientation and Mobility learning and teaching for primary students with vision impairment: a qualitative inquiry

    Orientation and Mobility is a uniquely crafted pedagogical practice blending specific micro-teaching skills to enable students with vision impairment to achieve functional interpretation of extra-personal and peri-personal space. Linked to student wellbeing, social participation, employment, and self-determination, Orientation and Mobility is a cornerstone of equity and access for students with vision impairment. Despite this, little is known in mainstream primary education about Orientation and Mobility learning and teaching and how it aligns with the Australian Curriculum. Orientation and Mobility learning and teaching is examined from the perspectives of three female primary school students with vision impairment, a parent, a teacher, the researcher, and a panel of Orientation and Mobility specialists. These perspectives are interwoven with a detailed reflexive interrogation of the Orientation and Mobility lessons over one school semester within the contexts of the Far North and North Queensland Department of Education regions and the Australian Curriculum. This study explores how one Queensland Orientation and Mobility teacher, the researcher, explicitly communicates non-visual, visual, tactile, and auditory concepts to primary school students with vision impairment. Drawing on Bronfenbrenner's bioecological systems theory, the Orientation and Mobility learning experiences are captured through an interpretative methodology comprising narrative inquiry and autoethnography, both underpinned by hermeneutic phenomenology. Insider-researcher data are gathered from semi-structured interviews, online panel responses, and audio recordings of the Orientation and Mobility lessons. Autoethnographic field notes, document materials, and reflexive teaching journals support the thematic and discourse analysis. Results confirm that the non-expert participants had a substantial lack of awareness of the impact of vision impairment on learning and development, and of the potential contribution of Orientation and Mobility. Systemic and cultural barriers to equitable inclusive education for these North and Far North Department of Education students with vision impairment were uncovered. Orientation and Mobility learning and teaching was clearly shown to overlap with and embed content from the Australian Curriculum. A key finding was the isolation of a core set of micro-teaching skills pertinent to Orientation and Mobility learning and teaching: teacher attention to dialogic language and feedback, extended interaction wait times, and shared attention to spatial and contextual environments within the Orientation and Mobility lesson. As this skill set can be used to design learning and teaching experiences that explicitly scaffold the development of non-visual, visual, tactile, auditory, and kinaesthetic precursor concepts, the name practice architecture was appropriated for it. An important practical outcome of the research was the formulation of an ontogenetic model of Orientation and Mobility learning and teaching. This model, which closely follows the natural development of each student with vision impairment, may serve as a tool that enables teachers to chart each student's biophysical attributes more systematically, thereby providing a framework for designing learning and teaching interactions with students with vision impairment. The ontogenetic framework has the potential to facilitate greater integration of what and how learning occurs in Orientation and Mobility with what and how learning might occur in the regular classroom.

    Moving towards the semantic web: enabling new technologies through the semantic annotation of social contents.

    Social Web technologies have caused an exponential growth of the documents available through the Web, making enormous amounts of textual electronic resources available. Users may be overwhelmed by such an amount of content, so the automatic analysis and exploitation of all this information is of interest to the data mining community. Data mining algorithms exploit features of entities in order to characterise, group, or classify them according to their resemblance. Data by itself does not carry any meaning; it needs to be interpreted to convey information. Classical data analysis methods did not aim to "understand" content: data were treated as meaningless numbers on which statistics were calculated to build models that were then interpreted manually by human domain experts. Nowadays, motivated by the Semantic Web, many researchers have proposed semantically grounded data classification and clustering methods that are able to exploit textual data at a conceptual level. However, these methods usually rely on pre-annotated inputs to be able to semantically interpret textual data such as the content of Web pages, and their usability depends on the linkage between data and its meaning. This work focuses on the development of a general methodology able to detect the most relevant features of a particular textual resource, finding out their semantics (associating them with concepts modelled in ontologies) and detecting its main topics. The proposed methods are unsupervised (avoiding the manual annotation bottleneck), domain-independent (applicable to any area of knowledge), and flexible (able to deal with heterogeneous resources: raw text documents, or semi-structured user-generated documents such as Wikipedia articles and short, noisy tweets). The methods have been evaluated in different fields (Tourism, Oncology). This work is a first step towards the automatic semantic annotation of documents, needed to pave the way towards the Semantic Web vision.
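
    As a rough illustration of the kind of pipeline described above (the mini-ontology, stopword list, and label-matching rule below are hypothetical stand-ins, not the thesis's actual method), the following Python sketch extracts salient terms from a text and links them to ontology concepts:

import re
from collections import Counter

# Hypothetical toy ontology: concept -> label variants (a real system would
# use a full OWL ontology and proper disambiguation, not bare label lookup).
ONTOLOGY = {
    "Hotel":  {"hotel", "hotels", "inn"},
    "Beach":  {"beach", "beaches", "seaside"},
    "Museum": {"museum", "museums", "gallery"},
}

STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "are", "here", "near"}

def annotate(text: str) -> dict:
    """Return ontology concepts matched in `text`, weighted by term frequency."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]
    freq = Counter(tokens)
    return {concept: hits
            for concept, labels in ONTOLOGY.items()
            if (hits := sum(freq[l] for l in labels))}

print(annotate("The hotel is near a quiet beach; beaches here are famous."))
# -> {'Hotel': 1, 'Beach': 2}

    An actual implementation along the thesis's lines would be unsupervised and domain-independent, replacing the hard-coded dictionary with concepts mined from ontologies and relevance statistics.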

    A study of children's misconceptions in science and the effectiveness of a related programme of teacher training in Pakistan

    The study comprised an investigation of children's misconceptions in science, intended to provide a base for further research linked to a wider programme for the improvement of science education in Pakistan. The investigation covered the concepts of Force, Energy, Light, Work, and Electric Current, using the Interview-About-Instances (IAI) approach. It was discovered that children in Pakistan hold misconceptions similar to those held by children in other parts of the world. Then, three groups of science teachers were tested on the concept of Force after being given different levels of information about students' misconceptions. It was found that science teachers also hold misconceptions, and the performance of the three groups on the test was almost equal. Next, the teachers of the sample students were trained to reteach three concepts: Force, Energy, and Light. After re-teaching, students were retested using both IAI and multiple-choice instruments. The results showed that pupils' misconceptions persist despite re-teaching. Then, in order to confirm or refute these results more widely, a larger number of teachers and students were involved. The purpose of this part of the study was to discover whether in-depth teacher training can lead to more effective teaching. A special teacher training programme was developed, and the selected teachers were randomly distributed into three groups: group A was given in-depth training, group B was given simple training, and group C served as a control group. After training, teachers retaught the concepts of Force, Energy, and Light in their own schools, and students were tested using multiple-choice tests. It was found that group A differed significantly from groups B and C combined on only one subset of test items on the concept of Force. Also, the mean scores of students in group A on each test were higher than those of students in groups B and C. From these results it is argued that programmes can be organised to train science teachers to tackle effectively the problems arising from children's misconceptions. Finally, the study proposes a research project with the overall purpose of improving science education in Pakistan.

    Fuzzy natural language similarity measures through computing with words

    A vibrant area of research is the understanding of human language by machines so that they can engage in conversation with humans to achieve set goals. Human language is fuzzy by nature, with words meaning different things to different people depending on the context. Fuzzy words are words with a subjective meaning, typically used in everyday human natural language dialogue, and are often ambiguous and vague in meaning and dependent on an individual's perception. Fuzzy Sentence Similarity Measures (FSSMs) are algorithms that can compare two or more short texts which contain fuzzy words and return a numeric measure of similarity of meaning between them. The motivation for this research is to create a new FSSM called FUSE (FUzzy Similarity mEasure). FUSE is an ontology-based similarity measure that uses Interval Type-2 Fuzzy Sets to model relationships between categories of human perception-based words. Four versions of FUSE (FUSE_1.0 – FUSE_4.0) have been developed, investigating the presence of linguistic hedges, the expansion of fuzzy categories and their use in natural language, the incorporation of logical operators such as 'not', and the introduction of the fuzzy influence factor. FUSE has been compared to several state-of-the-art traditional semantic similarity measures (SSMs), which do not consider the presence of fuzzy words. FUSE has also been compared to the only published FSSM, FAST (Fuzzy Algorithm for Similarity Testing), which has a limited dictionary of fuzzy words and uses Type-1 Fuzzy Sets to model relationships between categories of human perception-based words. Results have shown that FUSE improves on the limitations of traditional SSMs and the FAST algorithm by achieving a higher correlation with the average human rating (AHR) on several published and gold-standard datasets. To validate FUSE in the context of a real-world application, versions of the algorithm were incorporated into a simple Question & Answer (Q&A) dialogue system (DS), referred to as FUSION, to evaluate the improvement in natural language understanding. FUSION was tested on two different scenarios using human participants, and the results were compared to a traditional SSM known as STASIS. The DS experiments showed an average True rating of 88.65% for FUSION, compared with 61.36% for STASIS. These results show that the FUSE algorithm can be used within real-world applications, and evaluation of the DS showed an improvement in natural language understanding, allowing semantic similarity to be calculated more accurately from natural user responses. The key contributions of this work can be summarised as follows: the development of a new methodology to model fuzzy words using Interval Type-2 Fuzzy Sets, leading to the creation of a fuzzy dictionary for nine fuzzy categories, a useful resource for other researchers in natural language processing and Computing with Words, with other fuzzy applications such as semantic clustering; the development of an FSSM known as FUSE, expanded over four versions, investigating the incorporation of linguistic hedges, the expansion of fuzzy categories and their use in natural language, the inclusion of logical operators such as 'not', and the introduction of the fuzzy influence factor; and the integration of the FUSE algorithm into a simple Q&A DS referred to as FUSION, demonstrating that an FSSM can be used in a real-world practical implementation, thereby making FUSE and its fuzzy dictionary generalisable to other applications.
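
    As a hedged sketch of the core idea (the word categories, membership intervals, and overlap formula below are illustrative assumptions, not FUSE's published fuzzy dictionary or measure), perception-based words can be modelled as interval type-2 fuzzy sets and compared through the overlap of their lower and upper membership functions:

import numpy as np

X = np.linspace(0, 10, 101)  # discretised universe of discourse ("size")

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Each word is bounded by a lower and an upper membership function, marking
# the footprint of uncertainty of an interval type-2 fuzzy set (made-up values).
WORDS = {
    "large": (tri(X, 5.5, 7.5, 9.0), tri(X, 5.0, 7.5, 9.5)),
    "huge":  (tri(X, 7.0, 9.0, 10.0), tri(X, 6.5, 9.0, 10.0)),
    "small": (tri(X, 0.5, 2.0, 3.5), tri(X, 0.0, 2.0, 4.0)),
}

def jaccard(u, v):
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()

def word_similarity(w1: str, w2: str) -> float:
    """Average Jaccard overlap of the two words' membership bounds."""
    (lo1, up1), (lo2, up2) = WORDS[w1], WORDS[w2]
    return 0.5 * (jaccard(lo1, lo2) + jaccard(up1, up2))

print(round(word_similarity("large", "huge"), 3))   # sizeable overlap
print(round(word_similarity("large", "small"), 3))  # 0.0: disjoint supports

    A sentence-level measure would then substitute such word-to-word scores wherever fuzzy words occur, alongside a conventional semantic similarity measure for the remaining words.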

    Semantic Similarity of Spatial Scenes

    The formalization of similarity in spatial information systems can unleash their functionality and contribute technology that is not only useful but also desirable to broad groups of users. As a paradigm for information retrieval, similarity supersedes tedious querying techniques and unveils novel ways for user-system interaction by naturally supporting modalities such as speech and sketching. As a tool within the scope of a broader objective, it can facilitate such diverse tasks as data integration, landmark determination, and prediction making. This potential motivated the development of several similarity models within the geospatial and computer science communities. Despite the merit of these studies, their cognitive plausibility can be limited due to neglect of well-established psychological principles about the properties and behaviors of similarity. Moreover, such approaches are typically guided by experience, intuition, and observation, thereby often relying on narrow perspectives or restrictive assumptions that produce inflexible and incompatible measures. This thesis consolidates such fragmentary efforts and integrates them, along with novel formalisms, into a scalable, comprehensive, and cognitively sensitive framework for similarity queries in spatial information systems. Three conceptually different similarity queries, at the levels of attributes, objects, and scenes, are distinguished. An analysis of the relationship between similarity and change provides a unifying basis for the approach and a theoretical foundation for measures satisfying important similarity properties such as asymmetry and context dependence. The classification of attributes into categories with common structural and cognitive characteristics drives the implementation of a small core of generic functions, able to perform any type of attribute value assessment. Appropriate techniques combine such atomic assessments to compute similarities at the object level and to handle more complex inquiries with multiple constraints. These techniques, along with a solid graph-theoretical methodology adapted to the particularities of the geospatial domain, provide the foundation for reasoning about scene similarity queries. Provisions are made so that all methods comply with major psychological findings about people's perceptions of similarity. An experimental evaluation supplies the main result of this thesis, which separates psychological findings with a major impact on the results from those that can be safely incorporated into the framework through computationally simpler alternatives.
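
    The asymmetry and context dependence singled out above echo Tversky's classic feature-based account of similarity. As a minimal sketch under that assumption (the feature sets and weights are hypothetical, not drawn from the thesis), Tversky's ratio model makes the measure asymmetric whenever the two weights differ:

# Tversky's ratio model over feature sets; with alpha != beta the measure is
# asymmetric, i.e. tversky(a, b) != tversky(b, a) in general.
def tversky(a: set, b: set, alpha: float = 0.8, beta: float = 0.2) -> float:
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Hypothetical spatial-object feature sets, for illustration only.
pond = {"water", "natural", "small", "still"}
lake = {"water", "natural", "large", "still", "navigable"}

print(round(tversky(pond, lake), 3))  # 0.714
print(round(tversky(lake, pond), 3))  # 0.625: the direction of comparison matters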

    International Workshop on Description Logics: Bonn, May 28/29, 1994

    This collection of papers forms the permanent record of the 1994 Description Logic Workshop, which was held at the Gustav Stresemann Institut in Bonn, Germany on 28 and 29 May 1994, immediately after the Fourth International Conference on Principles of Knowledge Representation and Reasoning. The workshop was set up to be as informal as possible, so this collection cannot hope to capture the discussions associated with the workshop. However, we hope that it will serve to remind participants of their discussions at the workshop, and provide non-participants with indications of the topics that were discussed. The workshop consisted of seven regular sessions and one panel session. Each regular session had about four short presentations on a single theme, but also had considerable time reserved for discussion. The themes of the sessions were Foundations of Description Logics; Architecture of Description Logics and Description Logic Systems; Language Extensions; Expanding Description Logics; General Applications of Description Logics; Natural Language Applications of Description Logics; Connections between Description Logics and Databases; and the Future of Description Logics and Description Logic Systems. The session on Foundations of Description Logics concentrated on computational properties of description logics, correspondences between description logics and other formalisms, and the semantics of description logics. There was also discussion of how to develop tractable description logics, for some notion of tractable, and of whether it is useful to worry about achieving tractability at all. Several of the participants argued in favour of a very expressive description logic, which obviously precludes tractability or even decidability of complete reasoning. Klaus Schild proposed that for some purposes one could employ "model checking" (i.e., a closed-world assumption) instead of "theorem proving", and showed that this is still tractable for very large languages. Maurizio Lenzerini's opinion was that it is important to have decidable languages; tractability cannot be achieved in several application areas because one needs very expressive constructs there, e.g., axioms, complex role constructors, and cycles with fixed-point semantics. For Bob MacGregor, not even decidability is an issue, since he claims that Loom's incomplete reasoner is sufficient for his applications. The discussion addressed the questions of whether there is still a need for foundations, and whether the work on foundations done so far really solved the problems that the designers of early DL systems had. Both questions were mostly answered in the affirmative, with the caveat that new research on foundations should make sure that it is concerned with "real" problems, and does not just generate new ones. In the session on Architecture of Description Logics and Description Logic Systems the participants considered different ways of putting together description logics and description logic systems. One way of doing this is to have a different kind of inference strategy for description logics, such as one based on intuitionistic logics or one based directly on rules of inference, thus allowing variant systems. Another way of modifying description logic systems is to divide them up in different ways, such as making a terminology consist of a schema portion and a view portion. Some discussion in this session concerned whether architectures should be influenced by application areas, or even by particular applications. There was considerable discussion at the workshop on how Description Logics should be extended or expanded to make them more useful. There are several methods to do this. The first is to extend the language of descriptions, e.g., to represent n-ary relations, temporal information, or whole-part relationships, all of which were discussed at the workshop. The second is to add another kind of reasoning, such as default reasoning, while still keeping the general framework of description logic reasoning. The third is to incorporate descriptions or description-like constructs into a larger reasoner, such as a first-order reasoner; this was the approach taken in OMEGA and is the approach being taken in the Loom project. There have been many extensions of the first two kinds proposed for description logics, including several presented at the workshop. One question discussed at the workshop was whether these extensions fit in well with the philosophy of description logics. Another was whether the presence of many proposals for extensions means that description logics are easy to expand, or that description logics are inadequate representation formalisms. The general consensus was that description logics adequately capture a certain kind of core reasoning and that they lend themselves to incorporation with other kinds of reasoning; care must be taken, however, to keep the extended versions true to the goals of description logics. The sessions on Applications of Description Logics had presentations on applications of description logics in various areas, including configuration, tutoring, natural language processing, and domain modeling. Most of these applications are research applications, funded by government research programs, and there was discussion of what is needed to have more fielded applications of description logics. The session on Connections between Description Logics and Databases considered three kinds of connections between the two: 1. using Description Logics for expressing database schemas, including local schemas, integrated schemas, and views, as well as integrity constraints and queries; 2. using Description Logic reasoning for various database-related reasoning tasks, including schema integration and validation, query optimization, and query validation and organization; and 3. making Description Logic reasoners more like Database Management Systems via optimization. All three of these connections are being actively investigated by the description logic community. The panel session on the Future of Description Logics and Description Logic Systems discussed where the future of description logics will lie. There seems to be a consensus that description logics must forge tighter connections with other formalisms, such as databases or object-oriented systems. In this way, perhaps, description logics will find more real applications.
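
    For readers unfamiliar with the formalism, a standard textbook-style example (not drawn from the workshop papers) shows how a terminology defines concepts from atomic concepts and roles, with subsumption as the core reasoning service:

\[
\mathit{Parent} \equiv \mathit{Person} \sqcap \exists\, \mathit{hasChild}.\mathit{Person},
\qquad
\mathit{Mother} \sqsubseteq \mathit{Woman} \sqcap \mathit{Parent}
\]

    Given such axioms, a description logic reasoner can decide subsumptions that are only implicit, for instance that \(\mathit{Mother} \sqsubseteq \mathit{Person}\).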

    The teaching of electronics in schools and further education: a case study in curriculum change.

    This case study describes the development of Electronics within the curriculum, in line with how Reid and Walker (1975, Case Studies in Curriculum Change) and Goodson (1983, School Subjects and Curriculum Change) discussed changes in terms of theories of curriculum change. Alternative definitions of the term innovation are reviewed, and for the purposes of this study a definition is adopted which includes syllabus change and major changes of scale and strategy. The study gives an outline of the major theories of innovation and implementation strategy. Features of centralisation and rationalisation are described insofar as these features led to current educational initiatives. An account is given of how Electronics developed as a topic within 'A' level Physics, as a subject within B.T.E.C. (previously O.N.C./T.E.C.), and as a separate G.C.E. subject. Data on examination entries in G.C.E. and C.S.E. Electronics are presented. These data are related to the size of L.E.A.s and the type of centre, and are used to explore the viability of G.C.E./G.C.S.E. provision in Electronics. Initiatives such as M.E.P., T.V.E.I., C.P.V.E., and S.S.C.R. are described, as they are expected to have a significant impact on the growth of Electronics. The position of Electronics within the curriculum and its educational value are discussed. Comment is made on the Systems and Components approaches to Electronics and on the importance of project work. Teacher difficulties with project work are noted, and suggestions are made on the use and range of equipment available so that a suitable teaching style may be developed.

    Similarity measures and diversity rankings for query-focused sentence extraction

    Query-focused sentence extraction generally refers to an extractive approach that selects a set of sentences responding to a specific information need. It is one of the major approaches employed in multi-document summarization, focused summarization, and complex question answering. The major advantage of most extractive methods over the natural language processing (NLP) intensive methods is that they are relatively simple, theoretically sound (drawing upon several supervised and unsupervised learning techniques), and often produce equally strong empirical performance. Many research areas, including information retrieval and text mining, have recently moved toward extractive query-focused sentence generation, as its outputs have great potential to support everyday information-seeking activities. Particularly, as more information is created and stored online, extractive summarization systems may quickly utilize several ubiquitous resources, such as Google search results and social media, to extract summaries that answer users' queries. This thesis explores how the performance of sentence extraction tasks can be improved to create higher-quality outputs. Specifically, two major areas are investigated. First, we examine the issue of natural language variation, which affects the similarity judgment of sentences. As sentences are much shorter than documents, they generally contain fewer occurring words. Moreover, the similarity notions of sentences differ from those of documents, as sentences tend to be very specific in meaning. Thus many document-level similarity measures are unlikely to perform well at this level. In this work, we address these issues in two application domains. First, we present a hybrid method, utilizing both unsupervised and supervised techniques, to compute the similarity of interrogative sentences for factoid question reuse. Next, we propose a novel structural similarity measure based on sentence semantics for paraphrase identification and textual entailment recognition tasks. The empirical evaluations suggest the effectiveness of the proposed methods in improving the accuracy of sentence similarity judgments. Furthermore, we examine the effects of the proposed similarity measure in two specific sentence extraction tasks: focused summarization and complex question answering. In conjunction with the proposed similarity measure, we also explore the issues of novelty, redundancy, and diversity in sentence extraction. To that end, we present a novel approach to promote diversity of extracted sets of sentences based on the negative endorsement principle. Negative-signed edges are employed to represent a redundancy relation between sentence nodes in graphs. Then, sentences are reranked according to the long-term negative endorsements from a random walk. Additionally, we propose a unified centrality ranking and diversity ranking based on the aforementioned principle. The results from a comprehensive evaluation confirm that the proposed methods perform competitively compared to many state-of-the-art methods.
    Ph.D., Information Science, Drexel University, 201
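
    As a toy sketch of the negative endorsement flavour (the relevance scores, redundancy weights, damping value, and update rule below are illustrative assumptions, not the thesis's algorithm), redundant sentences can penalise one another through negative-signed edges until a fixed point reranks them:

import numpy as np

relevance = np.array([0.90, 0.88, 0.50])  # query relevance per sentence
sim = np.array([[0.00, 0.90, 0.05],       # pairwise redundancy; sentences
                [0.90, 0.00, 0.05],       # 0 and 1 nearly duplicate each other
                [0.05, 0.05, 0.00]])
d = 0.9                                    # strength of negative endorsement

scores = relevance.copy()
for _ in range(50):                        # iterate to a fixed point
    # each sentence's score is its relevance minus the (damped) endorsement
    # it loses to sentences it is redundant with
    scores = relevance - d * sim @ np.clip(scores, 0.0, None)

print(np.argsort(-scores), np.round(scores, 3))
# -> [0 2 1]: sentence 2 overtakes the redundant sentence 1 despite
#    having much lower query relevance, which is the diversity effect.

    A unified ranking in the spirit of the thesis would combine positive centrality edges and negative redundancy edges in a single signed random walk rather than this simple penalisation loop.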