30 research outputs found

    RESOLVING DATABASE CONSTRAINT COLLISIONS USING IIS*CASE TOOL

    Integrated Information Systems*Case (IIS*Case) R.6.21 is a CASE tool that we developed to support automated database (db) schema design, based on a methodology of gradual integration of independently designed subschemas into a database schema. It provides comprehensive intelligent support for developing db schemas and enables designers to work together and cooperate in reaching the most appropriate solutions. Independent design of subschemas may lead to collisions in the expression of real-world constraints and business rules. IIS*Case uses specialized algorithms to check the consistency of constraints embedded in the database schema and its subschemas, and supports designers in reviewing and validating the results obtained after each step of the design process. The paper outlines the process of resolving collisions. A case study based on an imaginary production system illustrates the application of IIS*Case, and different outcomes and their consequences are presented.
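    The abstract does not describe the collision-checking algorithms themselves, but the underlying idea of detecting contradictory constraint declarations across independently designed subschemas can be sketched. The constraint representation and all names below are illustrative assumptions, not IIS*Case's actual model:

```python
# Hypothetical sketch: two subschemas declare constraints over shared
# attributes; a collision is a pair of declarations for the same
# attribute that cannot both hold in the integrated schema.

def find_collisions(subschema_a, subschema_b):
    """Return (attribute, kind) pairs whose declarations conflict."""
    collisions = []
    for attr, rules_a in subschema_a.items():
        rules_b = subschema_b.get(attr)
        if rules_b is None:
            continue  # attribute not shared, so no collision possible
        # "Not Null" in one subschema vs "Null allowed" in the other
        if rules_a.get("nullable") != rules_b.get("nullable"):
            collisions.append((attr, "nullability"))
        # key/unique in one subschema vs non-unique in the other
        if rules_a.get("unique") != rules_b.get("unique"):
            collisions.append((attr, "uniqueness"))
    return collisions

a = {"EmpId": {"nullable": False, "unique": True}}
b = {"EmpId": {"nullable": True, "unique": True}}
print(find_collisions(a, b))  # [('EmpId', 'nullability')]
```

    In an actual integration step, a detected collision would be handed back to the designers to decide which declaration survives, rather than being resolved automatically.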

    Developing a compositional ontology alignment framework for unifying business and engineering domains

    In the context of the Semantic Web, ontologies refer to the consensual and formal description of shared concepts in a domain. Ontologies are said to be a way to aid communication between humans and machines, and also between machines for agent communication. The importance of ontologies for providing a shared understanding of common domains, and as a means for data exchange at the syntactic and semantic level, has increased considerably in recent years. Ontology management therefore becomes a significant task in making distributed and heterogeneous knowledge bases available to end users. Ontology alignment is the process by which ontologies from different domains can be matched and processed further together, hence sharing a common understanding of the structure of information among different people. This research starts from a comprehensive review of the current development of ontologies, the concepts of ontology alignment, and relevant approaches. The first motivation of this work is to summarise the common features of ontology alignment and to identify underdeveloped areas of ontology alignment. It then examines how complex businesses can be designed and managed through semantic modelling, which helps define the data and the relationships between entities, provides the ability to abstract different kinds of data, and gives an understanding of how the data elements relate. The main contribution of this work is a framework for handling an important category of ontology alignment based on the logical composition of classes, especially the case in which a class from one domain becomes a logical prerequisite (assumption) of a class from a different domain (commitment), which holds only when the class from the first domain becomes valid. Under this logic, previously unalignable or misaligned classes can be aligned in a significantly improved manner.
    A well-known rely/guarantee method has been adopted to clearly express such relationships between newly alignable classes. The proposed methodology has been implemented and evaluated on a realistic case study.
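    The rely/guarantee pattern described above can be sketched in a few lines. This is an illustrative assumption of how an assumption/commitment alignment might be checked, not the thesis's implementation; the class names are invented:

```python
# Hypothetical sketch: an alignment holds only when the "rely" class
# from the first ontology (the assumption) is currently valid, which
# enables the "guarantee" class from the second ontology (commitment).

from dataclasses import dataclass

@dataclass
class Alignment:
    rely: str       # class from the first domain (assumption)
    guarantee: str  # class from the second domain (commitment)

def alignable(alignment, valid_classes):
    """The guarantee class counts as aligned only while its rely
    class is valid in the source domain."""
    return alignment.rely in valid_classes

pair = Alignment(rely="business:ApprovedDesign",
                 guarantee="eng:Manufacturable")
print(alignable(pair, {"business:ApprovedDesign"}))  # True
print(alignable(pair, set()))                        # False
```

    The point of the conditional check is exactly the one made in the abstract: classes that could not be matched unconditionally become alignable once the dependency between domains is made explicit.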

    Systems between information and knowledge : In a memory management model of an extended enterprise

    The research question of this thesis was how knowledge can be managed with information systems. Information systems can support, but not replace, knowledge management. Systems can mainly store epistemic organisational knowledge included in content, and process data and information. Additional value can be achieved by adding communication technology to systems. Not all communication, however, can be managed. A new layer between communication and manageable information was named knowformation. The knowledge management literature was surveyed, together with notions of information from philosophy, physics, communication theory, and information systems science. Positivism, post-positivism, and critical theory were studied, but knowformation in extended organisational memory appeared to be socially constructed. A memory management model of an extended enterprise (M3.exe) and the knowformation concept were the findings of iterative case studies covering data, information, and knowledge management systems. The cases varied from groups towards the extended organisation. Systems were investigated, and administrators, users (knowledge workers), and managers were interviewed. Model building required an alternative treatment of data, information, and knowledge instead of the traditional pyramid, and the explicit-tacit dichotomy was also reconsidered. As human knowledge is the final aim of all data and information in the systems, the distinction between the management of information and the management of people was harmonised. Information systems were classified as the core of organisational memory. The content of the systems lies, in practice, between communication and presentation. Firstly, the epistemic criterion of knowledge is required neither in the knowledge management literature nor of the content of the systems. Secondly, systems deal mostly with containers, while the knowledge management literature deals with applied knowledge.
    The construction of reality based on system content and communication also supports the knowformation concept. Knowformation belongs to the memory management model of an extended enterprise (M3.exe), which is divided into horizontal and vertical key dimensions. Vertically, processes deal with content that can be managed, whereas communication can only be supported, mainly by infrastructure. Horizontally, the right-hand side of the model contains systems and the left-hand side content, and the two should be independent of each other. A strategy based on the model was defined.
    Finnish summary (translated): The aim of the study was to determine how information systems can be used for organisational knowledge management. The conclusion is that systems can support, but not replace, knowledge management. Information systems serve mainly as a memory for an organisation's epistemic knowledge, storing processable information. Substantial added value is gained when communication technology supports the information systems. Communication, however, cannot be managed, since it rests not on processes but at most on workflow and freer forms of messaging. Between managed information and communication a layer named knowformation emerges, residing mainly in organisations' short-term memory. The new knowformation concept is the result of practical case studies; nothing comparable has been presented in earlier knowledge management research. As background to the knowledge management literature, classifications from physics, philosophy, communication, and computer science were examined. The case studies covered several data management, documentation, and knowledge management systems in intra-organisational groups, organisation-wide, and with the organisation's partners, examining both the properties of the systems and the experiences of different stakeholder groups. In the study, information systems were classified as the core of organisational memory.
    The knowformation layer is needed partly because the epistemic criterion of philosophical knowledge is not required of the content of the systems (nor in the concept definitions of the knowledge management literature), and partly because meaning changes when knowledge is reconstructed. A new perspective is needed for designing future systems, because the hierarchy of the data, information, and knowledge levels is not reflected in the socially constructed reality of the users of different system types. The philosophy-of-science scale from positivist to constructivist was essential in forming the model, and once its validity had been verified, the explicit-tacit dichotomy was remodelled using the knowformation concept. The new knowledge model and the knowformation concept are required in the main result of this work, the memory management model of an extended enterprise. At one extreme of the model is communication, which is supported, and at the other processes, which are managed. The two other entities, systems and their content, should be independent of each other. Knowformation lives at the implicit boundaries of these entities, in the grey area between information and knowledge.

    Sense and reference on the web

    This thesis builds a foundation for the philosophy of the Web by examining the crucial question: What does a Uniform Resource Identifier (URI) mean? Does it have a sense, and can it refer to things? A philosophical and historical introduction to the Web explains the primary purpose of the Web as a universal information space for naming and accessing information via URIs. A terminology, based on distinctions in philosophy, is employed to define precisely what is meant by information, language, representation, and reference. These terms are then employed to create a foundational ontology and principles of Web architecture. From this perspective, the Semantic Web is then viewed as the application of the principles of Web architecture to knowledge representation. However, the classical philosophical problems of sense and reference that have been the source of debate within the philosophy of language return. Three main positions are inspected: the logicist position, as exemplified by the descriptivist theory of reference and the first-generation Semantic Web; the direct reference position, as exemplified by Putnam and Kripke's causal theory of reference and the second-generation Linked Data initiative; and a Wittgensteinian position that views the Semantic Web as yet another public language. After identifying the public language position as the most promising, a solution of using people's everyday use of search engines as relevance feedback is proposed as a Wittgensteinian way to determine the sense of URIs. This solution is then evaluated on a sample of the Semantic Web discovered using queries from a hypertext search engine query log. The results show that the technique of using relevance feedback from hypertext Web searches to determine relevant Semantic Web URIs in response to user queries considerably improves baseline performance.
    Future work for the Web that follows from this argument and these experiments is detailed, and the outlines of a future philosophy of the Web are laid out.
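    The relevance-feedback idea can be illustrated abstractly: terms drawn from the hypertext results users selected for a query are used to score candidate Semantic Web URIs. The scoring function, URIs, and descriptions below are invented for illustration and are not the thesis's actual system:

```python
# Hypothetical sketch of relevance feedback: feedback terms harvested
# from hypertext pages users chose for a query re-rank candidate
# Semantic Web URIs, approximating each URI's "public language" sense.

from collections import Counter

def rerank(candidate_uris, feedback_terms):
    """Score each URI's description by overlap with feedback terms,
    most relevant URI first."""
    fb = Counter(feedback_terms)
    scores = {uri: sum(fb[t] for t in description.split())
              for uri, description in candidate_uris.items()}
    return sorted(scores, key=scores.get, reverse=True)

candidates = {
    "ex:Paris_France": "paris capital city france",
    "ex:Paris_Hilton": "paris hilton celebrity",
}
feedback = ["paris", "france", "capital", "capital"]
print(rerank(candidates, feedback))  # ['ex:Paris_France', 'ex:Paris_Hilton']
```

    The disambiguation shown (the place rather than the person) is the intuition behind using everyday search behaviour to fix the sense of a URI.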

    Science Self-efficacy In Tenth Grade Hispanic Female High School Students

    Historical data have demonstrated an underrepresentation of females and minorities in science, technology, engineering, and mathematics (STEM) professions. The purpose of the study was to consider the variables of gender and ethnicity collectively in relation to tenth grade Hispanic females' perception of their self-efficacy in science. The correlation of science self-efficacy to science academic achievement was also studied. Possible interventions for use with female Hispanic minority populations might help increase participation in STEM field preparation during the high school career. A population of 272 students, including 80 Hispanic females, was chosen through convenience sampling. Students were administered a 27-item questionnaire taken directly from the Smist (1993) Science Self-efficacy Questionnaire (SSEQ). Three science self-efficacy factors were successfully extracted: Academic Engagement Self-efficacy (M=42.57), Laboratory Self-efficacy (M=25.44), and Biology Self-efficacy (M=19.35). Each factor showed a significant positive correlation (p<.01) with each of the other two factors. ANOVA procedures compared all female subgroups in their science self-efficacy perceptions. Asian/Pacific and Native American females had higher self-efficacy mean scores than White, Black, and Hispanic females on all three extracted science self-efficacy factors, with Asian/Pacific females scoring highest. No statistically significant correlations were found between science self-efficacy and a measure of science achievement. Two high-ability and two low-ability Hispanic females were randomly chosen to participate in a brief structured interview. Three general themes emerged:
    Classroom Variables, Outside School Variables, and Personal Variables, which were subsequently divided into sub-themes influenced by participants' views of science. It was concluded that Hispanic females' science self-efficacy was among the lowest of the subgroups; Asian/Pacific and Native American females fared better than their White, Black, and Hispanic counterparts respectively. Triangulation of interview and quantitative data showed that classroom factors, specifically academic engagement, influenced participants' perceptions of science self-efficacy the most. Further studies on the impact of science self-efficacy on science achievement are suggested. Information gleaned from the continued study of science self-efficacy may help traditionally underrepresented racial/ethnic females persist in their science preparation and training, preventing them from leaving the STEM pipeline at this crucial juncture.
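    The factor-to-factor relationships the study reports are Pearson correlations over per-student factor scores. The sketch below computes one such correlation on invented scores (the study's raw data are not reproduced here):

```python
# Illustrative only: Pearson correlation between two self-efficacy
# factor scores, the statistic reported between the study's three
# extracted factors. The score vectors are made up for the example.

import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

engagement = [40, 44, 38, 47, 42]  # invented Academic Engagement scores
laboratory = [24, 27, 23, 28, 25]  # invented Laboratory scores
print(round(pearson(engagement, laboratory), 3))  # → 0.987
```

    A value near +1, as here, is what a "significant positive correlation" between two factors looks like numerically; significance itself would additionally require a p-value against the sample size.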

    Library Publishing Toolkit

    Both public and academic libraries are invested in the creation and distribution of information and digital content. They have morphed from keepers of content into content creators and curators, and seek best practices and efficient workflows with emerging publishing platforms and services. The Library Publishing Toolkit looks at the broad and varied landscape of library publishing through discussions, case studies, and shared resources. From supporting writers and authors in the public library setting to hosting open access journals and books, this collection examines opportunities for libraries to leverage their position and resources to create and provide access to content. The Library Publishing Toolkit is a project funded partially by Bibliographic Databases and Interlibrary Resources Sharing Program funds, which are administered and supported by the Rochester Regional Library Council. The toolkit is a united effort between Milne Library at SUNY Geneseo and the Monroe County Library System to identify trends in library publishing, seek out best practices to implement and support such programs, and share the best tools and resources. Our goals include: developing strategies libraries can use to identify types of publishing services and content that can be created and curated by libraries; assessing trends in digital content creation and publishing that can be useful in libraries and suggesting potential future projects; and identifying efficient workflows for distributing content for free online, with potential for some cost recovery in print-on-demand markets. A list of chapters is available in the full record.

    Handling Soundness and Quality to Improve Reliability in LPS - A Case Study of an Offshore Construction Site in Denmark
